Artificial Intelligence Security Competition (AISC)

Authors: Yinpeng Dong, Peng Chen, Senyou Deng, Lianji L, Yi Sun, Hanyu Zhao, Jiaxing Li, Yunteng Tan, Xinyu Liu, Yangyi Dong, Enhui Xu, Jincai Xu, Shu Xu, Xuelin Fu, Changfeng Sun, Haoliang Han, Xuchong Zhang, Shen Chen, Zhimin Sun, Junyi Cao, Taiping Yao, Shouhong Ding, Yu Wu, Jian Lin, Tianpeng Wu, Ye Wang, Yu Fu, Lin Feng, Kangkang Gao, Zeyu Liu, Yuanzhe Pang, Chengqi Duan, Huipeng Zhou, Yajie Wang, Yuhang Zhao, Shangbo Wu, Haoran Lyu, Zhiyu Lin, Yifei Gao, Shuang Li, Haonan Wang, Jitao Sang, Chen Ma, Junhao Zheng, Yijia Li, Chao Shen, Chenhao Lin, Zhichao Cui, Guoshuai Liu, Huafeng Shi, Kun Hu, Mengxin Zhang

Published: 2022-12-07 02:45:27+00:00

AI Summary

This paper describes the Artificial Intelligence Security Competition (AISC), focusing on three tracks: Deepfake Security, Autonomous Driving Security, and Face Recognition Security. The main contribution is a comprehensive report detailing the competition rules and top-performing solutions for each track, advancing research in AI security.

Abstract

The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.


Key findings
The Deepfake Security Competition showed that multi-stage semi-supervised learning is effective for deepfake attribution and anomaly detection. The Autonomous Driving Security Competition and the Face Recognition Security Competition both demonstrated adversarial patch generation that successfully fools object detection models and face recognition systems, respectively.
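The adversarial patch attacks in the autonomous driving and face recognition tracks optimize patch pixels to suppress a model's detection score. The report does not publish attack code, so the following is a minimal sketch of the underlying idea under strong assumptions: a toy logistic "detector" stands in for the real YOLOv3 / Faster R-CNN or face recognition targets, and the `attack_patch` routine is a hypothetical PGD-style loop, not the teams' actual method.

```python
import numpy as np

# Toy stand-in "detector": a logistic score over flattened patch pixels.
# Real attacks in the competition targeted trained detection/recognition models.
rng = np.random.default_rng(0)
D = 64                       # hypothetical flattened patch size (8x8)
w = rng.normal(size=D)       # fixed weights standing in for a trained model

def detect_score(x):
    """Probability that the detector reports the object (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-w @ x))

def attack_patch(x, steps=200, lr=0.5):
    """PGD-style sketch: step the patch pixels against the detection score,
    projecting back to the valid pixel range [0, 1] after each update."""
    x = x.copy()
    for _ in range(steps):
        p = detect_score(x)
        grad = p * (1.0 - p) * w       # d(score)/dx for the logistic model
        x -= lr * grad                 # descend the detection score
        np.clip(x, 0.0, 1.0, out=x)   # keep pixels physically printable
    return x

clean = rng.uniform(size=D)
adv = attack_patch(clean)
```

In practice the same loop runs through a deep detector via automatic differentiation, often with extra terms for printability and robustness to viewpoint changes; the projection step is what keeps the patch a valid image.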
Approach
AISC evaluated a range of approaches to AI security: deepfake detection and attribution, adversarial attacks on autonomous driving systems, and adversarial attacks on face recognition systems. Top-performing teams in the deepfake track employed multi-stage semi-supervised learning, pseudo-labeling, and model ensembles, while the autonomous driving and face recognition tracks centered on generating adversarial patches that fool detection and recognition models.
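The pseudo-labeling step mentioned above works by training on the labeled set, predicting labels for unlabeled samples, and retraining on the union of labeled data and high-confidence predictions. As a hedged illustration only (the report does not give the teams' exact pipeline), here is a toy sketch with a nearest-centroid classifier standing in for the EfficientNet-style backbones, on assumed 1-D two-class data with an assumed confidence threshold:

```python
import numpy as np

# Hypothetical toy data: two well-separated classes in 1-D.
rng = np.random.default_rng(1)
x_lab = np.concatenate([rng.normal(-2, 0.5, 20), rng.normal(2, 0.5, 20)])
y_lab = np.array([0] * 20 + [1] * 20)
x_unl = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])

def fit_centroids(x, y):
    """Stage 1: fit a trivial classifier (one centroid per class) on labeled data."""
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict(cent, x):
    """Return predicted class and distance to the nearest centroid (confidence proxy)."""
    d = np.abs(x[:, None] - cent[None, :])
    return d.argmin(axis=1), d.min(axis=1)

cent = fit_centroids(x_lab, y_lab)

# Stage 2: pseudo-label unlabeled points, keeping only confident ones.
pred, dist = predict(cent, x_unl)
keep = dist < 1.0                        # assumed confidence threshold

# Stage 3: retrain on labeled data plus confident pseudo-labels.
x2 = np.concatenate([x_lab, x_unl[keep]])
y2 = np.concatenate([y_lab, pred[keep]])
cent2 = fit_centroids(x2, y2)
```

The same three-stage loop scales up directly: swap the centroid model for a deep network and the distance threshold for a softmax-confidence cutoff, and iterate the stages as in multi-stage semi-supervised training.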
Datasets
The Deepfake Security Competition used the Deepfakes Security Challenge (DFSC) dataset along with several academic datasets (FaceForensics++, Celeb-DF, DeeperForensics-1.0, ForgeryNet, FakeAVCeleb). The Autonomous Driving Security Competition used videos generated with the CARLA simulator. The Face Recognition Security Competition used the LFW dataset.
Model(s)
Various models were used across the tracks. Deepfake detection employed EfficientNet-B4 and ResNeSt. The autonomous driving track targeted YOLOv3 and Faster R-CNN. The face recognition track used models such as MobileFaceNet, GhostNet, ArcFace, Partial FC, CosFace, MagFace, and AdaFace.
Author countries
China