DFGC 2022: The Second DeepFake Game Competition

Authors: Bo Peng, Wei Xiang, Yue Jiang, Wei Wang, Jing Dong, Zhenan Sun, Zhen Lei, Siwei Lyu

Published: 2022-06-30 09:13:06+00:00

AI Summary

This paper summarizes the second DeepFake Game Competition (DFGC 2022), which benchmarks state-of-the-art deepfake creation and detection methods in an adversarial setting. The competition used a new, more diverse video dataset and improved evaluation metrics to better reflect real-world scenarios and stimulate research in deepfake defense.

Abstract

This paper presents the summary report on our DFGC 2022 competition. DeepFake technology is rapidly evolving, and realistic face-swaps are becoming more deceptive and difficult to detect. At the same time, methods for detecting DeepFakes are also improving, creating a two-party game between DeepFake creators and defenders. This competition provides a common platform for benchmarking this game between current state-of-the-art DeepFake creation and detection methods. The main research question the competition seeks to answer is how the two adversaries fare when pitted against each other. This is the second edition, following last year's DFGC 2021, with a new, more diverse video dataset, a more realistic game setting, and more reasonable evaluation metrics. With this competition, we aim to stimulate research ideas for building better defenses against DeepFake threats. We also release the DFGC 2022 dataset, contributed by both our participants and ourselves, to enrich DeepFake data resources for the research community (https://github.com/NiCE-X/DFGC-2022).


Key findings

Top deepfake creation methods used improved merging and post-processing techniques. Top detection methods relied on ensembles of deep models trained on diverse datasets with data augmentation. Despite these advances, detection models still struggle to generalize, particularly against high-quality deepfakes and unseen data types.
Approach

The competition comprised two tracks: DeepFake Creation (DC) and DeepFake Detection (DD). The DC track evaluated the realism and anti-detection capability of submitted deepfakes, while the DD track assessed the performance of detection models on a dataset containing both real and fake videos, including fakes from the DC track. Evaluation combined subjective and objective metrics.
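The objective side of detection evaluation in benchmarks like this is typically scored with ROC-AUC over per-video fake probabilities. As a minimal pure-Python sketch (the labels and scores below are illustrative, not competition data), AUC can be computed via the rank-sum (Mann-Whitney U) formulation:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the rank-sum (Mann-Whitney U) formulation.

    labels: 1 for fake, 0 for real; scores: higher means 'more likely fake'.
    """
    # Sort sample indices by score, ascending.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # Assign 1-based ranks, averaging ranks over runs of tied scores.
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(r for r, y in zip(ranks, labels) if y == 1)
    # U statistic normalized by the number of (positive, negative) pairs.
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Illustrative toy scores: 6 videos, 3 fake (label 1) and 3 real (label 0).
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.3, 0.8, 0.6, 0.5]
print(roc_auc(labels, scores))  # 6 of 9 fake/real pairs correctly ordered -> 2/3
```

AUC of 1.0 means every fake scores above every real video; 0.5 is chance level.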
Datasets

DFGC 2022 dataset (resource dataset, public test set, private test set 1, private test set 2), including data contributed by participants and drawn from other sources such as YouTube-DF, FaceForensics++, Celeb-DF, DeeperForensics, KoDF, and ForgeryNet.
Model(s)

Detection-track participants used ensembles of ConvNeXt, Swin Transformer, EfficientNet, Vision Transformer, and ResNet50 models. Creation-track participants used DeepFaceLab, SimSwap, FaceShifter, FaceSwapper, MegaFS, and InfoSwap, among other methods.
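The ensembling mentioned above usually amounts to score-level fusion: each backbone produces a per-video fake probability, and the submission averages them (optionally weighted). A minimal sketch, where the model names and scores are illustrative placeholders rather than actual submissions:

```python
def ensemble_scores(per_model_scores, weights=None):
    """Fuse per-video fake probabilities across models by (weighted) averaging.

    per_model_scores: list of score lists, one per model, aligned by video.
    weights: optional per-model weights; defaults to a uniform average.
    """
    n_models = len(per_model_scores)
    if weights is None:
        weights = [1.0 / n_models] * n_models
    n_videos = len(per_model_scores[0])
    return [
        sum(w * m[i] for w, m in zip(weights, per_model_scores))
        for i in range(n_videos)
    ]

# Three hypothetical backbones scoring the same four videos:
convnext_scores = [0.9, 0.2, 0.8, 0.4]
swin_scores     = [0.8, 0.3, 0.7, 0.5]
effnet_scores   = [0.7, 0.1, 0.9, 0.3]
fused = ensemble_scores([convnext_scores, swin_scores, effnet_scores])
print(fused)  # approximately [0.8, 0.2, 0.8, 0.4]
```

Averaging probabilities tends to smooth out individual models' failure cases, which is one reason ensembles over diverse backbones generalize better than any single detector.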
Author countries

China, USA