From Specificity to Generality: Revisiting Generalizable Artifacts in Detecting Face Deepfakes

Authors: Long Ma, Zhiyuan Yan, Yize Chen, Jin Xu, Qinglang Guo, Hu Huang, Yong Liao, Hui Lin

Published: 2025-04-07 08:34:28+00:00

AI Summary

This paper proposes a universal deepfake detection framework focusing on common artifacts across various face forgeries. It categorizes these artifacts into Face Inconsistency Artifacts (FIA) and Up-Sampling Artifacts (USA) and introduces a data-level pseudo-fake creation framework to generate training data exhibiting only FIA and USA, enabling a standard image classifier to generalize well to unseen deepfakes.

Abstract

Detecting deepfakes has been an increasingly important topic, especially given the rapid development of AI generation techniques. In this paper, we ask: How can we build a universal detection framework that is effective for most facial deepfakes? One significant challenge is the wide variety of deepfake generators available, resulting in varying forgery artifacts (e.g., lighting inconsistency, color mismatch, etc.). But should we "teach" the detector to learn all these artifacts separately? It is impossible and impractical to elaborate on them all. So the core idea is to pinpoint the more common and general artifacts across different deepfakes. Accordingly, we categorize deepfake artifacts into two distinct yet complementary types: Face Inconsistency Artifacts (FIA) and Up-Sampling Artifacts (USA). FIA arise from the challenge of generating all intricate details, inevitably causing inconsistencies between the complex facial features and relatively uniform surrounding areas. USA, on the other hand, are the inevitable traces left by the generator's decoder during the up-sampling process. This categorization stems from the observation that all existing deepfakes typically exhibit one or both of these artifacts. To achieve this, we propose a new data-level pseudo-fake creation framework that constructs fake samples with only the FIA and USA, without introducing extra, less-general artifacts. Specifically, we employ a super-resolution model to simulate the USA, while designing a Blender module that uses image-level self-blending on diverse facial regions to create the FIA. We surprisingly found that, with this intuitive design, a standard image classifier trained only with our pseudo-fake data can non-trivially generalize well to unseen deepfakes.
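
The pseudo-fake construction described in the abstract can be approximated with a short script. The sketch below is only an illustration, not the authors' implementation: it stands in for the super-resolution model with a plain bicubic down-/up-sampling round trip to mimic USA, and approximates FIA with a soft-mask self-blend over a single rectangular facial region. The region box, blur kernel, and jitter strength are placeholder assumptions.

```python
# Minimal FIA-USA-style pseudo-fake creation sketch (illustrative only).
# Assumptions: one rectangular "facial region", bicubic resampling as a
# stand-in for the super-resolution model, and hand-picked blend parameters.
import cv2
import numpy as np


def simulate_usa(face: np.ndarray, scale: int = 4) -> np.ndarray:
    """Approximate up-sampling artifacts via a down-/up-sampling round trip."""
    h, w = face.shape[:2]
    low = cv2.resize(face, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
    # The paper uses a super-resolution network here; bicubic is a placeholder.
    return cv2.resize(low, (w, h), interpolation=cv2.INTER_CUBIC)


def simulate_fia(face: np.ndarray, region: tuple[int, int, int, int]) -> np.ndarray:
    """Approximate face-inconsistency artifacts via image-level self-blending."""
    x, y, rw, rh = region                      # placeholder facial-region box
    jittered = face.astype(np.float32)
    # Mild brightness/contrast jitter so the blended region slightly mismatches.
    jittered = np.clip(jittered * 1.05 + 5.0, 0, 255)

    mask = np.zeros(face.shape[:2], dtype=np.float32)
    mask[y:y + rh, x:x + rw] = 1.0
    mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]  # soft blending boundary

    blended = face.astype(np.float32) * (1 - mask) + jittered * mask
    return blended.astype(np.uint8)


def make_pseudo_fake(face: np.ndarray) -> np.ndarray:
    """Compose both artifact types: self-blend (FIA), then resampling (USA)."""
    h, w = face.shape[:2]
    region = (w // 4, h // 4, w // 2, h // 2)  # crude central "face" box
    return simulate_usa(simulate_fia(face, region))


if __name__ == "__main__":
    real = cv2.imread("real_face.png")          # hypothetical input path
    cv2.imwrite("pseudo_fake.png", make_pseudo_fake(real))
```

In practice, the pseudo-fakes produced this way are labeled as "fake" and mixed with untouched real images, so the classifier only ever sees the two general artifact types rather than generator-specific cues.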


Key findings
The proposed method outperforms state-of-the-art methods on multiple deepfake datasets, demonstrating strong generalization capabilities to both traditional and generative deepfakes. Ablation studies confirm the effectiveness of the proposed FIA-USA data augmentation, Automatic Forgery-aware Feature Selection (AFFS), and Region-aware Contrastive Regularization (RCR) components. The model also shows robustness to unseen perturbations.
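
This summary names AFFS and RCR but does not describe their internals, so the following is only a generic illustration of what a region-aware contrastive regularizer could look like: region-pooled features are pulled together when they share a real/fake label and pushed apart otherwise. It is a standard SupCon-style loss over regions, not the paper's RCR formulation; the tensor shapes and temperature are assumptions.

```python
# Generic illustration of a region-level supervised contrastive regularizer.
# Inputs: region_feats (B, R, D) region-pooled features; labels (B,) with
# 0 = real, 1 = fake. This is NOT the paper's RCR definition, only a sketch.
import torch
import torch.nn.functional as F


def region_contrastive_loss(region_feats: torch.Tensor,
                            labels: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    b, r, d = region_feats.shape
    feats = F.normalize(region_feats.reshape(b * r, d), dim=1)
    region_labels = labels.repeat_interleave(r)           # one label per region

    sim = feats @ feats.t() / temperature                  # (B*R, B*R)
    # Exclude self-similarity from numerator and denominator alike.
    self_mask = torch.eye(b * r, dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(self_mask, float("-inf"))

    pos_mask = (region_labels[:, None] == region_labels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over positives for each anchor region.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return loss.mean()
```
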
Approach
The approach uses a novel data augmentation method, FIA-USA, to generate pseudo-fake images containing only Face Inconsistency Artifacts (FIA) and Up-Sampling Artifacts (USA). A standard image classifier is trained on this data and surprisingly generalizes well to unseen deepfakes.
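
As a companion illustration, the training recipe implied here is ordinary binary classification on real versus pseudo-fake images. The schematic below assumes the timm EfficientNet-B4 backbone listed under Model(s) and a hypothetical `PseudoFakeDataset` that yields (image, label) pairs produced by an FIA-USA-style augmentation; it is a sketch, not the authors' training code.

```python
# Schematic training loop: a standard binary classifier on real vs. pseudo-fake
# images. `dataset` is assumed to be a hypothetical PseudoFakeDataset yielding
# (image_tensor, label) pairs, where pseudo-fakes carry label 1 and reals 0.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader


def train(dataset, epochs: int = 10, lr: float = 1e-4, device: str = "cuda"):
    model = timm.create_model("efficientnet_b4", pretrained=True, num_classes=1)
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()          # real = 0, pseudo-fake = 1
    loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

    for epoch in range(epochs):
        model.train()
        for images, labels in loader:
            images = images.to(device)
            labels = labels.float().to(device)
            logits = model(images).squeeze(1)   # (B, 1) -> (B,)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss={loss.item():.4f}")
    return model
```
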
Datasets
FaceForensics++ (FF++) (c23 version), DeepfakeDetection (DFD), Deepfake Detection Challenge (DFDC), Deepfake Detection Challenge preview (DFDCP), CelebDF (CDF), Diffusion Facial Forgery (DiFF), DF40
Model(s)
EfficientNet-B4 (primary backbone); ResNet and ResNet-34, among other models, explored in ablation studies
Author countries
China