Counterfactual Explanations for Face Forgery Detection via Adversarial Removal of Artifacts

Authors: Yang Li, Songlin Yang, Wei Wang, Ziwen He, Bo Peng, Jing Dong

Published: 2024-04-12 09:13:37+00:00

AI Summary

This paper proposes a novel counterfactual explanation method for face forgery detection that adversarially removes artifacts from deepfake images. The method inverts forgery images into the StyleGAN latent space, then adversarially optimizes the latent representations to mislead a target detection model; the removed artifacts are visualized, and the resulting images transfer as adversarial attacks to other detectors.

Abstract

Highly realistic AI-generated face forgeries, known as deepfakes, have raised serious social concerns. Although DNN-based face forgery detection models have achieved good performance, they are vulnerable to the latest generative methods, which leave fewer forgery traces, and to adversarial attacks. This limited generalization and robustness hinders the credibility of detection results and calls for more explanations. In this work, we provide counterfactual explanations for face forgery detection from an artifact-removal perspective. Specifically, we first invert the forgery images into the StyleGAN latent space, and then adversarially optimize their latent representations with discrimination supervision from the target detection model. We verify the effectiveness of the proposed explanations from two aspects: (1) Counterfactual Trace Visualization: the enhanced forgery images are useful for revealing artifacts when visually contrasted with the original images via two different visualization methods; (2) Transferable Adversarial Attacks: the adversarial forgery images generated by attacking one detection model are able to mislead other detection models, implying that the removed artifacts are general. Extensive experiments demonstrate that our method achieves an over 90% attack success rate and superior attack transferability. Compared with naive adversarial-noise methods, our method adopts both generative and discriminative model priors and optimizes the latent representations in a synthesis-by-analysis way, which constrains the search for counterfactual explanations to the natural face manifold. Thus, more general counterfactual traces can be found and better adversarial attack transferability can be achieved.
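
The optimization the abstract describes can be written schematically as follows. This is a sketch of the objective only: the reconstruction regularizer and its weight λ are assumptions made for illustration, not the paper's exact loss.

```latex
% x: forgery image, E: GAN inversion encoder, G: StyleGAN generator,
% D: target detection model, y_real: the "real" class label,
% \lambda: assumed trade-off weight (illustrative)
\[
  w^{*} \;=\; \arg\min_{w}\;
    \mathcal{L}_{\mathrm{adv}}\big(D(G(w)),\, y_{\mathrm{real}}\big)
    \;+\; \lambda \,\big\lVert G(w) - x \big\rVert_2^2,
  \qquad w \text{ initialized at } E(x).
\]
```

Because the search runs over StyleGAN latent codes rather than raw pixels, every candidate G(w) lies on the natural face manifold, which is what the abstract credits for the more general counterfactual traces and the improved attack transferability.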


Key findings
The method achieved an over 90% attack success rate and superior attack transferability. Visualizations effectively revealed artifacts, and removing them allowed the images to evade multiple detection models, highlighting the generality of the removed traces. The M-level style codes (those related to facial features) were found to be most relevant to forgery features.
Approach
The approach uses a fine-tuned encoder (e4e) to map forgery images into StyleGAN's latent space. It then adversarially optimizes these latent codes, guided by the classification output of a target detection model, to remove forgery artifacts. The process yields 'artifact-removed' images that are used both for counterfactual visualization and as transferable adversarial attacks, as in the sketch below.
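
A minimal PyTorch sketch of this loop, assuming pretrained, frozen encoder (e4e), generator (StyleGAN), and detector modules. The call signatures, loss weights, step count, and the convention that class 0 means "real" are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def remove_artifacts(forgery_image, encoder, generator, detector,
                     steps=100, lr=0.01, lambda_rec=1.0):
    """Adversarially optimize a StyleGAN latent code so the detector
    classifies the reconstruction as real, while the reconstruction
    stays close to the inverted forgery image (a sketch; all three
    models are assumed pretrained and frozen)."""
    # 1) Invert the forgery image into StyleGAN's latent space.
    with torch.no_grad():
        w = encoder(forgery_image)          # e.g. e4e inversion to W+ codes
    w = w.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    real_label = torch.zeros(1, dtype=torch.long)  # assume class 0 = "real"

    # 2) Synthesis-by-analysis: optimize the latent code, not the pixels.
    for _ in range(steps):
        x_hat = generator(w)                # synthesize image from latent code
        logits = detector(x_hat)            # discrimination supervision
        # Push the detector toward the "real" class (adversarial objective)...
        adv_loss = F.cross_entropy(logits, real_label)
        # ...while keeping the synthesis close to the original forgery
        # (an assumed L2 regularizer; the paper's exact loss may differ).
        rec_loss = F.mse_loss(x_hat, forgery_image)
        loss = adv_loss + lambda_rec * rec_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # 3) The final synthesis is the "artifact-removed" image.
    return generator(w).detach()
```

Contrasting forgery_image with the returned image (one simple option is a pixel-wise difference map) then gives the counterfactual trace visualization described above, and the returned image itself serves as the transferable adversarial example.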
Datasets
FaceForensics++ (FF++), DeepFake Detection Challenge (DFDC), Celeb-DF (v2)
Model(s)
EfficientNet-b4, Xception, MAT, RECCE, StyleGAN, e4e encoder
Author countries
China