Multi-attentional Deepfake Detection

Authors: Hanqing Zhao, Wenbo Zhou, Dongdong Chen, Tianyi Wei, Weiming Zhang, Nenghai Yu

Published: 2021-03-03 13:56:14+00:00

AI Summary

This paper proposes a multi-attentional deepfake detection network that formulates deepfake detection as a fine-grained classification problem, addressing the limitations of existing methods that rely on a single global feature. The network combines multiple spatial attention heads that focus on different local discriminative regions, a textural feature enhancement block that amplifies subtle artifacts in shallow features, and a novel regional independence loss that improves training.

Abstract

Face forgery by deepfake is widely spread over the internet and has raised severe societal concerns. Recently, how to detect such forgery contents has become a hot research topic and many deepfake detection methods have been proposed. Most of them model deepfake detection as a vanilla binary classification problem, i.e., first use a backbone network to extract a global feature and then feed it into a binary classifier (real/fake). But since the difference between the real and fake images in this task is often subtle and local, we argue this vanilla solution is not optimal. In this paper, we instead formulate deepfake detection as a fine-grained classification problem and propose a new multi-attentional deepfake detection network. Specifically, it consists of three key components: 1) multiple spatial attention heads to make the network attend to different local parts; 2) a textural feature enhancement block to zoom in on the subtle artifacts in shallow features; 3) aggregation of the low-level textural features and high-level semantic features, guided by the attention maps. Moreover, to address the learning difficulty of this network, we further introduce a new regional independence loss and an attention-guided data augmentation strategy. Through extensive experiments on different datasets, we demonstrate the superiority of our method over the vanilla binary classifier counterparts, and achieve state-of-the-art performance.
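
To make the three components concrete, below is a minimal PyTorch-style sketch of how they might fit together. The module names, channel sizes, and the high-pass form of the texture enhancement are illustrative assumptions for exposition, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAttentionSketch(nn.Module):
    """Illustrative sketch (not the authors' code): multiple spatial
    attention heads + texture enhancement + attention-guided pooling."""

    def __init__(self, shallow_ch=64, deep_ch=512, num_heads=4, feat_dim=256):
        super().__init__()
        # 1) multiple spatial attention heads: 1x1 conv producing M maps
        self.attn = nn.Sequential(
            nn.Conv2d(deep_ch, num_heads, kernel_size=1),
            nn.BatchNorm2d(num_heads),
            nn.ReLU(inplace=True),
        )
        # 2) textural feature enhancement, approximated here as a high-pass
        #    residual (shallow feature minus its local average)
        self.texture_conv = nn.Conv2d(shallow_ch, feat_dim, kernel_size=1)

    def forward(self, shallow_feat, deep_feat):
        # shallow_feat: (B, 64, Hs, Ws) from an early backbone stage
        # deep_feat:    (B, 512, Hd, Wd) from a late backbone stage
        attn_maps = self.attn(deep_feat)                        # (B, M, Hd, Wd)
        attn_maps = F.interpolate(attn_maps, size=shallow_feat.shape[-2:],
                                  mode='bilinear', align_corners=False)
        # high-pass texture enhancement (assumed form)
        local_mean = F.avg_pool2d(shallow_feat, kernel_size=3,
                                  stride=1, padding=1)
        texture = self.texture_conv(shallow_feat - local_mean)  # (B, D, Hs, Ws)
        # 3) bilinear attention pooling: one pooled descriptor per head,
        #    i.e. an attention-weighted spatial average of the texture features
        B, M, H, W = attn_maps.shape
        pooled = torch.einsum('bmhw,bdhw->bmd', attn_maps, texture) / (H * W)
        return pooled.reshape(B, -1)  # concatenated local descriptors (B, M*D)
```

A binary real/fake classifier (for example, a linear layer over the concatenated descriptor) would complete the network.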


Key findings
The proposed multi-attentional network outperforms existing methods on FaceForensics++ and DFDC datasets, achieving state-of-the-art performance. The method also demonstrates good transferability when trained on one dataset and tested on another. The ablation study shows the effectiveness of multiple attention heads, the regional independence loss, and attention-guided data augmentation.
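The attention-guided data augmentation covered by the ablation can be pictured with a short hedged sketch: one attention map is chosen at random per image and the region it highlights is blurred, so the network cannot rely on any single local region. The soft-mask blending and the blur parameters below are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def attention_guided_augment(images, attn_maps, blur_sigma=3.0):
    """Hedged sketch of attention-guided data augmentation.

    images:    (B, 3, H, W) input batch
    attn_maps: (B, M, h, w) spatial attention maps from the detector
    """
    B, M = attn_maps.shape[:2]
    H, W = images.shape[-2:]
    # choose one attention head per image at random
    idx = torch.randint(0, M, (B,), device=images.device)
    chosen = attn_maps[torch.arange(B, device=images.device), idx]  # (B, h, w)
    # normalize each map to [0, 1] and upsample it to the image size
    chosen = chosen - chosen.amin(dim=(-2, -1), keepdim=True)
    chosen = chosen / chosen.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
    mask = F.interpolate(chosen.unsqueeze(1), size=(H, W),
                         mode='bilinear', align_corners=False)      # (B,1,H,W)
    # blur the whole image, then blend it in only where attention is high
    k = int(2 * round(3 * blur_sigma) + 1)  # odd kernel covering ~3 sigma
    blurred = TF.gaussian_blur(images, kernel_size=k, sigma=blur_sigma)
    return mask * blurred + (1 - mask) * images
```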
Approach
The proposed method uses a multi-attentional network with three key components: multiple spatial attention heads to focus on local regions, a textural feature enhancement block to preserve subtle artifacts, and bilinear attention pooling to aggregate low-level and high-level features. A regional independence loss and attention-guided data augmentation are also introduced to improve training.
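As a hedged illustration of the regional independence loss, the sketch below pulls each attention head's pooled feature toward that head's running center while pushing different heads' centers apart; the margin values and the EMA center update are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def regional_independence_loss(feats, centers, m_in=0.05, m_out=0.5, alpha=0.05):
    """Hedged sketch of a regional independence loss.

    feats:   (B, M, D) per-head pooled features
    centers: (M, D) per-head feature centers, kept as a plain buffer and
             updated here by EMA without gradients (assumed update rule)
    m_in:    margin inside which a feature counts as close to its center
    m_out:   minimum desired distance between different heads' centers
    """
    feats = F.normalize(feats, dim=-1)
    c = F.normalize(centers, dim=-1)
    # intra-region term: each head's feature stays near its own center
    d_in = (feats - c.unsqueeze(0)).norm(dim=-1)            # (B, M)
    loss_in = F.relu(d_in - m_in).mean()
    # inter-region term: different heads' centers stay far apart
    d_cc = torch.cdist(c, c)                                # (M, M)
    M = c.shape[0]
    off_diag = ~torch.eye(M, dtype=torch.bool, device=c.device)
    loss_out = F.relu(m_out - d_cc[off_diag]).mean()
    # EMA update of the centers, outside the autograd graph
    with torch.no_grad():
        centers.mul_(1 - alpha).add_(alpha * feats.mean(dim=0))
    return loss_in + loss_out
```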
Datasets
FaceForensics++, Celeb-DF, DFDC
Model(s)
EfficientNet-b4 (primary backbone); XceptionNet is also discussed
Author countries
China, United States