Detection of Deepfake Videos Using Long Distance Attention

Authors: Wei Lu, Lingyi Liu, Junwei Luo, Xianfeng Zhao, Yicong Zhou, Jiwu Huang

Published: 2021-06-24 08:33:32+00:00

AI Summary

This paper proposes a spatial-temporal model for deepfake video detection, treating the problem as a fine-grained classification task. The model uses a novel long-distance attention mechanism to capture subtle spatial and temporal artifacts, achieving state-of-the-art performance.

Abstract

With the rapid progress of deepfake techniques in recent years, facial video forgery can generate highly deceptive video content and poses severe security threats, making the detection of such forged videos increasingly urgent and challenging. Most existing detection methods treat the problem as a vanilla binary classification task. In this paper, the problem is instead treated as a special fine-grained classification problem, since the differences between fake and real faces are very subtle. It is observed that most existing face forgery methods leave common artifacts in both the spatial and temporal domains: generative defects within individual frames and inconsistencies between consecutive frames. A spatial-temporal model is therefore proposed with two components that capture spatial and temporal forgery traces from a global perspective. Both components are built on a novel long-distance attention mechanism: the spatial component captures artifacts within a single frame, while the temporal component captures artifacts across consecutive frames. Both generate attention maps in the form of patches. This attention mechanism has a broader field of view, which helps it assemble global information and extract local statistical information. Finally, the attention maps guide the network to focus on pivotal parts of the face, as in other fine-grained classification methods. Experimental results on different public datasets demonstrate that the proposed method achieves state-of-the-art performance and that the proposed long-distance attention method can effectively capture the pivotal parts of forged faces.
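To make the patch-wise attention idea concrete, here is a minimal sketch of a spatial long-distance attention module in PyTorch. The class name, `patch_size`, and the pooling-based patch tokens are illustrative assumptions, not the authors' implementation: the sketch simply lets every patch of a frame's feature map attend to every other patch, then re-weights the features by the resulting patch-level attention map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPatchAttention(nn.Module):
    """Sketch of patch-wise spatial attention with a global receptive field:
    every patch of a frame's feature map attends to every other patch."""

    def __init__(self, channels: int, patch_size: int = 7):
        super().__init__()
        self.patch_size = patch_size
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map from a CNN backbone
        b, c, h, w = feat.shape
        p = self.patch_size
        # Average-pool each non-overlapping patch into one token: (B, C, H/p, W/p)
        tokens = F.avg_pool2d(feat, p)
        q = self.query(tokens).flatten(2)            # (B, C//8, N)
        k = self.key(tokens).flatten(2)              # (B, C//8, N)
        # Pairwise patch affinities -> long-distance (global) attention
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)      # (B, N, N)
        # Collapse to one weight per patch and upsample back to (H, W)
        patch_weight = attn.mean(dim=1).view(b, 1, h // p, w // p)
        attn_map = F.interpolate(patch_weight, size=(h, w), mode="nearest")
        # Re-weight the features by the patch-form attention map
        return feat * attn_map

# Example usage on a dummy feature map:
# m = SpatialPatchAttention(64, patch_size=7)
# y = m(torch.randn(2, 64, 28, 28))   # -> (2, 64, 28, 28)
```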


Key findings
The proposed method achieves state-of-the-art performance on FaceForensics++ and Celeb-DF datasets. The long-distance attention mechanism effectively captures both local and global forgery artifacts. The model shows robustness across different deepfake generation methods and demonstrates good cross-dataset generalization.
Approach
The authors propose a spatial-temporal model with two components: one for capturing spatial artifacts in single frames and another for capturing temporal inconsistencies between frames. Both components utilize a novel long-distance attention mechanism to focus on pivotal regions for improved classification.
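As a companion sketch, the temporal component can be approximated as attention over the same patch position across consecutive frames, which is one plausible way to surface inter-frame inconsistencies. Again, every name and shape here is an assumption for illustration rather than the paper's code.

```python
import torch
import torch.nn as nn

class TemporalPatchAttention(nn.Module):
    """Sketch of temporal attention: for each spatial patch, every frame
    attends to the same patch in every other frame, at any temporal distance."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Linear(channels, channels // 8)
        self.key = nn.Linear(channels, channels // 8)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, N, C) — per-frame patch tokens, e.g. from a spatial branch
        q = self.query(feats)                        # (B, T, N, C//8)
        k = self.key(feats)                          # (B, T, N, C//8)
        # Compare every frame with every other frame, per patch: (B, N, T, T)
        attn = torch.softmax(torch.einsum("btnc,bsnc->bnts", q, k), dim=-1)
        # Re-weight each frame's patch tokens by their temporal attention
        return torch.einsum("bnts,bsnc->btnc", attn, feats)

# Example usage: 8 frames, 16 patches, 64 channels per patch token:
# m = TemporalPatchAttention(64)
# out = m(torch.randn(2, 8, 16, 64))   # -> (2, 8, 16, 64)
```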
Datasets
FaceForensics++ (FF++) and Celeb-DF
Model(s)
An Xception backbone extended with the novel long-distance attention mechanism to form the spatial-temporal model.
Author countries
China, Macau