Deepfake Video Detection with Spatiotemporal Dropout Transformer

Authors: Daichi Zhang, Fanzhao Lin, Yingying Hua, Pengju Wang, Dan Zeng, Shiming Ge

Published: 2022-07-14

AI Summary

This paper introduces a novel deepfake video detection approach using a spatiotemporal dropout transformer. It leverages patch-level spatiotemporal inconsistencies in facial regions across frames, improving robustness and generalization compared to existing methods.

Abstract

While the abuse of deepfake technology has caused serious concerns recently, detecting deepfake videos remains a challenge due to the highly photo-realistic synthesis of each frame. Existing image-level approaches often focus on a single frame and ignore the spatiotemporal cues hidden in deepfake videos, resulting in poor generalization and robustness. The key to a video-level detector is to fully exploit the spatiotemporal inconsistency distributed in local facial regions across different frames in deepfake videos. Inspired by this, this paper proposes a simple yet effective patch-level approach to facilitate deepfake video detection via a spatiotemporal dropout transformer. The approach reorganizes each input video into a bag of patches, which is then fed into a vision transformer to achieve a robust representation. Specifically, a spatiotemporal dropout operation is proposed to fully explore patch-level spatiotemporal cues and serve as effective data augmentation to further enhance the model's robustness and generalization ability. The operation is flexible and can be easily plugged into existing vision transformers. Extensive experiments demonstrate the effectiveness of our approach against 25 state-of-the-art methods, with impressive robustness, generalizability, and representation ability.
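
To make the "bag of patches" reorganization concrete, here is a minimal PyTorch sketch assuming T sampled face crops per video and a ViT-style 16x16 patch grid; the function name, tensor layout, and patch size are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def video_to_patch_bag(frames: torch.Tensor, patch_size: int = 16) -> torch.Tensor:
    """Split the aligned face crops of one video into a bag of patches.

    frames: (T, C, H, W) tensor of T face-cropped frames.
    Returns (T, N, D), where N = (H/patch_size) * (W/patch_size) patches
    per frame and D = C * patch_size**2 flattened pixels per patch.
    """
    T, C, H, W = frames.shape
    # Carve each frame into non-overlapping patch_size x patch_size tiles.
    p = frames.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    # (T, C, H/ps, W/ps, ps, ps) -> (T, H/ps, W/ps, C, ps, ps) -> (T, N, D)
    p = p.permute(0, 2, 3, 1, 4, 5).reshape(T, -1, C * patch_size * patch_size)
    return p

# Example: 8 sampled frames at 224x224 -> a (8, 196, 768) patch bag.
bag = video_to_patch_bag(torch.randn(8, 3, 224, 224))
```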


Key findings

The proposed spatiotemporal dropout transformer consistently outperforms 25 state-of-the-art methods on three benchmark datasets. It demonstrates impressive robustness to various input augmentations and strong cross-dataset generalization. A visualization of the learned representations shows distinct clustering of real and fake videos.
Approach

The approach reorganizes each input video into a "bag of patches", which is fed into a vision transformer. A spatiotemporal dropout operation explores patch-level spatiotemporal cues and doubles as data augmentation, enhancing robustness and generalization; a sketch of the operation is given below.
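
The exact form of the operation is not reproduced here, so the following PyTorch sketch only illustrates the general idea under stated assumptions: during training, a random subset of (frame, patch) positions is dropped from the bag, spanning both spatial and temporal axes; the drop_rate value and the choice to zero out (rather than remove) dropped tokens are illustrative.

```python
import torch

def spatiotemporal_dropout(bag: torch.Tensor, drop_rate: float = 0.3,
                           training: bool = True) -> torch.Tensor:
    """Randomly drop patch tokens across space and time (illustrative).

    bag: (B, T, N, D) batch of patch bags, i.e. B videos with T frames
    and N patches per frame. One mask is drawn over the (T, N) grid per
    video, so dropping acts jointly on the spatial and temporal axes.
    """
    if not training or drop_rate <= 0.0:
        return bag
    B, T, N, _ = bag.shape
    # Keep each (frame, patch) position with probability 1 - drop_rate.
    keep = (torch.rand(B, T, N, 1, device=bag.device) >= drop_rate).to(bag.dtype)
    return bag * keep  # assumption: dropped tokens are zeroed, not removed

# Usage: flatten the surviving bag into a (B, T*N, D) token sequence and
# feed it to any ViT-style encoder, e.g. ViT-Base-16.
tokens = spatiotemporal_dropout(torch.randn(2, 8, 196, 768)).flatten(1, 2)
```

Because the operation acts only on the input token sequence, it can be placed in front of an existing vision transformer without modifying the encoder itself, which is what makes it easy to plug in.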
Datasets

FaceForensics++ (FF++), DFDC, Celeb-DF (v2)
Model(s)

Vision Transformer (ViT-Base-16); a MobileNet-based detector for face detection
Author countries

China