Mover: Mask and Recovery based Facial Part Consistency Aware Method for Deepfake Video Detection

Authors: Juan Hu, Xin Liao, Difei Gao, Satoshi Tsutsui, Qian Wang, Zheng Qin, Mike Zheng Shou

Published: 2023-03-03 06:57:22+00:00

AI Summary

Mover is a novel deepfake detection model that leverages inconsistencies in facial parts, a common weakness in deepfake videos. It uses a masked autoencoder to learn facial part consistency from real images and then a dual-network approach to maximize the discrepancy between real and fake videos.

Abstract

Deepfake techniques have been widely used for malicious purposes, prompting extensive research interest in developing Deepfake detection methods. Deepfake manipulations typically involve tampering with facial parts, which can result in inconsistencies across different parts of the face. For instance, Deepfake techniques may change smiling lips to upset lips while the eyes remain smiling. Existing detection methods depend on specific indicators of forgery, which tend to disappear as forgery patterns improve. To address this limitation, we propose Mover, a new Deepfake detection model that exploits unspecific facial part inconsistencies, which are inevitable weaknesses of Deepfake videos. Mover randomly masks regions of interest (ROIs) and recovers faces to learn unspecific features, which makes fake faces difficult to recover while real faces can be recovered easily. Specifically, given a real face image, we first pretrain a masked autoencoder to learn facial part consistency by dividing faces into three parts and randomly masking ROIs, which are then recovered based on the unmasked facial parts. Furthermore, to maximize the discrepancy between real and fake videos, we propose a novel model with dual networks that utilize the pretrained encoder and masked autoencoder, respectively. 1) The pretrained encoder is finetuned to capture the encoding of inconsistent information in the given video. 2) The pretrained masked autoencoder is utilized for mapping faces and distinguishing real and fake videos. Our extensive experiments on standard benchmarks demonstrate that Mover is highly effective.
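To make the masking step concrete, here is a minimal illustrative sketch of dividing a face into three horizontal parts and randomly masking patch-sized ROIs within each part. This is an assumption-laden stand-in, not the authors' code: the paper's exact ROI definitions, patch size, and mask ratio may differ.

```python
import numpy as np

def mask_facial_rois(face, mask_ratio=0.5, parts=3, patch=16, rng=None):
    """Randomly zero out patch-sized ROIs inside each of `parts`
    horizontal bands of a face image (illustrative only; Mover's
    actual facial-part partition and masking strategy may differ)."""
    rng = np.random.default_rng(rng)
    masked = face.copy()
    h, w = face.shape[:2]
    band_h = h // parts
    for p in range(parts):
        top = p * band_h
        rows, cols = band_h // patch, w // patch  # patch grid in this band
        n_patches = rows * cols
        n_mask = int(mask_ratio * n_patches)
        # pick patches to mask uniformly at random within this facial part
        for i in rng.choice(n_patches, size=n_mask, replace=False):
            r, c = divmod(int(i), cols)
            y, x = top + r * patch, c * patch
            masked[y:y + patch, x:x + patch] = 0  # masked ROI
    return masked
```

The autoencoder is then trained to reconstruct the masked ROIs from the visible facial parts, which is what forces it to learn cross-part consistency.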


Key findings
Mover demonstrates high effectiveness in deepfake detection across various benchmark datasets, achieving state-of-the-art or comparable performance. The model shows robustness to post-processing operations and generalizes well to unseen deepfake patterns and datasets. Ablation studies confirm the importance of each component of the proposed architecture.
Approach
Mover uses a two-stage approach. The first stage pre-trains a masked autoencoder on real images to learn facial part consistency by masking and recovering facial regions. The second stage uses a dual-network architecture (Finetuning Network and Mapping Network) to leverage the pre-trained model and distinguish between real and fake videos based on the learned consistencies.
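The intuition behind the second stage can be sketched as an anomaly-scoring rule: a masked autoencoder pretrained on real faces recovers real faces with low error and fake faces (whose parts are inconsistent) with high error. The sketch below assumes a simple mean-squared-error score and a fixed threshold; the paper's actual dual-network scoring and decision rule are more involved.

```python
import numpy as np

def recovery_score(original, recovered):
    """Mean squared recovery error between a face and its
    mask-and-recover reconstruction. Real faces should score low,
    fake faces with inconsistent parts should score high."""
    return float(np.mean((np.asarray(original) - np.asarray(recovered)) ** 2))

def classify(score, threshold):
    """Threshold the recovery error (hypothetical decision rule;
    Mover combines dual-network features rather than a raw MSE cut)."""
    return "fake" if score > threshold else "real"
```

Usage: compute `recovery_score` per frame, aggregate over the video, and compare against a threshold calibrated on a validation set.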
Datasets
FaceForensics++, Celeb-DF, WildDeepfake, DFDC preview
Model(s)
Masked Autoencoder (MAE) with Vision Transformer encoder and Transformer decoder, ResNet-18
Author countries
China, Singapore