FTFDNet: Learning to Detect Talking Face Video Manipulation with Tri-Modality Interaction

Authors: Ganglai Wang, Peng Zhang, Junwen Xiong, Feihan Yang, Wei Huang, Yufei Zha

Published: 2023-07-08 14:45:16+00:00

AI Summary

FTFDNet, a novel fake talking face detection network, is proposed to improve DeepFake video detection by incorporating visual, audio, and motion features using a cross-modal fusion (CMF) module and an audio-visual attention mechanism (AVAM). FTFDNet outperforms state-of-the-art methods on established DeepFake detection datasets (DFDC and DF-TIMIT) as well as on a newly constructed large-scale Fake Talking Face Detection Dataset (FTFDD).

Abstract

DeepFake-based digital facial forgery threatens public media security, especially when lip manipulation is used in talking face generation, which further increases the difficulty of fake video detection. Because only the lip shape is changed to match the given speech, identity-related facial features are hard to discriminate in such fake talking face videos. Combined with the lack of attention to the audio stream as prior knowledge, failure to detect fake talking face videos becomes almost inevitable. It is observed that the optical flow of a fake talking face video is disordered, especially in the lip region, while the optical flow of a real video changes regularly, which suggests that motion features derived from optical flow are useful for capturing manipulation cues. In this study, a fake talking face detection network (FTFDNet) is proposed by incorporating visual, audio, and motion features using an efficient cross-modal fusion (CMF) module. Furthermore, a novel audio-visual attention mechanism (AVAM) is proposed to discover more informative features; it can be seamlessly integrated into any audio-visual CNN architecture as a modular component. With the additional AVAM, the proposed FTFDNet achieves better detection performance than other state-of-the-art DeepFake video detection methods, not only on the established fake talking face detection dataset (FTFDD) but also on the DeepFake video detection datasets DFDC and DF-TIMIT.
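The motion cue described above is straightforward to extract in practice. As a rough illustration (not the authors' code), the sketch below computes dense optical flow between consecutive frames with OpenCV's Farneback estimator; the choice of estimator is an assumption, since the paper only states that optical flow is used as the motion input.

```python
# Minimal sketch: extracting dense optical flow as a motion-stream input.
# Farneback flow is an assumption; the paper does not specify the estimator.
import cv2
import numpy as np

def motion_frames(video_path):
    """Return per-frame dense optical flow fields of shape (H, W, 2)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    flows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow between consecutive frames; disordered flow around the
        # lips is the manipulation cue described in the abstract.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)
        prev_gray = gray
    cap.release()
    return flows
```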


Key findings
FTFDNet outperforms existing DeepFake detection methods on FTFDD, DFDC, and DF-TIMIT. The inclusion of audio and motion features, together with the AVAM, proves crucial for accurate detection, especially for fake talking face videos in which only the lip region is manipulated. The new FTFDD dataset provides a valuable resource for future research in this area.
Approach
FTFDNet uses three-stream encoders to extract features from the visual, audio, and motion modalities. A cross-modal fusion (CMF) module combines these features, and an audio-visual attention mechanism (AVAM) refines them to improve detection accuracy; a classifier then determines whether the video is real or fake (see the sketch below).
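A minimal PyTorch sketch of that pipeline follows. The module internals, feature sizes, and the concatenation-based fusion here are illustrative assumptions, not the paper's exact CMF and AVAM designs; it only shows the structure of three encoders, a fusion step, an attention refinement, and a real/fake classifier.

```python
# Minimal sketch of the FTFDNet pipeline; all sizes and the CMF/AVAM
# internals are illustrative assumptions, not the published design.
import torch
import torch.nn as nn

class FTFDNetSketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stand-ins for the VGG-based visual, audio, and motion encoders.
        self.visual_enc = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1),
                                        nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.audio_enc = nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1),
                                       nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.motion_enc = nn.Sequential(nn.Conv2d(2, feat_dim, 3, padding=1),
                                        nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        # Placeholder cross-modal fusion: project the concatenated features.
        self.cmf = nn.Linear(3 * feat_dim, feat_dim)
        # Placeholder AVAM: attention gates derived from the fused feature.
        self.avam = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, 2)  # real vs. fake

    def forward(self, frames, spectrogram, flow):
        v = self.visual_enc(frames).flatten(1)       # (B, feat_dim)
        a = self.audio_enc(spectrogram).flatten(1)   # (B, feat_dim)
        m = self.motion_enc(flow).flatten(1)         # (B, feat_dim)
        fused = self.cmf(torch.cat([v, a, m], dim=1))
        refined = fused * self.avam(fused)           # attention-weighted features
        return self.classifier(refined)
```

The structural point the sketch captures is that each modality has its own encoder, fusion happens at the feature level, and attention refines the fused representation before classification.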
Datasets
Fake Talking Face Detection Dataset (FTFDD), DeepFake Detection Challenge (DFDC), DF-TIMIT, FaceForensics++, Celeb-DF, VoxCeleb2, VidTIMIT
Model(s)
VGG-based architecture with modifications for each modality (visual, audio, motion), Cross-Modal Fusion (CMF) module, Audio-Visual Attention Mechanism (AVAM)
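Since AVAM is described as a plug-in module for any audio-visual CNN, a hedged sketch of one possible form, audio-conditioned channel attention over visual feature maps, is shown below; the actual AVAM design in the paper may differ.

```python
# Hedged sketch of an audio-visual attention module: a pooled audio embedding
# modulates visual feature maps channel-wise. This only illustrates the
# modular, plug-in character attributed to AVAM, not its published design.
import torch
import torch.nn as nn

class AudioVisualAttention(nn.Module):
    def __init__(self, visual_ch, audio_dim):
        super().__init__()
        # Map the audio embedding to per-channel gates for the visual map.
        self.gate = nn.Sequential(nn.Linear(audio_dim, visual_ch), nn.Sigmoid())

    def forward(self, visual_map, audio_vec):
        # visual_map: (B, C, H, W); audio_vec: (B, audio_dim)
        weights = self.gate(audio_vec).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return visual_map * weights  # audio-conditioned channel attention
```

For instance, `AudioVisualAttention(256, 128)` would gate a (B, 256, H, W) visual feature map with a 128-dimensional audio embedding, and the module can be dropped between any two convolutional stages of an audio-visual backbone.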
Author countries
China