An Audio-Visual Attention Based Multimodal Network for Fake Talking Face Videos Detection

Authors: Ganglai Wang, Peng Zhang, Lei Xie, Wei Huang, Yufei Zha, Yanning Zhang

Published: 2022-03-10 06:16:11+00:00

AI Summary

This paper proposes FTFDNet, a multimodal network for fake talking face video detection, incorporating audio and visual representations. It introduces an audio-visual attention mechanism (AVAM) to improve feature extraction, achieving over 97% detection accuracy.

Abstract

DeepFake-based digital facial forgery threatens public media security, and when lip manipulation is used in talking face generation, fake video detection becomes even harder. Because only the lip shape is changed to match the given speech, identity-related facial features offer little basis for discrimination in such fake talking face videos. Combined with the common neglect of the audio stream as prior knowledge, detection failures on fake talking face videos become almost inevitable. Inspired by the decision-making mechanism of the human multisensory perception system, in which auditory information enhances post-sensory visual evidence to produce informed decisions, this study proposes FTFDNet, a fake talking face detection framework that incorporates both audio and visual representations to detect fake talking face videos more accurately. Furthermore, an audio-visual attention mechanism (AVAM) is proposed to discover more informative features; it is modular and can be seamlessly integrated into any audio-visual CNN architecture. With the additional AVAM, the proposed FTFDNet achieves better detection performance on the established dataset (FTFDD). The evaluation of the proposed work shows excellent performance on the detection of fake talking face videos, reaching a detection rate above 97%.


Key findings
FTFDNet-AVAM achieved a detection accuracy above 97% on the FTFDD dataset. Fusing audio and visual streams significantly outperformed audio-only and video-only variants, and the full model also outperformed a comparable single-frame based detector. These results demonstrate the effectiveness of combining audio and visual information for detecting fake talking face videos.
Approach
The authors propose FTFDNet, which uses separate audio and visual branches to extract features from input audio spectrograms and video frames. These features are concatenated and processed through fully connected layers for classification. An audio-visual attention mechanism (AVAM) is added to improve performance by focusing on informative regions.
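The summary does not give the exact layer configurations, so the following PyTorch sketch is only a minimal illustration of the described pipeline: one CNN branch over audio spectrograms, one over face frames, an attention module that reweights both streams, then concatenation and fully connected layers for real/fake classification. All layer sizes, the fusion rule inside `AVAM`, and names such as `FTFDNetSketch` are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AVAM(nn.Module):
    """Hypothetical audio-visual attention module: pools a descriptor from each
    stream, fuses them, and produces per-channel weights for both streams.
    The fusion rule and sizes are illustrative, not the paper's exact design."""
    def __init__(self, audio_channels, visual_channels, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(audio_channels + visual_channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, audio_channels + visual_channels),
            nn.Sigmoid(),
        )
        self.audio_channels = audio_channels

    def forward(self, audio_feat, visual_feat):
        # Global-average-pool each stream, fuse the descriptors, and reweight
        # the channels of both feature maps with the resulting attention.
        a_desc = audio_feat.mean(dim=(2, 3))            # (B, Ca)
        v_desc = visual_feat.mean(dim=(2, 3))           # (B, Cv)
        weights = self.mlp(torch.cat([a_desc, v_desc], dim=1))
        a_w, v_w = weights.split(
            [self.audio_channels, weights.size(1) - self.audio_channels], dim=1)
        audio_feat = audio_feat * a_w.unsqueeze(-1).unsqueeze(-1)
        visual_feat = visual_feat * v_w.unsqueeze(-1).unsqueeze(-1)
        return audio_feat, visual_feat


class FTFDNetSketch(nn.Module):
    """Minimal two-branch detector: audio CNN + visual CNN, optional AVAM
    reweighting, concatenation, and a fully connected classifier."""
    def __init__(self, num_classes=2, use_avam=True):
        super().__init__()
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.visual_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.avam = AVAM(64, 64) if use_avam else None
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, spectrogram, frame):
        a = self.audio_branch(spectrogram)   # (B, 64, Ha, Wa)
        v = self.visual_branch(frame)        # (B, 64, Hv, Wv)
        if self.avam is not None:
            a, v = self.avam(a, v)
        a = self.pool(a).flatten(1)
        v = self.pool(v).flatten(1)
        return self.classifier(torch.cat([a, v], dim=1))  # real/fake logits


# Example: a batch of 80x100 log-mel spectrograms and 96x96 face crops.
model = FTFDNetSketch()
logits = model(torch.randn(4, 1, 80, 100), torch.randn(4, 3, 96, 96))
print(logits.shape)  # torch.Size([4, 2])
```

The sketch keeps AVAM as a drop-in module between the branches and the classifier, mirroring the paper's claim that the attention mechanism can be integrated into audio-visual CNN architectures by modularization.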
Datasets
A newly established dataset, FTFDD, created using VoxCeleb2 and several talking face generation methods (Wav2Lip, MakeItTalk, PC-AVS).
Model(s)
FTFDNet (a multimodal network with audio and visual branches and fully connected layers), FTFDNet-AVAM (FTFDNet with the added audio-visual attention mechanism), MesoInception-4 (used for comparison).
Author countries
China