Understanding Audiovisual Deepfake Detection: Techniques, Challenges, Human Factors and Perceptual Insights

Authors: Ammarah Hashmi, Sahibzada Adil Shahzad, Chia-Wen Lin, Yu Tsao, Hsin-Min Wang

Published: 2024-11-12 09:02:11+00:00

AI Summary

This survey paper provides a comprehensive review of state-of-the-art techniques for audiovisual deepfake detection. It explores both deepfake generation methods and advanced detection strategies that combine audio and visual modalities, highlighting their strengths, limitations, and the role of human perception. The paper also discusses existing open-source datasets and outlines future research directions to combat the growing threat of deepfakes.

Abstract

Deep Learning has been successfully applied in diverse fields, and its impact on deepfake detection is no exception. Deepfakes are fake yet realistic synthetic content that can be used deceitfully for political impersonation, phishing, slandering, or spreading misinformation. Despite extensive research on unimodal deepfake detection, identifying complex deepfakes through joint analysis of audio and visual streams remains relatively unexplored. To fill this gap, this survey first provides an overview of audiovisual deepfake generation techniques, applications, and their consequences, and then provides a comprehensive review of state-of-the-art methods that combine audio and visual modalities to enhance detection accuracy, summarizing and critically analyzing their strengths and limitations. Furthermore, we discuss existing open source datasets for a deeper understanding, which can contribute to the research community and provide necessary information to beginners who want to analyze deep learning-based audiovisual methods for video forensics. By bridging the gap between unimodal and multimodal approaches, this paper aims to improve the effectiveness of deepfake detection strategies and guide future research in cybersecurity and media integrity.


Key findings
The survey concludes that multimodal deepfake detection systems, leveraging both audio and visual cues, significantly enhance accuracy and robustness compared to unimodal approaches. Key challenges include the poor generalization of existing models to unseen deepfake techniques and real-world noisy conditions, necessitating more diverse datasets and computationally efficient model architectures. The paper also highlights the potential of integrating human perceptual insights into algorithmic models to further improve detection efficacy.
Approach
The paper surveys existing audiovisual deepfake detection methods, categorizing them into synchronization-based, feature fusion, ensemble, and temporal analysis approaches. It analyzes their strengths and limitations, and discusses relevant datasets, performance metrics, and the role of human perception to guide future research.
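To make the feature-fusion category concrete, below is a minimal, generic sketch (not a method from the survey) of a detector that concatenates clip-level audio and visual embeddings before a joint classifier. The encoder dimensions, layer widths, and feature shapes are hypothetical placeholders; in practice the projections would be replaced by pretrained audio and visual encoders.

```python
# Illustrative sketch of feature-level (early) fusion for audiovisual
# deepfake detection. All sizes are hypothetical, not from the paper.
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    def __init__(self, audio_dim=256, visual_dim=512, hidden_dim=128):
        super().__init__()
        # Stand-ins for pretrained encoders (e.g. a frame CNN and a
        # spectrogram/SSL audio encoder) operating on precomputed features.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # logit: real (0) vs. fake (1)
        )

    def forward(self, audio_feat, visual_feat):
        a = torch.relu(self.audio_proj(audio_feat))
        v = torch.relu(self.visual_proj(visual_feat))
        fused = torch.cat([a, v], dim=-1)  # feature-level fusion
        return self.classifier(fused)

# Usage with dummy clip-level features (batch of 4 clips).
model = FusionDetector()
audio = torch.randn(4, 256)
video = torch.randn(4, 512)
probs = torch.sigmoid(model(audio, video))  # shape: (4, 1)
```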
Datasets
DFDC, FakeAVCeleb, LAV-DF, AV-Deepfake1M, PolyGlotFake
Model(s)
CNN-based networks, Transformer models (Dense Swin Transformer, Multiscale Vision Transformer, AV-HuBERT), Recurrent Neural Networks (RNNs), GANs (used for generation, and indirectly for detection by identifying GAN artifacts), VAEs (for generation), various fusion strategies, self-supervised learning (SSL) approaches, and 3D Convolutional Neural Networks (3D CNNs).
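Several of the listed models (e.g. AV-HuBERT-style encoders) produce time-aligned audio and visual embeddings that synchronization-based detectors compare. The sketch below is a generic illustration of that idea, not a method from the survey: it scores per-frame agreement between the two streams, with random placeholder embeddings and an arbitrary example threshold.

```python
# Illustrative synchronization-based cue: measure agreement between
# time-aligned audio and lip-region embeddings. Embeddings and the
# threshold here are hypothetical placeholders.
import torch
import torch.nn.functional as F

def sync_score(audio_emb, visual_emb):
    """audio_emb, visual_emb: (T, D) time-aligned embedding sequences."""
    sims = F.cosine_similarity(audio_emb, visual_emb, dim=-1)  # (T,)
    return sims.mean().item()

T, D = 50, 128
audio_emb = torch.randn(T, D)   # placeholder for encoder outputs
visual_emb = torch.randn(T, D)
score = sync_score(audio_emb, visual_emb)
# Low audiovisual agreement can flag a clip for closer inspection.
is_suspicious = score < 0.2     # threshold is an assumed example value
```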
Author countries
Taiwan