Understanding Audiovisual Deepfake Detection: Techniques, Challenges, Human Factors and Perceptual Insights

Authors: Ammarah Hashmi, Sahibzada Adil Shahzad, Chia-Wen Lin, Yu Tsao, Hsin-Min Wang

Published: 2024-11-12 09:02:11+00:00

AI Summary

This survey paper comprehensively reviews state-of-the-art methods for audiovisual deepfake detection, focusing on the joint analysis of audio and visual streams to improve detection accuracy. It also discusses publicly available datasets and highlights challenges and future research directions in this field.

Abstract

Deep Learning has been successfully applied in diverse fields, and its impact on deepfake detection is no exception. Deepfakes are fake yet realistic synthetic content that can be used deceitfully for political impersonation, phishing, slandering, or spreading misinformation. Despite extensive research on unimodal deepfake detection, identifying complex deepfakes through joint analysis of audio and visual streams remains relatively unexplored. To fill this gap, this survey first provides an overview of audiovisual deepfake generation techniques, applications, and their consequences, and then provides a comprehensive review of state-of-the-art methods that combine audio and visual modalities to enhance detection accuracy, summarizing and critically analyzing their strengths and limitations. Furthermore, we discuss existing open-source datasets in depth, providing the research community, and especially newcomers, with the information needed to analyze deep-learning-based audiovisual methods for video forensics. By bridging the gap between unimodal and multimodal approaches, this paper aims to improve the effectiveness of deepfake detection strategies and guide future research in cybersecurity and media integrity.


Key findings
The survey documents rapidly growing interest in audiovisual deepfake detection, reflected in a sharp rise in publications in recent years. Multimodal approaches that combine audio and visual features generally outperform unimodal methods. However, challenges remain in generalization, scalability, and ethical considerations related to data privacy.
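To make the multimodal idea concrete, here is a minimal sketch of late feature fusion, the simplest way to combine the two streams: per-clip audio and visual embeddings are concatenated and passed through a linear layer with a sigmoid to yield a fake-probability. All names, dimensions, and weights are hypothetical illustrations, not any specific method from the surveyed papers.

```python
import math

def fuse_and_score(audio_feat, visual_feat, weights, bias=0.0):
    """Late fusion: concatenate the audio and visual embedding vectors,
    then apply a linear layer followed by a sigmoid to obtain P(fake)."""
    fused = audio_feat + visual_feat  # list concatenation = feature concat
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes score into (0, 1)

# Toy call with hypothetical 3-dimensional embeddings per modality;
# in practice these would come from trained audio/visual encoders.
score = fuse_and_score([0.2, -0.1, 0.5], [0.7, 0.0, -0.3],
                       weights=[0.4, -0.2, 0.1, 0.3, 0.5, -0.1])
```

Real systems replace the toy linear layer with a learned classifier (often an MLP or Transformer head), but the fusion step, concatenating modality-specific embeddings before joint classification, is the same.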
Approach
The paper surveys existing audiovisual deepfake detection methods, categorizing them into synchronization-based, feature-fusion, ensemble, and temporal-analysis approaches. It analyzes their strengths and limitations and discusses the use of publicly available datasets for training and evaluation.
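Of the four categories, synchronization-based detection is easiest to illustrate: aligned audio and visual embeddings from a genuine video should agree frame by frame, while a swapped or re-dubbed track breaks that agreement. The sketch below, a hedged toy under the assumption that both streams are already embedded in a shared space, scores synchronization as the mean per-frame cosine similarity; a low score suggests lip-sync mismatch.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sync_score(audio_frames, visual_frames):
    """Mean per-frame cosine similarity between temporally aligned
    audio and visual embeddings; low values hint at manipulation."""
    sims = [cosine(a, v) for a, v in zip(audio_frames, visual_frames)]
    return sum(sims) / len(sims)

# Aligned streams score high; a mismatched (e.g. re-dubbed) track scores low.
aligned = sync_score([[1, 0], [0, 1]], [[1, 0], [0, 1]])
swapped = sync_score([[1, 0], [0, 1]], [[0, 1], [1, 0]])
```

Published synchronization-based detectors learn the shared embedding space with contrastive objectives rather than assuming it, but the decision rule, thresholding an audiovisual agreement score, follows this pattern.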
Datasets
DFDC (DeepFake Detection Challenge), FakeAVCeleb, LAV-DF (Localized Audio Visual DeepFake), AV-Deepfake1M, AV-PolyGlotFake
Model(s)
The reviewed works employ a range of deep learning architectures, including CNNs, RNNs, Transformers, GANs, and VAEs. Specific model names appear in the context of individual papers reviewed; no single model is the central contribution of this survey itself.
Author countries
Taiwan