Evolving from Single-modal to Multi-modal Facial Deepfake Detection: Progress and Challenges

Authors: Ping Liu, Qiqi Tao, Joey Tianyi Zhou

Published: 2024-06-11 05:48:04+00:00

AI Summary

This survey paper provides a comprehensive overview of facial deepfake detection methods, tracing their evolution from single-modal to sophisticated multi-modal approaches. It offers a structured taxonomy of detection techniques and analyzes the challenges posed by increasingly realistic deepfakes generated by diffusion models.

Abstract

As synthetic media, including video, audio, and text, become increasingly indistinguishable from real content, the risks of misinformation, identity fraud, and social manipulation escalate. This survey traces the evolution of deepfake detection from early single-modal methods to sophisticated multi-modal approaches that integrate audio-visual and text-visual cues. We present a structured taxonomy of detection techniques and analyze the transition from GAN-based to diffusion model-driven deepfakes, which introduce new challenges due to their heightened realism and robustness against detection. Unlike prior surveys that primarily focus on single-modal detection or earlier deepfake techniques, this work provides the most comprehensive study to date, encompassing the latest advancements in multi-modal deepfake detection, generalization challenges, proactive defense mechanisms, and emerging datasets specifically designed to support new interpretability and reasoning tasks. We further explore the role of Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs) in strengthening detection robustness against increasingly sophisticated deepfake attacks. By systematically categorizing existing methods and identifying emerging research directions, this survey serves as a foundation for future advancements in combating AI-generated facial forgeries. A curated list of all related papers can be found at https://github.com/qiqitao77/Awesome-Comprehensive-Deepfake-Detection.


Key findings
The survey highlights the growing sophistication of deepfakes and the resulting need for robust, generalizable detection methods. Multi-modal approaches are shown to be crucial for detecting increasingly realistic forgeries, and large language and vision-language models are identified as a promising direction for future research.
Approach
The paper surveys existing deepfake detection methods, categorizing them based on modality (single-modal vs. multi-modal), approach (passive vs. proactive), and underlying techniques (e.g., artifact analysis, consistency checks, transformer models). It also discusses the use of vision-language and multimodal large language models for improved detection and interpretability.
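To make the VLM-based direction concrete, below is a minimal sketch of zero-shot real/fake scoring with CLIP, one of the vision-language approaches the survey discusses. The prompts, checkpoint choice, and the idea of reading the softmax over two text classes as a fake score are illustrative assumptions, not the survey's own method.

```python
# Minimal sketch (not from the survey itself): zero-shot deepfake scoring with CLIP.
# The prompts and checkpoint below are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a photo of a real human face",          # class 0: real
    "an AI-generated or manipulated face",   # class 1: fake
]

def fake_probability(image_path: str) -> float:
    """Return CLIP's softmax probability that the image matches the 'fake' prompt."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(prompts))
    return logits.softmax(dim=-1)[0, 1].item()

# Example usage with a hypothetical file path:
# print(fake_probability("face.jpg"))
```

In practice, the surveyed VLM-based detectors go well beyond raw prompt matching (e.g., fine-tuning, prompt learning, or MLLM-generated explanations), but the scoring pattern above is the common starting point.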
Datasets
FaceForensics++, DFD, DFDC, Celeb-DF, DeeperForensics-1.0, WildDeepfake, KoDF, FFIW10K, ForgeryNet, DF-Platter, DeepFakeFace, DiffusionFace, DiFF, VLF, MMTT, FakeAVCeleb, LAV-DF, AV-Deepfake1M, PolyGlotFake, DeepFake-eval-2024, DGM4, and others.
Model(s)
MesoNet, XceptionNet, Gram-Net, various transformer architectures, CLIP, GPT-4, and other vision-language and multimodal large language models are discussed as detectors or backbones in the surveyed work.
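As a concrete example of how backbones such as XceptionNet are typically used in the passive single-modal detectors the survey covers, here is a minimal supervised fine-tuning sketch. The timm model name, optimizer settings, and input shape are assumptions for illustration, not the configuration of any specific surveyed paper.

```python
# Minimal sketch: fine-tune an Xception backbone as a binary real/fake classifier,
# the classic passive-detection setup. Hyperparameters here are assumptions.
import timm
import torch
import torch.nn as nn

# Xception backbone with a fresh 2-way head (real=0, fake=1).
model = timm.create_model("xception41", pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of face crops and real/fake labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random data shaped like a batch of 299x299 face crops:
if __name__ == "__main__":
    x = torch.randn(4, 3, 299, 299)
    y = torch.randint(0, 2, (4,))
    print(train_step(x, y))
```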
Author countries
USA, Singapore