Explicit Correlation Learning for Generalizable Cross-Modal Deepfake Detection
Authors: Cai Yu, Shan Jia, Xiaomeng Fu, Jin Liu, Jiahe Tian, Jiao Dai, Xi Wang, Siwei Lyu, Jizhong Han
Published: 2024-04-30 00:25:44+00:00
AI Summary
This paper proposes a deepfake detection method that explicitly learns cross-modal correlations between audio and video content to improve generalizability across deepfake generation techniques. It introduces a correlation distillation task that uses automatic speech recognition (ASR) and visual speech recognition (VSR) models as teachers, along with a new benchmark dataset, CMDFD, containing deepfakes produced by four diverse cross-modal generation methods.
Abstract
With the rising prevalence of deepfakes, there is a growing interest in developing generalizable detection methods for various types of deepfakes. While effective in their specific modalities, traditional detection methods fall short in addressing the generalizability of detection across diverse cross-modal deepfakes. This paper aims to explicitly learn potential cross-modal correlation to enhance deepfake detection towards various generation scenarios. Our approach introduces a correlation distillation task, which models the inherent cross-modal correlation based on content information. This strategy helps to prevent the model from overfitting merely to audio-visual synchronization. Additionally, we present the Cross-Modal Deepfake Dataset (CMDFD), a comprehensive dataset with four generation methods to evaluate the detection of diverse cross-modal deepfakes. The experimental results on CMDFD and FakeAVCeleb datasets demonstrate the superior generalizability of our method over existing state-of-the-art methods. Our code and data can be found at https://github.com/ljj898/CMDFD-Dataset-and-Deepfake-Detection.
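The abstract does not spell out the exact training objective, but the core idea of correlation distillation can be sketched: the student's audio-video correlation is supervised to match a content-level correlation derived from frozen ASR and VSR teacher features, rather than raw synchronization cues. Below is a minimal PyTorch sketch under those assumptions; the function and argument names (`correlation_matrix`, `teacher_asr`, `teacher_vsr`, etc.) are illustrative, not the authors' actual API, and the cosine-similarity formulation is an assumption, not the paper's confirmed loss.

```python
import torch
import torch.nn.functional as F


def correlation_matrix(a: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Frame-wise cosine-similarity matrix between two feature sequences.

    a: (T, D) audio features; v: (T, D) video features, assumed frame-aligned.
    Returns a (T, T) matrix whose (i, j) entry is the cosine similarity
    between audio frame i and video frame j.
    """
    a = F.normalize(a, dim=-1)
    v = F.normalize(v, dim=-1)
    return a @ v.t()


def correlation_distillation_loss(student_audio: torch.Tensor,
                                  student_video: torch.Tensor,
                                  teacher_asr: torch.Tensor,
                                  teacher_vsr: torch.Tensor) -> torch.Tensor:
    """Hypothetical distillation loss: pull the student's audio-video
    correlation toward the content correlation of frozen ASR/VSR teachers,
    so the student learns content correspondence instead of overfitting
    to low-level audio-visual synchronization.
    """
    with torch.no_grad():  # teachers are frozen; no gradients flow to them
        target = correlation_matrix(teacher_asr, teacher_vsr)
    pred = correlation_matrix(student_audio, student_video)
    return F.mse_loss(pred, target)


# Example with dummy frame-aligned features (T = 25 frames, D = 512 dims).
T, D = 25, 512
loss = correlation_distillation_loss(torch.randn(T, D), torch.randn(T, D),
                                     torch.randn(T, D), torch.randn(T, D))
```

Distilling a correlation *matrix* rather than the teacher features themselves is one plausible reading of the abstract: it transfers the structure of cross-modal content agreement while leaving the student free to choose its own feature space.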