CAD: A General Multimodal Framework for Video Deepfake Detection via Cross-Modal Alignment and Distillation

Authors: Yuxuan Du, Zhendong Wang, Yuhao Luo, Caiyong Piao, Zhiyuan Yan, Hao Li, Li Yuan

Published: 2025-05-21 08:11:07+00:00

AI Summary

The paper proposes CAD, a multimodal deepfake detection framework that integrates modality-specific forensic traces and modality-shared semantic misalignments for improved detection accuracy. CAD uses cross-modal alignment to identify inconsistencies and cross-modal distillation to harmonize features while preserving forensic traces, significantly outperforming previous methods.

Abstract

The rapid emergence of multimodal deepfakes, where visual and auditory content are manipulated in concert, undermines the reliability of existing detectors that rely solely on modality-specific artifacts or cross-modal inconsistencies. In this work, we first demonstrate that modality-specific forensic traces (e.g., face-swap artifacts or spectral distortions) and modality-shared semantic misalignments (e.g., lip-speech asynchrony) offer complementary evidence, and that neglecting either aspect limits detection performance. Existing approaches either naively fuse modality-specific features without reconciling their conflicting characteristics or focus predominantly on semantic misalignment at the expense of modality-specific fine-grained artifact cues. To address these shortcomings, we propose a general multimodal framework for video deepfake detection via Cross-Modal Alignment and Distillation (CAD). CAD comprises two core components: 1) cross-modal alignment, which identifies inconsistencies in high-level semantic synchronization (e.g., lip-speech mismatches); and 2) cross-modal distillation, which mitigates feature conflicts during fusion while preserving modality-specific forensic traces (e.g., spectral distortions in synthetic audio). Extensive experiments on both multimodal and unimodal (e.g., image-only/video-only) deepfake benchmarks demonstrate that CAD significantly outperforms previous methods, validating the necessity of harmoniously integrating complementary multimodal information.
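
To make the modality-shared signal concrete, here is a minimal sketch of a lip-speech synchrony cue, scored as cosine similarity between temporally aligned video and audio embeddings. The function name and tensor shapes are hypothetical and not from the paper; the actual alignment module is more involved.

```python
import torch
import torch.nn.functional as F

def lip_speech_sync_score(video_emb: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
    """Illustrative synchrony cue (not the paper's module): cosine similarity
    between temporally aligned per-timestep embeddings of shape (B, T, D).
    Low scores suggest lip-speech asynchrony, a modality-shared artifact."""
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    per_step = (v * a).sum(dim=-1)   # (B, T): cosine similarity per timestep
    return per_step.mean(dim=-1)     # (B,): clip-level synchrony score
```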


Key findings
CAD significantly outperforms existing unimodal and multimodal deepfake detection methods across multiple benchmarks, achieving state-of-the-art performance (99.96% AUC on IDForge). The results validate the importance of integrating both modality-specific and modality-shared information for robust deepfake detection. Ablation studies demonstrate the contribution of both the cross-modal alignment and cross-modal distillation modules.
Approach
CAD uses a dual-path architecture: a cross-modal alignment module that leverages CLIP and Whisper to identify semantic inconsistencies (e.g., lip-speech mismatches), and a cross-modal distillation module (trained with a SimSiam loss, sketched below) that harmonizes features during fusion while preserving modality-specific forensic traces. Together, the two paths maximize mutual information between the audio and video modalities.
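
A minimal sketch of such a SimSiam-style distillation term follows; the feature dimensions and predictor design are assumptions, and the paper's exact formulation may differ.

```python
import torch.nn as nn
import torch.nn.functional as F

class SimSiamDistill(nn.Module):
    """Sketch of a SimSiam-style cross-modal distillation loss: each modality's
    feature is passed through a predictor and pulled toward a stop-gradient
    copy of the other modality's feature. Dimensions are illustrative."""

    def __init__(self, dim: int = 512, hidden: int = 128):
        super().__init__()
        # bottleneck predictor MLP, as in SimSiam
        self.predictor = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    @staticmethod
    def _neg_cos(p, z):
        # negative cosine similarity with stop-gradient on the target branch
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    def forward(self, video_feat, audio_feat):
        # symmetric: video predicts audio and vice versa
        return 0.5 * (self._neg_cos(self.predictor(video_feat), audio_feat)
                      + self._neg_cos(self.predictor(audio_feat), video_feat))
```

Whether a single predictor is shared across modalities, as here, or one is used per modality is a design detail not specified above; in training this term would be added alongside the classification loss.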
Datasets
FakeAVCeleb, IDForge-v2, FaceShifter, Celeb-DF
Model(s)
CLIP ViT-Base-16, Whisper-Small (LoRA used for audio encoder fine-tuning)
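
For concreteness, a minimal setup along these lines might look as follows; the checkpoint names are the public Hugging Face releases, and the LoRA rank, alpha, and target modules are illustrative assumptions rather than the paper's settings.

```python
import torch
from transformers import CLIPVisionModel, WhisperModel
from peft import LoraConfig, get_peft_model

# Video backbone: CLIP ViT-B/16 vision tower
video_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16")

# Audio backbone: Whisper-Small encoder, fine-tuned via LoRA adapters
audio_encoder = WhisperModel.from_pretrained("openai/whisper-small").encoder
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])  # assumed values
audio_encoder = get_peft_model(audio_encoder, lora_cfg)

# Dummy inputs: a frame batch and 80-bin log-mel spectrograms
frames = torch.randn(2, 3, 224, 224)   # (B, 3, H, W) for ViT-B/16
mels = torch.randn(2, 80, 3000)        # (B, n_mels, T) for Whisper-Small

video_feat = video_encoder(pixel_values=frames).pooler_output                  # (B, 768)
audio_feat = audio_encoder(input_features=mels).last_hidden_state.mean(dim=1)  # (B, 768)
```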
Author countries
China