Investigating self-supervised representations for audio-visual deepfake detection

Authors: Dragos-Alexandru Boldisor, Stefan Smeu, Dan Oneata, Elisabeta Oneata

Published: 2025-11-21 12:04:00+00:00

AI Summary

This paper systematically evaluates self-supervised representations for audio-visual deepfake detection across modalities (audio, video, multimodal) and domains (lip movements, generic visual content), assessing their detection effectiveness, interpretability, and cross-modal complementarity. Most features are found to capture deepfake-relevant and complementary information, and the models attend to semantically meaningful regions. However, none of the features generalize reliably across datasets, a failure the study attributes to dataset characteristics rather than to the features themselves.

Abstract

Self-supervised representations excel at many vision and speech tasks, but their potential for audio-visual deepfake detection remains underexplored. Unlike prior work that uses these features in isolation or buried within complex architectures, we systematically evaluate them across modalities (audio, video, multimodal) and domains (lip movements, generic visual content). We assess three key dimensions: detection effectiveness, interpretability of encoded information, and cross-modal complementarity. We find that most self-supervised features capture deepfake-relevant information, and that this information is complementary. Moreover, models primarily attend to semantically meaningful regions rather than spurious artifacts. Yet none generalize reliably across datasets. This generalization failure likely stems from dataset characteristics, not from the features themselves latching onto superficial patterns. These results expose both the promise and fundamental challenges of self-supervised representations for deepfake detection: while they learn meaningful patterns, achieving robust cross-domain performance remains elusive.


Key findings
Most self-supervised features, across audio, visual, and multimodal types, achieve strong in-domain deepfake detection performance and capture complementary, semantically meaningful information. Audio-informed representations (e.g., Wav2Vec2, AV-HuBERT) show the highest transferability, especially for speech-level manipulations. Despite these strengths, none of the evaluated features generalizes reliably across deepfake datasets, a limitation the authors link to dataset characteristics rather than to the features themselves.
Approach
The authors systematically evaluate self-supervised features in three ways: linear probing for supervised deepfake detection, anomaly-detection tasks trained only on real data (next-token prediction and audio-video synchronization), and explainability techniques that reveal where the models focus. For the probing setup, temporally local features are extracted from a frozen encoder, scored by a minimal learnable linear classifier, and the per-segment predictions are aggregated into a clip-level decision with log-sum-exp pooling, as sketched below.
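A minimal sketch of this probing setup, assuming frame-level features have already been extracted by a frozen self-supervised encoder (e.g., Wav2Vec2 or AV-HuBERT). The feature dimension, pooling temperature, and the normalization by the number of time steps are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class LinearProbeLSE(nn.Module):
    """Linear probe over frozen per-segment features with log-sum-exp pooling.

    Sketch only: a single linear layer scores each temporal segment, and the
    segment scores are aggregated into one clip-level logit via log-sum-exp,
    a smooth approximation of max pooling over time.
    """

    def __init__(self, feat_dim: int, temperature: float = 1.0):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, 1)  # real-vs-fake score per segment
        self.temperature = temperature            # assumed hyperparameter

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim), precomputed by a frozen SSL encoder
        scores = self.classifier(feats).squeeze(-1)  # (batch, time)
        num_steps = torch.tensor(scores.shape[1], dtype=scores.dtype)
        # Normalized log-sum-exp over time; subtracting log T keeps the scale
        # comparable to mean pooling (an illustrative choice, not from the paper).
        pooled = self.temperature * torch.logsumexp(scores / self.temperature, dim=1)
        pooled = pooled - self.temperature * torch.log(num_steps)
        return pooled  # clip-level logit, suitable for BCEWithLogitsLoss


# Usage sketch: random tensors stand in for real extracted features.
if __name__ == "__main__":
    probe = LinearProbeLSE(feat_dim=1024)       # 1024 is an assumed feature size
    clip_feats = torch.randn(4, 50, 1024)       # 4 clips, 50 time steps each
    logits = probe(clip_feats)                  # shape: (4,)
    loss = nn.BCEWithLogitsLoss()(logits, torch.ones(4))
    loss.backward()
```

Log-sum-exp acts as a soft maximum over time, so a few strongly manipulated segments can dominate the clip-level decision while gradients still flow to every segment.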
Datasets
FakeAVCeleb (FAVC), AV-Deepfake1M (AV1M), DeepfakeEval 2024 (DFE-2024), AVLips
Model(s)
Wav2Vec XLS-R 2B, Auto-AVSR (ASR), AV-HuBERT (A), Auto-AVSR (VSR), FSFM, Video-MAE-large, CLIP ViT-L/14, AV-HuBERT (V), Auto-AVSR, AV-HuBERT, AVFF, SpeechForensics
Author countries
Romania