Zero-Shot Fake Video Detection by Audio-Visual Consistency

Authors: Xiaolou Li, Zehua Liu, Chen Chen, Lantian Li, Li Guo, Dong Wang

Published: 2024-06-12

AI Summary

This paper proposes a zero-shot deepfake video detection approach based on audio-visual content consistency. It uses pre-trained ASR and VSR models to extract content sequences from the audio and video streams, then computes the edit distance between the two sequences to judge whether a video is genuine. The approach generalizes across deepfake techniques better than existing semantic- and temporal-consistency methods and is more robust to audio-visual perturbations.

Abstract

Recent studies have advocated the detection of fake videos as a one-class detection task, predicated on the hypothesis that the consistency between audio and visual modalities of genuine data is more significant than that of fake data. This methodology, which solely relies on genuine audio-visual data while negating the need for forged counterparts, is thus delineated as a "zero-shot" detection paradigm. This paper introduces a novel zero-shot detection approach anchored in content consistency across audio and video. By employing pre-trained ASR and VSR models, we recognize the audio and video content sequences, respectively. Then, the edit distance between the two sequences is computed to assess whether the claimed video is genuine. Experimental results indicate that, compared to two mainstream approaches based on semantic consistency and temporal consistency, our approach achieves superior generalizability across various deepfake techniques and demonstrates strong robustness against audio-visual perturbations. Finally, state-of-the-art performance gains can be achieved by simply integrating the decision scores of these three systems.
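
To make the core idea concrete, below is a minimal sketch of the content-consistency score, assuming the ASR and VSR outputs are already available as strings; the transcription step is elided because the paper's exact model interfaces are not reproduced here. Only the edit-distance logic is implied by the paper, and the decision threshold is illustrative.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two sequences, via dynamic programming."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # dp[j] = distance between a[:0] and b[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                      # delete a[i-1]
                dp[j - 1] + 1,                  # insert b[j-1]
                prev + (a[i - 1] != b[j - 1]),  # substitute (free if match)
            )
            prev = cur
    return dp[n]


def content_consistency_score(asr_text: str, vsr_text: str) -> float:
    """Normalized edit distance in [0, 1]; lower means more consistent,
    i.e. more likely genuine."""
    denom = max(len(asr_text), len(vsr_text), 1)
    return edit_distance(asr_text, vsr_text) / denom


# Zero-shot decision: flag the video as fake when the score exceeds a
# threshold calibrated on genuine data only (threshold value illustrative).
score = content_consistency_score("the quick brown fox", "the quick brown fax")
is_fake = score > 0.5
```

Normalizing by the longer transcript keeps the score in [0, 1], so a single threshold can be calibrated on genuine data alone, which is what makes the paradigm zero-shot with respect to forgeries.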


Key findings
The proposed content-consistency method generalizes better across various deepfake techniques than the semantic- and temporal-consistency methods, and it is notably more robust to audio-visual perturbations. Fusing the decision scores of all three methods achieves state-of-the-art performance.
Approach
The method leverages pre-trained ASR and VSR models to obtain content sequences from the audio and video streams. The edit distance between these sequences serves as the measure of audio-visual consistency, with a lower distance indicating a higher probability of authenticity. This score is then fused with the scores of the semantic- and temporal-consistency methods for improved detection, as sketched below.
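
The paper reports that simply integrating the decision scores of the three systems yields the best results. The following is a hedged sketch of one plausible score-level fusion, assuming each system already emits a fakeness score in [0, 1]; the equal weights and threshold are illustrative assumptions, not the paper's values.

```python
def fuse_scores(content: float, semantic: float, temporal: float,
                weights: tuple = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Weighted sum of per-system fakeness scores, each assumed in [0, 1]."""
    return sum(w * s for w, s in zip(weights, (content, semantic, temporal)))


# Example: content score from the sketch above, plus hypothetical scores
# from the semantic (AV-HuBERT-based) and temporal (VocaLiST-based) systems.
fused = fuse_scores(content=0.07, semantic=0.31, temporal=0.24)
is_fake = fused > 0.5  # decision threshold is illustrative
```
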
Datasets
FakeAVCeleb and DeepFakeTIMIT
Model(s)
Pre-trained ASR and VSR models (specific checkpoints are not stated; the paper builds on the Auto-AVSR framework, which uses ResNet front-ends with Conformer encoders); AV-HuBERT for semantic consistency; VocaLiST for temporal consistency.
Author countries
China