A Unified Framework for Modality-Agnostic Deepfakes Detection

Authors: Cai Yu, Peng Chen, Jiahe Tian, Jin Liu, Jiao Dai, Xi Wang, Yesheng Chai, Shan Jia, Siwei Lyu, Jizhong Han

Published: 2023-07-26 20:30:34+00:00

AI Summary

This paper introduces a modality-agnostic framework for audio-visual deepfake detection that handles missing modalities and diverse forgery types. It leverages audio-visual speech recognition (AVSR) as a pre-training task to extract cross-modal speech correlations, and employs a dual-label detection approach so that each modality is authenticated independently.

Abstract

As AI-generated content (AIGC) thrives, deepfakes have expanded from single-modality falsification to cross-modal fake content creation, where either audio or visual components can be manipulated. While two unimodal detectors used together can detect audio-visual deepfakes, cross-modal forgery clues could be overlooked. Existing multimodal deepfake detection methods typically establish correspondence between the audio and visual modalities for binary real/fake classification, and require the co-occurrence of both modalities. However, in real-world multimodal applications, missing-modality scenarios may occur where either modality is unavailable. In such cases, audio-visual detection methods are less practical than two independent unimodal methods. Consequently, the detector cannot always know the number or type of manipulated modalities beforehand, necessitating a fake-modality-agnostic audio-visual detector. In this work, we introduce a comprehensive framework that is agnostic to fake modalities, which facilitates the identification of multimodal deepfakes and handles situations with missing modalities, regardless of whether the manipulations are embedded in audio, video, or even cross-modal forms. To enhance the modeling of cross-modal forgery clues, we employ audio-visual speech recognition (AVSR) as a preliminary task. This efficiently extracts speech correlations across modalities, a feature that is challenging for deepfakes to replicate. Additionally, we propose a dual-label detection approach that follows the structure of AVSR to support the independent detection of each modality. Extensive experiments on three audio-visual datasets show that our scheme outperforms state-of-the-art detection methods, with promising performance on modality-agnostic audio/video deepfakes.
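The dual-label idea can be made concrete with a minimal sketch: instead of one joint real/fake label for the clip, the classifier emits an independent authenticity logit per modality and trains each with its own binary cross-entropy term. The names below (DualLabelHead, dual_label_loss, the 512-dimensional fused feature) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DualLabelHead(nn.Module):
    """Two-head classifier: one real/fake logit per modality.

    Hypothetical sketch of the paper's dual-label detection; layer
    names and sizes are assumptions, not the authors' code.
    """

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.audio_head = nn.Linear(feat_dim, 1)  # authenticity of the audio track
        self.video_head = nn.Linear(feat_dim, 1)  # authenticity of the video track

    def forward(self, fused: torch.Tensor):
        # fused: (batch, feat_dim) clip-level audio-visual representation
        return self.audio_head(fused), self.video_head(fused)

def dual_label_loss(logit_a, logit_v, y_a, y_v):
    # Two independent binary cross-entropy terms, so each modality is
    # authenticated on its own rather than via a single joint label.
    bce = nn.BCEWithLogitsLoss()
    return bce(logit_a, y_a) + bce(logit_v, y_v)

# Example: real audio (label 0) dubbed over a manipulated face (label 1).
head = DualLabelHead()
logit_a, logit_v = head(torch.randn(4, 512))
loss = dual_label_loss(logit_a, logit_v, torch.zeros(4, 1), torch.ones(4, 1))
```

Training against two labels instead of one lets a single model report which modality is fake, not merely that the pair is inconsistent.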


Key findings
The proposed framework outperforms state-of-the-art methods on three datasets, demonstrating superior performance in modality-agnostic scenarios. It effectively detects deepfakes whether the manipulation lies in the audio, the video, or both, and remains applicable when one modality is missing. The AVSR pre-training significantly improves detection performance.
Approach
The approach uses a two-stage framework. Stage 1 pre-trains an audio-visual speech recognition model to capture cross-modal speech correlations. Stage 2 trains a dual-label classifier, equipped with a modality compensation adapter and a temporal aggregation module, to detect fake audio and/or video independently (see the sketch below).
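The PyTorch sketch below shows how the two stages could fit together under stated assumptions: the encoders come from Stage 1 AVSR pre-training and are assumed to return per-frame features, a simple mean over time stands in for the paper's temporal aggregation module, and ModalityCompensationAdapter is a hypothetical reading of how a missing modality might be compensated from the one that is present. None of these names come from the authors' code.

```python
import torch
import torch.nn as nn

class ModalityCompensationAdapter(nn.Module):
    """Hypothetical adapter: synthesizes a stand-in feature for a missing
    modality from the feature of the modality that is present."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.project = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, present: torch.Tensor) -> torch.Tensor:
        return self.project(present)

class TwoStageDetector(nn.Module):
    def __init__(self, audio_enc: nn.Module, video_enc: nn.Module, dim: int = 512):
        super().__init__()
        # Stage 1: encoders pre-trained on an AVSR task (assumed to return
        # per-frame features of shape (batch, time, dim)).
        self.audio_enc, self.video_enc = audio_enc, video_enc
        self.comp_audio = ModalityCompensationAdapter(dim)  # video -> audio stand-in
        self.comp_video = ModalityCompensationAdapter(dim)  # audio -> video stand-in
        # Stage 2: dual-label heads, one authenticity logit per modality.
        self.audio_head = nn.Linear(2 * dim, 1)
        self.video_head = nn.Linear(2 * dim, 1)

    def forward(self, audio=None, video=None):
        assert audio is not None or video is not None, "need at least one modality"
        # A mean over time stands in for the temporal aggregation module.
        feat_a = self.audio_enc(audio).mean(dim=1) if audio is not None else None
        feat_v = self.video_enc(video).mean(dim=1) if video is not None else None
        # Compensate whichever modality is missing.
        if feat_a is None:
            feat_a = self.comp_audio(feat_v)
        if feat_v is None:
            feat_v = self.comp_video(feat_a)
        fused = torch.cat([feat_a, feat_v], dim=-1)
        return self.audio_head(fused), self.video_head(fused)
```

Because the compensation path is exercised whenever one input is None, the same model covers the missing-modality scenarios described in the abstract without falling back to a separate unimodal detector.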
Datasets
DFDC, FakeAVCeleb, LAV-DF
Model(s)
Encoder-decoder architecture (pretrained on AVSR), Modality Compensation Adapter, Dual-Label Classifier (with Fake Composition Detector and Temporal Aggregation Module), Spatio-Temporal ResNet, 1D Convolutional Layer
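The listed front-ends can be pictured with a short sketch: a spatio-temporal (3D convolution plus pooling) stem for the video stream and a 1D convolutional layer over the raw waveform for audio, both emitting per-frame features for the shared encoder. Kernel sizes, strides, and dimensions below are assumptions chosen for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class VideoFrontEnd(nn.Module):
    """Illustrative spatio-temporal front-end (3D conv stem); the paper's
    Spatio-Temporal ResNet is deeper, this only mirrors the interface."""

    def __init__(self, out_dim: int = 512):
        super().__init__()
        self.stem = nn.Conv3d(3, 64, kernel_size=(5, 7, 7),
                              stride=(1, 2, 2), padding=(2, 3, 3))
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # keep time, collapse space
        self.proj = nn.Linear(64, out_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, time, height, width)
        x = self.pool(torch.relu(self.stem(frames)))  # (B, 64, T, 1, 1)
        x = x.flatten(3).squeeze(-1).transpose(1, 2)  # (B, T, 64)
        return self.proj(x)                           # (B, T, out_dim)

class AudioFrontEnd(nn.Module):
    """Illustrative 1D convolutional front-end over the raw waveform."""

    def __init__(self, out_dim: int = 512):
        super().__init__()
        # ~25 ms windows with ~20 ms hop at 16 kHz (assumed sample rate).
        self.conv = nn.Conv1d(1, out_dim, kernel_size=400, stride=320)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, samples) -> (batch, frames, out_dim)
        return torch.relu(self.conv(wav)).transpose(1, 2)
```

Both front-ends emit (batch, time, dim) sequences, so they plug directly into the TwoStageDetector sketch above.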
Author countries
China, USA