Lightweight Joint Audio-Visual Deepfake Detection via Single-Stream Multi-Modal Learning Framework

Authors: Kuiyuan Zhang, Wenjie Pei, Rushi Lan, Yifang Guo, Zhongyun Hua

Published: 2025-06-09 02:13:04+00:00

AI Summary

This paper proposes SS-AVD, a lightweight audio-visual deepfake detection network using a single-stream multi-modal learning framework. It integrates audio and visual features iteratively within a single stream, avoiding the redundancy of separate sub-models and achieving superior performance with significantly fewer parameters.

Abstract

Deepfakes are AI-synthesized multimedia data that may be abused for spreading misinformation. Deepfake generation involves both visual and audio manipulation. To detect audio-visual deepfakes, previous studies commonly employ two relatively independent sub-models to learn audio and visual features, respectively, and fuse them subsequently for deepfake detection. However, this may underutilize the inherent correlations between audio and visual features. Moreover, utilizing two isolated feature-learning sub-models can result in redundant neural layers, making the overall model inefficient and impractical for resource-constrained environments. In this work, we design a lightweight network for audio-visual deepfake detection via a single-stream multi-modal learning framework. Specifically, we introduce a collaborative audio-visual learning block to efficiently integrate multi-modal information while learning the visual and audio features. By iteratively employing this block, our single-stream network achieves a continuous fusion of multi-modal features across its layers. Thus, our network efficiently captures visual and audio features without the need for excessive block stacking, resulting in a lightweight network design. Furthermore, we propose a multi-modal classification module that strengthens the dependence of the visual and audio classifiers on their respective modality content. It also enhances the overall resistance of the video classifier to mismatches between the audio and visual modalities. We conduct experiments on the DF-TIMIT, FakeAVCeleb, and DFDC benchmark datasets. Compared to state-of-the-art audio-visual joint detection methods, our method is significantly lightweight with only 0.48M parameters, yet it achieves superior performance on both uni-modal and multi-modal deepfakes, as well as on unseen types of deepfakes.
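The core idea of the single-stream design — one stack of blocks that updates visual and audio features jointly at every layer, instead of two separate towers fused at the end — can be illustrated with a toy sketch. The block below is a minimal stand-in, not the paper's actual CAVL architecture: the weight shapes, the cross-modal projection `Wx`, and the tanh nonlinearity are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cavl_block(v, a, Wv, Wa, Wx):
    """Toy collaborative audio-visual step: each modality is updated from
    its own features plus a cross-modal projection of the other modality."""
    v_new = np.tanh(v @ Wv + a @ Wx)  # visual features enriched with audio context
    a_new = np.tanh(a @ Wa + v @ Wx)  # audio features enriched with visual context
    return v_new, a_new

d = 16                       # shared feature width (illustrative)
v = rng.normal(size=(4, d))  # batch of 4 visual feature vectors
a = rng.normal(size=(4, d))  # corresponding audio feature vectors
Wv, Wa, Wx = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

# Iteratively apply the block, as the single-stream stack would across layers,
# so fusion happens continuously rather than once at the end.
for _ in range(3):
    v, a = cavl_block(v, a, Wv, Wa, Wx)

print(v.shape, a.shape)
```

Because every layer mixes both modalities, a shallow stack can already expose cross-modal inconsistencies, which is what allows the parameter count to stay small.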


Key findings
SS-AVD significantly outperforms state-of-the-art methods on uni-modal and multi-modal deepfake detection across multiple datasets. Despite its lightweight design (0.48M parameters), it achieves superior accuracy and AUC scores. The model demonstrates robustness to unseen deepfake types.
Approach
SS-AVD uses a collaborative audio-visual learning (CAVL) block to fuse audio and visual features iteratively across the network's layers within a single stream. A multi-modal classification module, incorporating style-shuffle and latent-shuffle augmentations, enhances robustness to audio-visual mismatches and improves detection accuracy for both uni-modal and multi-modal deepfakes.
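One way to understand the shuffle-style augmentations is as creating mismatched audio-visual pairs from a real batch, forcing the classifier to rely on cross-modal consistency. The sketch below is a hypothetical latent-shuffle variant under that interpretation; the paper's exact augmentation and labeling scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def latent_shuffle(v_feat, a_feat, labels):
    """Hypothetical latent-shuffle augmentation: permute audio latents across
    the batch so audio no longer matches its video, and relabel those pairs
    as fake (1). Pairs that keep their own audio keep their original label."""
    n = len(a_feat)
    perm = rng.permutation(n)
    a_shuf = a_feat[perm]
    new_labels = np.where(perm == np.arange(n), labels, 1)
    return v_feat, a_shuf, new_labels

v = rng.normal(size=(6, 8))      # visual latents for a batch of 6 clips
a = rng.normal(size=(6, 8))      # matching audio latents
labels = np.zeros(6, dtype=int)  # all-real batch (0 = real)

v2, a2, y2 = latent_shuffle(v, a, labels)
print(y2)
```

Training on such synthesized mismatches is a common way to make a video-level classifier sensitive to audio-visual desynchronization without collecting extra fake data.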
Datasets
DF-TIMIT, FakeAVCeleb, DFDC
Model(s)
A custom single-stream multi-modal network with collaborative audio-visual learning (CAVL) blocks and a multi-modal classification module.
Author countries
China