A Multi-Stream Fusion Approach with One-Class Learning for Audio-Visual Deepfake Detection

Authors: Kyungbok Lee, You Zhang, Zhiyao Duan

Published: 2024-06-20 10:33:15+00:00

AI Summary

This paper proposes a multi-stream fusion approach with one-class learning for audio-visual deepfake detection. The approach improves generalization to unseen deepfake generation methods and offers interpretability by identifying the likely fake modality. Experimental results show significant performance improvements over existing models.

Abstract

This paper addresses the challenge of developing a robust audio-visual deepfake detection model. In practical use cases, new generation algorithms are continually emerging, and these algorithms are not encountered during the development of detection methods. This calls for the generalization ability of the method. Additionally, to ensure the credibility of detection methods, it is beneficial for the model to interpret which cues from the video indicate it is fake. Motivated by these considerations, we then propose a multi-stream fusion approach with one-class learning as a representation-level regularization technique. We study the generalization problem of audio-visual deepfake detection by creating a new benchmark by extending and re-splitting the existing FakeAVCeleb dataset. The benchmark contains four categories of fake videos (Real Audio-Fake Visual, Fake Audio-Fake Visual, Fake Audio-Real Visual, and Unsynchronized videos). The experimental results demonstrate that our approach surpasses the previous models by a large margin. Furthermore, our proposed framework offers interpretability, indicating which modality the model identifies as more likely to be fake. The source code is released at https://github.com/bok-bok/MSOC.


Key findings
The proposed MSOC model significantly outperforms state-of-the-art models on test sets built from unseen deepfake generation methods. The multi-stream architecture combined with one-class learning improves generalization, and the per-branch design provides interpretability by indicating which modality (audio or visual) is likely fake.
Approach
The authors propose a multi-stream architecture with separate audio, visual, and audio-visual branches, each trained using one-class learning to enhance generalization. During inference, the scores from each branch are fused to make a final classification decision. The model's design allows for identifying which modality (audio or video) is likely fake.
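The inference-time fusion can be illustrated with a minimal sketch. The function name fuse_scores, the simple averaging rule, and the 0.5 threshold below are illustrative assumptions rather than the paper's exact fusion scheme; what the paper specifies is that per-branch scores are fused into a final decision and that the least confident modality reveals which part of the video is likely fake.

```python
# Hypothetical score-level fusion across the audio, visual, and audio-visual branches.
# Each branch is assumed to output a "realness" score (higher = more likely real).
def fuse_scores(audio_score: float, visual_score: float, av_score: float,
                threshold: float = 0.5):
    scores = {"audio": audio_score, "visual": visual_score, "audio-visual": av_score}
    fused = sum(scores.values()) / len(scores)      # simple average fusion (assumed rule)
    label = "real" if fused >= threshold else "fake"
    # Interpretability: the branch with the lowest score points to the suspect modality
    suspect_modality = min(scores, key=scores.get)
    return label, suspect_modality
```

A usage example: fuse_scores(0.9, 0.2, 0.4) would return ("fake", "visual"), i.e., the video is flagged as fake with the visual stream identified as the likely manipulated modality.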
Datasets
Extended and re-split FakeAVCeleb dataset, creating four test sets containing unseen deepfake generation methods: RAFV (Real Audio-Fake Visual), FAFV (Fake Audio-Fake Visual), FARV (Fake Audio-Real Visual), and Unsynced (unsynchronized videos).
Model(s)
A ResNet is used as the audio feature extractor, and an SCNet backbone with STIL (Spatiotemporal Inconsistency Learning) blocks is used as the visual feature extractor. A feedforward neural network fuses audio and visual features in the audio-visual branch. The One-Class Softmax (OC-Softmax) loss is used for training.
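Below is a minimal PyTorch-style sketch of the One-Class Softmax loss used as the one-class learning objective. The class name OCSoftmax, the margin values, the scale factor, the feature dimension, and the label convention (1 = real, 0 = fake) are illustrative assumptions, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OCSoftmax(nn.Module):
    """One-Class Softmax loss: compacts real (bona fide) embeddings around a
    learned reference direction while pushing fake embeddings away, using
    separate angular margins for the two classes."""
    def __init__(self, feat_dim: int = 256, m_real: float = 0.9,
                 m_fake: float = 0.2, alpha: float = 20.0):
        super().__init__()
        self.center = nn.Parameter(torch.randn(1, feat_dim))  # learned reference direction
        self.m_real = m_real   # margin real scores should exceed
        self.m_fake = m_fake   # margin fake scores should stay below
        self.alpha = alpha     # scale factor

    def forward(self, feats: torch.Tensor, labels: torch.Tensor):
        # labels: 1 for real, 0 for fake (convention assumed here)
        w = F.normalize(self.center, dim=1)
        x = F.normalize(feats, dim=1)
        scores = (x @ w.t()).squeeze(1)                # cosine similarity to the reference
        margins = torch.where(labels == 1,
                              self.m_real - scores,    # real: penalize scores below m_real
                              scores - self.m_fake)    # fake: penalize scores above m_fake
        loss = F.softplus(self.alpha * margins).mean() # log(1 + exp(alpha * margin))
        return loss, scores
```

Training each branch with this objective keeps real-sample embeddings in a compact region and leaves everything outside it as potentially fake, which is what underpins the reported generalization to unseen generation methods.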
Author countries
USA