A Multi-Stream Fusion Approach with One-Class Learning for Audio-Visual Deepfake Detection
Authors: Kyungbok Lee, You Zhang, Zhiyao Duan
Published: 2024-06-20 10:33:15+00:00
AI Summary
This paper proposes a multi-stream fusion approach with one-class learning for audio-visual deepfake detection. The approach improves generalization to unseen deepfake generation methods and offers interpretability by identifying the likely fake modality. Experimental results show significant performance improvements over existing models.
Abstract
This paper addresses the challenge of developing a robust audio-visual deepfake detection model. In practice, new generation algorithms continually emerge, and they are not encountered during the development of detection methods; this calls for strong generalization ability. In addition, to ensure the credibility of detection methods, it is beneficial for the model to indicate which cues in a video reveal it as fake. Motivated by these considerations, we propose a multi-stream fusion approach with one-class learning as a representation-level regularization technique. We study the generalization problem of audio-visual deepfake detection by creating a new benchmark that extends and re-splits the existing FakeAVCeleb dataset. The benchmark contains four categories of fake videos: Real Audio-Fake Visual, Fake Audio-Fake Visual, Fake Audio-Real Visual, and Unsynchronized videos. Experimental results demonstrate that our approach surpasses previous models by a large margin. Furthermore, the proposed framework offers interpretability, indicating which modality the model identifies as more likely to be fake. The source code is released at https://github.com/bok-bok/MSOC.
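The abstract does not spell out the one-class loss; for readers unfamiliar with one-class learning as representation-level regularization, the following is a minimal PyTorch sketch in the style of OC-Softmax (Zhang et al., 2021), prior work by overlapping authors on voice anti-spoofing. It is an illustrative assumption, not necessarily this paper's exact formulation, and the margin and scale values are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneClassSoftmax(nn.Module):
    """One-class softmax loss, sketched after OC-Softmax.

    Real (target) embeddings are pulled toward a learned direction w,
    while fake (non-target) embeddings are pushed away, using two
    margins m_real > m_fake on the cosine similarity.
    """

    def __init__(self, embed_dim: int, m_real: float = 0.9,
                 m_fake: float = 0.2, alpha: float = 20.0):
        super().__init__()
        # Learned reference direction for the real (one) class.
        self.w = nn.Parameter(torch.randn(embed_dim))
        self.m_real = m_real  # lower bound on cos-sim for real samples
        self.m_fake = m_fake  # upper bound on cos-sim for fake samples
        self.alpha = alpha    # scale factor

    def forward(self, emb: torch.Tensor, labels: torch.Tensor):
        # emb: (B, D) embeddings; labels: (B,), 0 = real, 1 = fake.
        w = F.normalize(self.w, dim=0)
        x = F.normalize(emb, dim=1)
        cos = x @ w  # (B,) cosine similarity to the real-class direction
        # Class-dependent margin: real must exceed m_real,
        # fake must fall below m_fake.
        margins = torch.where(labels == 0,
                              torch.full_like(cos, self.m_real),
                              torch.full_like(cos, self.m_fake))
        sign = labels.float() * 2 - 1  # real -> -1, fake -> +1
        # softplus(x) = log(1 + e^x), so this matches
        # log(1 + exp(alpha * (m_y - cos) * (-1)^y)) per sample.
        loss = F.softplus(self.alpha * sign * (cos - margins)).mean()
        # Higher cosine similarity => more likely real; usable as a score.
        return loss, cos
```

Compressing the real class into a compact region while pushing all fakes outside it is what gives one-class learning its appeal here: unseen generation methods need not resemble the fakes seen in training to fall outside the real-class region.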
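Likewise, the abstract only names the multi-stream fusion idea; the hypothetical sketch below shows how per-stream scores can expose which modality is likely fake. All module names, stream choices, dimensions, and the averaging fusion are illustrative assumptions, not the paper's confirmed architecture.

```python
import torch
import torch.nn as nn

class MultiStreamDetector(nn.Module):
    """Hypothetical three-stream detector: an audio stream, a visual
    stream, and a joint audio-visual stream, each with its own
    real/fake head. Keeping per-stream scores makes it possible to
    report which modality the model finds suspicious."""

    def __init__(self, audio_dim: int = 256, visual_dim: int = 256):
        super().__init__()
        self.audio_head = nn.Linear(audio_dim, 1)
        self.visual_head = nn.Linear(visual_dim, 1)
        self.av_head = nn.Linear(audio_dim + visual_dim, 1)

    def forward(self, a_emb: torch.Tensor, v_emb: torch.Tensor):
        s_audio = self.audio_head(a_emb)    # audio-only score
        s_visual = self.visual_head(v_emb)  # visual-only score
        s_av = self.av_head(torch.cat([a_emb, v_emb], dim=-1))  # joint score
        # The final decision fuses all streams; the per-stream scores
        # remain available for interpretation (e.g., a low s_audio with
        # a high s_visual suggests a Fake Audio-Real Visual sample).
        s_final = (s_audio + s_visual + s_av) / 3.0
        return {"audio": s_audio, "visual": s_visual,
                "av": s_av, "final": s_final}
```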