Circumventing shortcuts in audio-visual deepfake detection datasets with unsupervised learning
Authors: Stefan Smeu, Dragos-Alexandru Boldisor, Dan Oneata, Elisabeta Oneata
Published: 2024-11-29 18:58:20+00:00
AI Summary
This paper reveals a previously unknown bias in popular audio-video deepfake datasets: a leading silence in fake videos. This bias allows for near-perfect classification, and the authors propose an unsupervised learning approach, AVH-Align, which trains only on real data to mitigate this bias and improve robustness in deepfake detection.
Abstract
Good datasets are essential for developing and benchmarking any machine learning system. Their importance is even greater for safety-critical applications such as deepfake detection, the focus of this paper. Here we reveal that two of the most widely used audio-video deepfake datasets suffer from a previously unidentified spurious feature: the leading silence. Fake videos start with a very brief moment of silence, and based on this feature alone, we can separate the real and fake samples almost perfectly. As such, previous audio-only and audio-video models exploit the presence of silence in the fake videos and consequently perform worse when the leading silence is removed. To circumvent latching onto such unwanted artifacts, and possibly other unrevealed ones, we propose a shift from supervised to unsupervised learning by training models exclusively on real data. We show that by aligning self-supervised audio-video representations, we remove the risk of relying on dataset-specific biases and improve robustness in deepfake detection.
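To make the shortcut concrete: a trivial classifier that only measures how quiet the opening fraction of a second of the audio track is would, per the abstract, already separate real from fake clips almost perfectly on these datasets. Below is a minimal sketch of such a detector; it is not the authors' code, and the 200 ms window, the energy threshold, and the assumption of a pre-extracted WAV track are illustrative choices.

```python
# Minimal sketch of the leading-silence shortcut described above (not the
# authors' code). It scores a clip by the RMS energy of the first 200 ms of
# its audio track: clips that start near-silent get flagged as fake.
# The window length, threshold, and pre-extracted WAV path are assumptions.
import numpy as np
import librosa


def leading_silence_energy(audio_path: str, window_s: float = 0.2) -> float:
    """RMS energy of the first `window_s` seconds of the audio track."""
    wav, sr = librosa.load(audio_path, sr=16_000, mono=True)
    head = wav[: int(window_s * sr)]
    return float(np.sqrt(np.mean(head ** 2) + 1e-12))


def predict_fake(audio_path: str, threshold: float = 1e-3) -> bool:
    """Naive 'detector': a near-silent opening => predict fake."""
    return leading_silence_energy(audio_path) < threshold
```

A detector this shallow learns nothing about manipulation itself, which is exactly why supervised models that implicitly pick up the same cue degrade once the leading silence is trimmed.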
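The proposed remedy trains only on real videos and scores a clip by how well its audio and visual self-supervised representations align. The sketch below illustrates this general idea, not the paper's AVH-Align implementation: it assumes per-frame audio and video embeddings from pretrained encoders, fits a small projection with a frame-level contrastive loss on real clips only, and treats high residual alignment loss at test time as a sign of manipulation.

```python
# Illustrative sketch of alignment-based scoring trained on real data only.
# This is a generic stand-in, not the paper's AVH-Align implementation.
# It assumes per-frame audio and video embeddings (shape (T, D)) extracted
# beforehand with pretrained self-supervised encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AlignmentHead(nn.Module):
    def __init__(self, audio_dim: int, video_dim: int,
                 proj_dim: int = 256, temperature: float = 0.07):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, proj_dim)
        self.video_proj = nn.Linear(video_dim, proj_dim)
        self.temperature = temperature

    def alignment_loss(self, audio_feats: torch.Tensor,
                       video_feats: torch.Tensor) -> torch.Tensor:
        # Frame-level contrastive objective: each audio frame should match
        # its time-aligned video frame (the diagonal) rather than any other
        # frame in the clip; the negatives prevent a trivial collapse.
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)  # (T, P)
        v = F.normalize(self.video_proj(video_feats), dim=-1)  # (T, P)
        logits = a @ v.t() / self.temperature                  # (T, T)
        targets = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, targets)


def train_on_real(head: AlignmentHead, real_clips, epochs: int = 5,
                  lr: float = 1e-4) -> None:
    """Fit the projections on real clips only; fake data is never seen."""
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for audio_feats, video_feats in real_clips:
            loss = head.alignment_loss(audio_feats, video_feats)
            opt.zero_grad()
            loss.backward()
            opt.step()


def fakeness_score(head: AlignmentHead, audio_feats, video_feats) -> float:
    """Higher residual alignment loss suggests audio-visual mismatch."""
    with torch.no_grad():
        return head.alignment_loss(audio_feats, video_feats).item()
```

Because such a model never sees fake data during training, it cannot latch onto dataset-specific artifacts like the leading silence; any mismatch it reports at test time has to come from the audio-visual content itself.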