XMAD-Bench: Cross-Domain Multilingual Audio Deepfake Benchmark

Authors: Ioan-Paul Ciobanu, Andrei-Iulian Hiji, Nicolae-Catalin Ristea, Paul Irofti, Cristian Rusu, Radu Tudor Ionescu

Published: 2025-05-31 08:28:36+00:00

AI Summary

This paper introduces XMAD-Bench, a large-scale cross-domain multilingual audio deepfake benchmark comprising 668.8 hours of real and deepfake speech across seven languages. Experiments reveal a significant disparity between in-domain and cross-domain performance of state-of-the-art deepfake detectors, highlighting the need for more robust models.

Abstract

Recent advances in audio generation have led to an increasing number of deepfakes, making the general public more vulnerable to financial scams, identity theft, and misinformation. Audio deepfake detectors promise to alleviate this issue, with many recent studies reporting accuracy rates close to 99%. However, these methods are typically tested in an in-domain setup, where the deepfake samples from the training and test sets are produced by the same generative models. To address this, we introduce XMAD-Bench, a large-scale cross-domain multilingual audio deepfake benchmark comprising 668.8 hours of real and deepfake speech. In our novel dataset, the speakers, the generative methods, and the real audio sources are distinct across training and test splits. This leads to a challenging cross-domain evaluation setup, where audio deepfake detectors can be tested "in the wild". Our in-domain and cross-domain experiments indicate a clear disparity between the in-domain performance of deepfake detectors, which is usually as high as 100%, and the cross-domain performance of the same models, which is sometimes similar to random chance. Our benchmark highlights the need for the development of robust audio deepfake detectors, which maintain their generalization capacity across different languages, speakers, generative methods, and data sources. Our benchmark is publicly released at https://github.com/ristea/xmad-bench/.
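To make the evaluation setup concrete, here is a minimal sketch of the in-domain vs. cross-domain protocol. Synthetic embeddings and a logistic regression classifier stand in for real audio features and a deepfake detector; the feature dimensionality, the distribution shift, and all numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of in-domain vs. cross-domain deepfake evaluation.
# Synthetic embeddings simulate audio features; a mean shift between
# "generators" simulates the domain gap between train and test fakes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_split(n, fake_shift):
    """Real samples ~ N(0, I); fakes from a given generator ~ N(fake_shift, I)."""
    real = rng.normal(0.0, 1.0, size=(n, 32))
    fake = rng.normal(fake_shift, 1.0, size=(n, 32))
    X = np.vstack([real, fake])
    y = np.array([0] * n + [1] * n)  # 0 = real, 1 = deepfake
    return X, y

X_train, y_train = make_split(500, fake_shift=1.0)   # training generator
X_in, y_in = make_split(500, fake_shift=1.0)         # same generator (in-domain)
X_cross, y_cross = make_split(500, fake_shift=-0.2)  # unseen generator (cross-domain)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("in-domain accuracy:   ", accuracy_score(y_in, clf.predict(X_in)))
print("cross-domain accuracy:", accuracy_score(y_cross, clf.predict(X_cross)))
```

Under this simulated shift, the in-domain accuracy is near-perfect while the cross-domain accuracy drops toward chance, mirroring the disparity the paper reports on real detectors.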


Key findings
State-of-the-art models achieve near-perfect accuracy in the in-domain setting, but their performance drops sharply under cross-domain evaluation. Among the tested models, wav2vec 2.0 generalizes comparatively better across languages and generative methods. Overall, the results underscore the limitations of current audio deepfake detectors in real-world scenarios.
Approach
The authors created XMAD-Bench, a benchmark in which the speakers, generative methods, and real data sources differ between the training and test sets, in order to evaluate the cross-domain generalization of audio deepfake detectors. They then evaluated several state-of-the-art models (ResNet-18, ResNet-50, AST, SepTr, wav2vec 2.0) on this benchmark; a sketch of such an evaluation with wav2vec 2.0 follows below.
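As a hedged illustration of one of the evaluated detectors, the sketch below wires wav2vec 2.0 into a binary real/fake classifier via HuggingFace Transformers. The checkpoint name (facebook/wav2vec2-base), the 16 kHz dummy waveform, and the two-label head are assumptions made for the example; the paper's exact training configuration is not reproduced here.

```python
# Hedged sketch: wav2vec 2.0 as a binary real/fake audio classifier via
# HuggingFace Transformers. The backbone checkpoint is an assumption,
# not necessarily the one used in the paper.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

checkpoint = "facebook/wav2vec2-base"  # assumed backbone
extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2  # 0 = real, 1 = deepfake
)

# One dummy 3-second waveform at 16 kHz, standing in for a benchmark sample.
waveform = torch.randn(16000 * 3)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print("P(deepfake) =", logits.softmax(dim=-1)[0, 1].item())
```

In practice, the classification head would first be fine-tuned on the XMAD-Bench training split; fresh head weights produce uninformative logits.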
Datasets
XMAD-Bench (created by the authors), Common Voice, MASC, M-AILABS, AISHELL-3, VoxPopuli
Model(s)
ResNet-18, ResNet-50, AST, SepTr, wav2vec 2.0
Author countries
Romania