Post-training for Deepfake Speech Detection
Authors: Wanying Ge, Xin Wang, Xuechen Liu, Junichi Yamagishi
Published: 2025-06-26 08:34:19+00:00
AI Summary
This paper introduces a post-training approach that adapts self-supervised learning (SSL) models for deepfake speech detection. Post-trained on a large multilingual dataset of genuine speech and speech containing various artifacts, the resulting AntiDeepfake models show strong robustness and generalization to unseen deepfakes, and, after further fine-tuning, outperform existing state-of-the-art detectors.
Abstract
We introduce a post-training approach that adapts self-supervised learning (SSL) models for deepfake speech detection by bridging the gap between general pre-training and domain-specific fine-tuning. We present AntiDeepfake models, a series of post-trained models developed using a large-scale multilingual speech dataset containing over 56,000 hours of genuine speech and 18,000 hours of speech with various artifacts in over one hundred languages. Experimental results show that the post-trained models already exhibit strong robustness and generalization to unseen deepfake speech. When they are further fine-tuned on the Deepfake-Eval-2024 dataset, these models consistently surpass existing state-of-the-art detectors that do not leverage post-training. Model checkpoints and source code are available online.
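The abstract does not spell out the model architecture or training objective, but the general recipe it describes (an SSL front-end adapted on genuine vs. artifacted speech, then fine-tuned on a target dataset) can be illustrated with a minimal sketch. The following is an assumption-laden example, not the authors' implementation: it assumes an XLS-R encoder loaded via HuggingFace transformers, temporal average pooling, a linear binary head, and a cross-entropy objective; the class `SSLDeepfakeDetector`, the checkpoint name, and all hyperparameters are hypothetical choices.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class SSLDeepfakeDetector(nn.Module):
    """Hypothetical sketch: SSL encoder + binary classification head.

    The paper does not specify this architecture; XLS-R is just one
    plausible multilingual SSL front-end.
    """

    def __init__(self, ssl_name: str = "facebook/wav2vec2-xls-r-300m"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(ssl_name)
        hidden = self.encoder.config.hidden_size
        # Two logits: genuine vs. deepfake/artifacted speech.
        self.head = nn.Linear(hidden, 2)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of 16 kHz mono audio.
        feats = self.encoder(waveform).last_hidden_state  # (B, T, H)
        pooled = feats.mean(dim=1)  # temporal average pooling
        return self.head(pooled)    # (B, 2) logits


# One post-training step: update the whole stack (encoder included)
# on a genuine/artifacted batch with cross-entropy.
model = SSLDeepfakeDetector()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

batch_audio = torch.randn(2, 16000)   # stand-in for real 1 s clips
batch_labels = torch.tensor([0, 1])   # 0 = genuine, 1 = artifacted

logits = model(batch_audio)
loss = criterion(logits, batch_labels)
loss.backward()
optimizer.step()
```

Under this reading, "post-training" runs the same loop over the large multilingual corpus before any task-specific data is seen, and the later fine-tuning stage reuses it on Deepfake-Eval-2024 with the post-trained weights as the starting point.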