A SUPERB-Style Benchmark of Self-Supervised Speech Models for Audio Deepfake Detection
Authors: Hashim Ali, Nithin Sai Adupa, Surya Subramani, Hafiz Malik
Published: 2026-03-02 05:45:55+00:00
Comment: Accepted at ICASSP
AI Summary
This paper introduces Spoof-SUPERB, a new benchmark for audio deepfake detection that systematically evaluates 20 self-supervised learning (SSL) models across various architectures and pretraining objectives. The benchmark assesses performance on multiple in-domain and out-of-domain datasets and measures robustness under acoustic degradations. Results show that large-scale discriminative models like XLS-R, UniSpeech-SAT, and WavLM Large consistently achieve superior performance and resilience, benefiting from multilingual pretraining and speaker-aware objectives.
Abstract
Self-supervised learning (SSL) has transformed speech processing, with benchmarks such as SUPERB establishing fair comparisons across diverse downstream tasks. Despite its security-critical importance, audio deepfake detection has remained outside these efforts. In this work, we introduce Spoof-SUPERB, a benchmark for audio deepfake detection that systematically evaluates 20 SSL models spanning generative, discriminative, and spectrogram-based architectures. We evaluate these models on multiple in-domain and out-of-domain datasets. Our results reveal that large-scale discriminative models such as XLS-R, UniSpeech-SAT, and WavLM Large consistently outperform other models, benefiting from multilingual pretraining, speaker-aware objectives, and model scale. We further analyze the robustness of these models under acoustic degradations, showing that generative approaches degrade sharply while discriminative models remain resilient. This benchmark establishes a reproducible baseline and provides practical insights into which SSL representations are most reliable for securing speech systems against audio deepfakes.
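To make the SUPERB-style evaluation concrete, the sketch below shows one way a frozen SSL upstream can be probed for bonafide/spoof classification: all of the upstream's hidden layers are combined by a learnable weighted sum, as in the SUPERB protocol, and only a small head is trained. The WavLM Large checkpoint name, the linear head, and the mean-pooling over time are illustrative assumptions; the abstract does not specify the paper's actual downstream architecture or training recipe.

    # Minimal sketch of a SUPERB-style probe on a frozen SSL upstream.
    # Assumptions (not from the paper): WavLM Large via Hugging Face
    # transformers, mean pooling over time, a single linear head.
    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class SpoofProbe(nn.Module):
        def __init__(self, upstream_name="microsoft/wavlm-large", num_classes=2):
            super().__init__()
            # Frozen upstream: gradients flow only through the probe.
            self.upstream = AutoModel.from_pretrained(upstream_name)
            self.upstream.eval()
            for p in self.upstream.parameters():
                p.requires_grad = False
            num_layers = self.upstream.config.num_hidden_layers + 1  # + embeddings
            self.layer_weights = nn.Parameter(torch.zeros(num_layers))
            self.head = nn.Linear(self.upstream.config.hidden_size, num_classes)

        def forward(self, waveforms):  # (batch, samples) at 16 kHz
            with torch.no_grad():
                out = self.upstream(waveforms, output_hidden_states=True)
            hidden = torch.stack(out.hidden_states)             # (L, B, T, D)
            # SUPERB-style learnable weighted sum over all hidden layers.
            weights = torch.softmax(self.layer_weights, dim=0)  # (L,)
            pooled = (weights[:, None, None, None] * hidden).sum(0).mean(1)  # (B, D)
            return self.head(pooled)  # logits: bonafide vs. spoof

    probe = SpoofProbe()
    logits = probe(torch.randn(2, 16000))  # two 1-second dummy waveforms

Freezing the upstream keeps comparisons across the 20 models fair: a stronger downstream head cannot compensate for weaker representations, so differences in scores reflect the pretrained features themselves.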
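The robustness analysis mentioned in the abstract can be thought of as re-scoring perturbed inputs through the same probe. One common degradation is additive noise at a controlled signal-to-noise ratio; the helper below is a hypothetical example of such a perturbation, since the abstract does not enumerate the exact degradations used.

    # Hypothetical acoustic degradation (not the paper's documented protocol):
    # mix a noise signal into clean speech at a target SNR in dB.
    import torch

    def add_noise(clean: torch.Tensor, noise: torch.Tensor, snr_db: float) -> torch.Tensor:
        """Mix `noise` into `clean` at the requested signal-to-noise ratio."""
        noise = noise[..., : clean.shape[-1]]  # trim noise to the clean length
        clean_power = clean.pow(2).mean()
        noise_power = noise.pow(2).mean().clamp_min(1e-10)
        # Scale noise so 10*log10(clean_power / scaled_noise_power) == snr_db.
        scale = torch.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
        return clean + scale * noise

A degraded utterance, e.g. add_noise(clean, noise, snr_db=5.0), is then scored by the same frozen-upstream probe, so any drop in detection performance is attributable to the representation's sensitivity to the perturbation.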