Measuring the Robustness of Audio Deepfake Detectors

Authors: Xiang Li, Pin-Yu Chen, Wenqi Wei

Published: 2025-03-21 23:21:17+00:00

AI Summary

This research systematically evaluates the robustness of 10 audio deepfake detection models against 16 common corruptions spanning noise perturbation, audio modification, and compression. It finds that while most models are robust to noise, they are notably more vulnerable to modifications and compression, especially when neural codecs are applied, and that speech foundation models generally outperform traditional models.

Abstract

Deepfakes have become a universal and rapidly intensifying concern of generative AI across various media types such as images, audio, and videos. Among these, audio deepfakes have been of particular concern due to the ease of high-quality voice synthesis and distribution via platforms such as social media and robocalls. Consequently, detecting audio deepfakes plays a critical role in combating the growing misuse of AI-synthesized speech. However, real-world scenarios often introduce various audio corruptions, such as noise, modification, and compression, that may significantly impact detection performance. This work systematically evaluates the robustness of 10 audio deepfake detection models against 16 common corruptions, categorized into noise perturbation, audio modification, and compression. Using both traditional deep learning models and state-of-the-art foundation models, we make four unique observations. First, our findings show that while most models demonstrate strong robustness to noise, they are notably more vulnerable to modifications and compression, especially when neural codecs are applied. Second, speech foundation models generally outperform traditional models across most scenarios, likely due to their self-supervised learning paradigm and large-scale pre-training. Third, our results show that increasing model size improves robustness, albeit with diminishing returns. Fourth, we demonstrate how targeted data augmentation during training can enhance model resilience to unseen perturbations. A case study on political speech deepfakes highlights the effectiveness of foundation models in achieving high accuracy under real-world conditions. These findings emphasize the importance of developing more robust detection frameworks to ensure reliability in practical deployment settings.


Key findings
Most models showed strong robustness to noise but were vulnerable to audio modifications and compression, especially neural codecs; speech foundation models outperformed traditional models across most scenarios; increasing model size improved robustness, albeit with diminishing returns; targeted data augmentation during training enhanced resilience to unseen perturbations.
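
As a rough illustration of the augmentation finding, the sketch below applies a randomly chosen corruption to each training waveform with some probability; the helper names and the probability value are assumptions, since the paper's exact augmentation recipe is not given here.

    import random

    def augment_batch(waveforms, corruptions, p=0.5):
        # Apply one randomly chosen corruption (e.g., added noise, resampling,
        # or a codec round-trip) to each waveform with probability p.
        augmented = []
        for wave in waveforms:
            if random.random() < p:
                corrupt = random.choice(corruptions)
                wave = corrupt(wave)
            augmented.append(wave)
        return augmented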
Approach
The researchers evaluated the robustness of 10 audio deepfake detection models (both traditional deep learning models and speech foundation models) by applying 16 common audio corruptions, grouped into noise perturbation, audio modification, and compression, to the evaluation data. Performance was measured with Equal Error Rate (EER), Accuracy, and AUROC, with analysis restricted to corrupted samples whose audio quality remained acceptable (ViSQOL ≥ 3).
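
To make the evaluation setup concrete, the sketch below shows one noise corruption (white Gaussian noise at a target SNR) and a standard EER computation over detector scores; the corruption parameters and function names are illustrative assumptions, and the ViSQOL quality filter is a separate step in the paper's pipeline.

    import numpy as np
    from sklearn.metrics import roc_curve

    def add_noise_at_snr(wave, snr_db):
        # Scale white Gaussian noise so the corrupted audio reaches the requested SNR (dB).
        signal_power = np.mean(wave ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        noise = np.random.normal(0.0, np.sqrt(noise_power), size=wave.shape)
        return wave + noise

    def compute_eer(labels, scores):
        # Equal Error Rate: the operating point where the false positive rate
        # equals the false negative rate (1 - true positive rate).
        fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
        fnr = 1.0 - tpr
        idx = np.nanargmin(np.abs(fnr - fpr))
        return (fpr[idx] + fnr[idx]) / 2.0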
Datasets
WaveFake dataset (generated from LJSpeech), In-the-Wild dataset
Model(s)
LFCC-LCNN, ResNet Spec., RawNet2, AASIST, RawGAT-ST, CLAP, Whisper, Wav2Vec2, HuBERT, Wav2Vec2-BERT
Author countries
USA