Transferable Adversarial Attacks on Audio Deepfake Detection

Authors: Muhammad Umar Farooq, Awais Khan, Kutub Uddin, Khalid Mahmood Malik

Published: 2025-01-21 05:46:47+00:00

AI Summary

This paper introduces a transferable GAN-based adversarial attack framework to evaluate the robustness of state-of-the-art audio deepfake detection (ADD) systems. The framework generates high-quality adversarial attacks that preserve transcription and perceptual integrity, revealing significant vulnerabilities in existing ADD systems.

Abstract

Audio deepfakes pose significant threats, including impersonation, fraud, and reputation damage. To address these risks, audio deepfake detection (ADD) techniques have been developed, demonstrating success on benchmarks like ASVspoof2019. However, their resilience against transferable adversarial attacks remains largely unexplored. In this paper, we introduce a transferable GAN-based adversarial attack framework to evaluate the robustness of state-of-the-art (SOTA) ADD systems. By leveraging an ensemble of surrogate ADD models and a discriminator, the proposed approach generates transferable adversarial attacks that better reflect real-world scenarios. Unlike previous methods, the proposed framework incorporates a self-supervised audio model to ensure transcription and perceptual integrity, resulting in high-quality adversarial attacks. Experimental results on the benchmark dataset reveal that SOTA ADD systems exhibit significant vulnerabilities, with accuracies dropping from 98% to 26%, 92% to 54%, and 94% to 84% in white-box, gray-box, and black-box scenarios, respectively. When tested on other datasets, accuracy drops from 91% to 46% and from 94% to 67% were observed on the In-the-Wild and WaveFake datasets, respectively. These results highlight the significant vulnerabilities of existing ADD systems and emphasize the need to enhance their robustness against advanced adversarial threats to ensure security and reliability.


Key findings
State-of-the-art ADD systems show significant vulnerabilities to the proposed attacks: on ASVspoof2019, accuracy drops from 98% to 26% in the white-box, from 92% to 54% in the gray-box, and from 94% to 84% in the black-box scenario. Comparable degradation on the In-the-Wild and WaveFake datasets highlights the need for more robust ADD methods.
Approach
The authors propose a transferable GAN-based attack framework that leverages an ensemble of surrogate ADD models and a discriminator to generate adversarial examples that transfer to unseen detectors. A self-supervised audio model constrains the perturbations so that transcription and perceptual integrity are preserved. A sketch of how such a combined objective might look is given below.
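The paper does not include code, so the following is a minimal PyTorch-style sketch of one generator update combining the three ingredients the summary names: an ensemble fooling loss over surrogate ADD models, a GAN loss from a discriminator, and a perceptual constraint from a frozen self-supervised encoder. All names (`generator`, `discriminator`, `surrogates`, `ssl_encoder`), the perturbation budget, and the loss weights are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def generator_step(generator, discriminator, surrogates, ssl_encoder,
                   x_real, eps=0.01):
    """One hypothetical generator update for a transferable GAN-based attack.

    x_real:      batch of input waveforms, shape (B, T), values in [-1, 1].
    surrogates:  list of frozen surrogate ADD models returning (B, 2) logits.
    ssl_encoder: frozen self-supervised audio encoder used as a stand-in
                 perceptual/transcription constraint.
    """
    # Bounded additive perturbation and the resulting adversarial waveform.
    delta = eps * torch.tanh(generator(x_real))
    x_adv = torch.clamp(x_real + delta, -1.0, 1.0)

    # 1) Fooling loss: push every surrogate toward the "bona fide" class (index 1).
    target = torch.ones(x_real.size(0), dtype=torch.long, device=x_real.device)
    fool_loss = sum(F.cross_entropy(m(x_adv), target)
                    for m in surrogates) / len(surrogates)

    # 2) GAN loss: the discriminator should judge the adversarial audio as clean.
    d_logits = discriminator(x_adv)
    gan_loss = F.binary_cross_entropy_with_logits(
        d_logits, torch.ones_like(d_logits))

    # 3) Perceptual constraint: keep self-supervised features of the
    #    adversarial audio close to those of the original.
    percep_loss = F.l1_loss(ssl_encoder(x_adv), ssl_encoder(x_real))

    # Weights are illustrative, not the paper's values.
    return fool_loss + 0.5 * gan_loss + 1.0 * percep_loss
```

In a setup like this, averaging the fooling loss over several heterogeneous surrogates is what would encourage transfer to unseen (black-box) detectors, while the discriminator and self-supervised feature terms keep the perturbed audio natural-sounding.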
Datasets
ASVspoof2019, WaveFake, In-the-Wild
Model(s)
Res-TSSDNet, Inc-TSSDNet, RawNet2, ResNet, MS-ResNet, Wav2Vec, Transformer-based BERT encoder
Author countries
USA