What You Read Isn't What You Hear: Linguistic Sensitivity in Deepfake Speech Detection

Authors: Binh Nguyen, Shuji Shi, Ryan Ofman, Thai Le

Published: 2025-05-23 06:06:37+00:00

AI Summary

This paper investigates the linguistic sensitivity of audio anti-spoofing detectors by introducing transcript-level adversarial attacks. The study reveals that minor linguistic perturbations can significantly reduce detection accuracy, highlighting the need for more robust systems that account for linguistic variations.

Abstract

Recent advances in text-to-speech technologies have enabled realistic voice generation, fueling audio-based deepfake attacks such as fraud and impersonation. While audio anti-spoofing systems are critical for detecting such threats, prior work has predominantly focused on acoustic-level perturbations, leaving the impact of linguistic variation largely unexplored. In this paper, we investigate the linguistic sensitivity of both open-source and commercial anti-spoofing detectors by introducing transcript-level adversarial attacks. Our extensive evaluation reveals that even minor linguistic perturbations can significantly degrade detection accuracy: attack success rates surpass 60% on several open-source detector-voice pairs, and notably, one commercial detector's accuracy drops from 100% on synthetic audio to just 32%. Through a comprehensive feature attribution analysis, we identify that both linguistic complexity and model-level audio embedding similarity contribute strongly to detector vulnerability. We further demonstrate the real-world risk via a case study replicating the Brad Pitt audio deepfake scam, using transcript adversarial attacks to completely bypass commercial detectors. These results highlight the need to move beyond purely acoustic defenses and account for linguistic variation in the design of robust anti-spoofing systems. All source code will be publicly available.
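To make the idea of a transcript-level perturbation concrete, the toy sketch below applies small, meaning-preserving edits (filler insertion and contraction of common phrases) to a transcript before it is passed to a TTS system. These specific edits are illustrative assumptions only; the paper's actual attack strategies and search procedure may differ.

```python
import random

# Illustrative transcript-level perturbations: filler insertion and
# contraction of common phrases. These are placeholder edits, not the
# paper's actual attack strategies.
FILLERS = ["um", "you know", "well"]
CONTRACTIONS = {"do not": "don't", "I am": "I'm", "it is": "it's", "we will": "we'll"}

def perturb_transcript(text: str, seed: int = 0) -> str:
    """Apply small, meaning-preserving edits to a transcript before TTS synthesis."""
    rng = random.Random(seed)
    # Contract common phrases where they appear.
    for full, short in CONTRACTIONS.items():
        text = text.replace(full, short, 1)
    # Insert a filler word near the start of the sentence.
    words = text.split()
    if words:
        words.insert(1, rng.choice(FILLERS) + ",")
    return " ".join(words)

# Prints a lightly perturbed version of the sentence.
print(perturb_transcript("I am calling about the invoice, do not share this with anyone."))
```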


Key findings
Attack success rates exceeded 60% on several open-source detector-voice pairs, and one commercial detector's accuracy on synthetic audio dropped from 100% to 32%. Feature attribution analysis showed that both linguistic complexity and model-level audio embedding similarity contribute strongly to detector vulnerability. A case study replicating the Brad Pitt audio deepfake scam completely bypassed commercial detectors using transcript-level adversarial attacks.
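A rough sketch of how an attack success rate like the ones reported above could be computed is given below. The `perturb`, `synthesize`, and `detector` callables are hypothetical stand-ins for the paper's transcript attack, a TTS system, and an anti-spoofing model; the scoring convention assumed here (higher score means "spoof") may not match any particular detector's API.

```python
def attack_success_rate(transcripts, perturb, synthesize, detector, threshold=0.5):
    """ASR over synthetic clips the detector originally flags as spoofed.
    `perturb`, `synthesize`, and `detector` are hypothetical placeholders for a
    transcript-level attack, a TTS system, and an anti-spoofing model."""
    caught, evaded = 0, 0
    for text in transcripts:
        # Assumed convention: detector.score returns P(spoof); >= threshold means "flagged as fake".
        if detector.score(synthesize(text)) < threshold:
            continue  # skip clips the detector already misses without any attack
        caught += 1
        if detector.score(synthesize(perturb(text))) < threshold:
            evaded += 1  # perturbed clip now passes as bona fide
    return evaded / caught if caught else 0.0
```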
Approach
The researchers introduce transcript-level adversarial attacks by subtly perturbing transcripts before TTS synthesis. They evaluate the impact on detection accuracy using several open-source and commercial anti-spoofing detectors and perform feature attribution analysis to understand detector vulnerabilities.
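The paper's exact attribution method is not reproduced here; as a minimal stand-in, one could fit a simple linear model that predicts attack success from per-sample features (e.g., linguistic-complexity scores and audio embedding similarity, assumed to be extracted beforehand) and inspect the coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def attribute_vulnerability(features: np.ndarray, attack_succeeded: np.ndarray, names: list[str]) -> dict:
    """features: (n_samples, n_features) array, e.g. transcript length, lexical and
    syntactic complexity scores, and cosine similarity between the original and
    perturbed audio embeddings (placeholder features, not necessarily the paper's).
    attack_succeeded: binary labels indicating whether the attack bypassed the detector."""
    X = StandardScaler().fit_transform(features)
    model = LogisticRegression(max_iter=1000).fit(X, attack_succeeded)
    # Larger |coefficient| => feature more strongly associated with a successful attack.
    return dict(zip(names, model.coef_[0]))
```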
Datasets
VoiceWukong dataset (based on VCTK dataset)
Model(s)
AASIST-2, CLAD, RawNet-2 (open-source); API-A, API-B (commercial); Kokoro TTS, Coqui TTS, F5 TTS, OpenAI TTS
Author countries
Vietnam, USA