Enkidu: Universal Frequential Perturbation for Real-Time Audio Privacy Protection against Voice Deepfakes

Authors: Zhou Feng, Jiahao Chen, Chunyi Zhou, Yuwen Pu, Qingming Li, Tianyu Du, Shouling Ji

Published: 2025-07-17 09:12:36+00:00

AI Summary

Enkidu is a novel user-oriented audio privacy framework that uses universal frequential perturbations (UFPs) generated via black-box knowledge and few-shot training to protect against voice deepfakes. These UFPs enable real-time, lightweight protection with strong generalization across variable-length audio while preserving audio quality and achieving significantly higher processing efficiency than existing countermeasures.

Abstract

The rapid advancement of voice deepfake technologies has raised serious concerns about user audio privacy, as attackers increasingly exploit publicly available voice data to generate convincing fake audio for malicious purposes such as identity theft, financial fraud, and misinformation campaigns. While existing defense methods offer partial protection, they face critical limitations, including weak adaptability to unseen user data, poor scalability to long audio, rigid reliance on white-box knowledge, and high computational and temporal costs during the encryption process. To address these challenges and defend against personalized voice deepfake threats, we propose Enkidu, a novel user-oriented privacy-preserving framework that leverages universal frequential perturbations generated through black-box knowledge and few-shot training on a small amount of user data. These highly malleable frequency-domain noise patches enable real-time, lightweight protection with strong generalization across variable-length audio and robust resistance to voice deepfake attacks, all while preserving perceptual quality and speech intelligibility. Notably, Enkidu achieves 50 to 200 times higher memory efficiency (as low as 0.004 GB) and 3 to 7,000 times higher runtime efficiency (real-time coefficient as low as 0.004) than six state-of-the-art countermeasures. Extensive experiments across six mainstream text-to-speech models and five cutting-edge automatic speaker verification models demonstrate the effectiveness, transferability, and practicality of Enkidu in defending against both vanilla and adaptive voice deepfake attacks.
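
The black-box, few-shot optimization described in the abstract can be pictured as a query-only search over a fixed-size frequency-domain patch. The sketch below is illustrative only: the function names (`train_ufp_blackbox`, `embed_fn`, `apply_fn`), the two-point gradient estimator, and all hyperparameters are assumptions for exposition, not the paper's actual algorithm.

```python
import torch

def train_ufp_blackbox(embed_fn, apply_fn, user_wavs, ufp_shape,
                       steps=200, sigma=0.01, lr=0.05, eps=0.5):
    """Illustrative few-shot, query-only optimization of a universal
    frequential perturbation (UFP).

    embed_fn : waveform -> speaker embedding (treated as a black box)
    apply_fn : (waveform, ufp) -> protected waveform (e.g. frequency-domain tiling)
    user_wavs: a handful of the user's own recordings (the few-shot set)
    """
    ufp = torch.zeros(ufp_shape)

    with torch.no_grad():
        refs = [embed_fn(w) for w in user_wavs]  # clean enrollment embeddings

        def objective(patch):
            # Mean similarity between protected audio and the clean identity;
            # the lower this is, the harder the voice is to clone or verify.
            sims = [torch.cosine_similarity(embed_fn(apply_fn(w, patch)), r, dim=-1)
                    for w, r in zip(user_wavs, refs)]
            return torch.stack(sims).mean()

        for _ in range(steps):
            noise = torch.randn_like(ufp)
            # Two-point gradient estimate built purely from black-box loss queries
            grad_est = (objective(ufp + sigma * noise)
                        - objective(ufp - sigma * noise)) / (2 * sigma) * noise
            ufp = (ufp - lr * grad_est).clamp(-eps, eps)  # keep the patch small
    return ufp
```

Because the patch is optimized once per user rather than per utterance, the expensive query loop happens offline, which is consistent with the real-time protection costs reported in the abstract.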


Key findings
Enkidu achieves high privacy protection rates (SPR and DPR) across various ASV and TTS models while maintaining excellent audio quality (MOS) and speech intelligibility (STOI). The method is highly efficient, enabling real-time deployment with significantly lower memory and runtime costs than existing countermeasures. It also remains robust against adaptive attacks.
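
For intuition, a protection-rate-style metric can be approximated by checking whether an ASV encoder still matches protected speech to the speaker's enrolled identity. The helper below is a hypothetical sketch (the paper's exact SPR/DPR definitions, thresholds, and models are not reproduced here), assuming `embed_fn` wraps any speaker-embedding extractor such as ECAPA-TDNN.

```python
import torch

def protection_rate(embed_fn, clean_refs, protected_utts, threshold=0.25):
    """Fraction of protected utterances whose speaker embedding no longer
    verifies against the enrollment centroid (higher = better privacy)."""
    centroid = torch.stack([embed_fn(w) for w in clean_refs]).mean(dim=0)
    protected = 0
    for wav in protected_utts:
        sim = torch.cosine_similarity(embed_fn(wav), centroid, dim=-1)
        if sim.item() < threshold:  # verification fails -> identity protected
            protected += 1
    return protected / len(protected_utts)
```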
Approach
Enkidu generates universal frequential perturbations (UFPs) optimized through few-shot training on a small amount of user data. These UFPs are applied to audio in the frequency domain via a lightweight tiling module, disrupting speaker embeddings extracted by deepfake systems while maintaining perceptual quality.
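
A minimal sketch of how such a tiling module could apply a fixed-size frequency-domain patch to audio of arbitrary length follows; the STFT parameters, magnitude-domain addition, and the function name `apply_ufp` are assumptions for illustration, not the paper's implementation.

```python
import torch

def apply_ufp(waveform, ufp, n_fft=512, hop_length=128):
    """Tile a universal frequential perturbation (UFP) across the STFT of a
    variable-length waveform and resynthesize the protected audio.
    ufp is expected to have shape (n_fft // 2 + 1, patch_frames)."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop_length,
                      window=window, return_complex=True)   # (freq, frames)
    n_frames = spec.shape[-1]
    patch_frames = ufp.shape[-1]
    reps = -(-n_frames // patch_frames)                      # ceil division
    tiled = ufp.repeat(1, reps)[:, :n_frames]                # cover every frame
    # Perturb the magnitude spectrogram; keep the original phase
    magnitude = (spec.abs() + tiled).clamp(min=0.0)
    perturbed = torch.polar(magnitude, spec.angle())
    return torch.istft(perturbed, n_fft=n_fft, hop_length=hop_length,
                       window=window, length=waveform.shape[-1])
```

In this view, per-utterance protection reduces to one STFT/iSTFT pair plus a tiled addition, which is what would allow the perturbation to generalize to variable-length audio at low runtime cost; the patch length in frames is a free design choice in this sketch.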
Datasets
LibriSpeech (English), CommonVoice (French), AISHELL (Chinese)
Model(s)
Five cutting-edge Automatic Speaker Verification (ASV) models (ECAPA-TDNN, X-Vector, ResNet, ERes2Net, CAM++) and six mainstream Text-to-Speech (TTS) models (Speedy-Speech, FastPitch, YourTTS, Glow-TTS, Tacotron2-DDC, Tacotron2-DCA)
Author countries
China