Mitigating Unauthorized Speech Synthesis for Voice Protection

Authors: Zhisheng Zhang, Qianyi Yang, Derui Wang, Pengyang Huang, Yuxin Cao, Kai Ye, Jie Hao

Published: 2024-10-28 05:16:37+00:00

AI Summary

This paper proposes Pivotal Objective Perturbation (POP), a proactive audio protection technique that adds imperceptible noise to speech samples to prevent high-quality deepfake audio generation. Extensive experiments demonstrate POP's effectiveness and transferability across various state-of-the-art text-to-speech (TTS) models, with protected samples significantly increasing speech unclarity scores.

Abstract

In recent years, it has become possible to replicate a speaker's voice almost perfectly from just a few speech samples, and malicious voice exploitation (e.g., telecom fraud for illegal financial gain) has brought serious hazards to our daily lives. It is therefore crucial to protect publicly accessible speech data that contains sensitive information, such as personal voiceprints. Most previous defense methods focus on spoofing speaker verification systems with respect to timbre similarity, but the synthesized deepfake speech remains of high quality. In response to these rising hazards, we devise an effective, transferable, and robust proactive protection technique named Pivotal Objective Perturbation (POP), which applies imperceptible error-minimizing noise to original speech samples so that they cannot be effectively learned by text-to-speech (TTS) synthesis models and high-quality deepfake speech cannot be generated. We conduct extensive experiments on state-of-the-art (SOTA) TTS models, using objective and subjective metrics to comprehensively evaluate the proposed method. The experimental results demonstrate outstanding effectiveness and transferability across various models. Compared to the speech unclarity score of 21.94% from voice synthesizers trained on unprotected samples, POP-protected samples significantly increase it to 127.31%. Moreover, our method is robust against noise reduction and data augmentation techniques, thereby greatly reducing potential hazards.


Key findings
POP significantly increased speech unclarity scores compared to unprotected samples. The method showed strong transferability across different TTS models and robustness against noise reduction and data augmentation techniques. Subjective evaluations confirmed that deepfake speech generated from POP-protected audio is of low quality.
Approach
POP adds imperceptible, error-minimizing noise to original speech samples. The noise is strategically designed so that TTS models cannot learn effectively from the protected samples, thus hindering the generation of high-quality deepfakes. It is optimized against a pivotal objective function focused on the model's reconstruction loss, as sketched below.
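A minimal sketch of this error-minimizing optimization, assuming a PyTorch surrogate TTS model that exposes a reconstruction loss. Conceptually, the perturbation delta is chosen to minimize the pivotal (reconstruction) objective within an L-infinity budget, making the sample appear "already learned" so training gains little from it. The model interface, epsilon, step size, and iteration count here are illustrative assumptions, not values from the paper:

```python
import torch

def pop_perturb(waveform, text, tts_model, reconstruction_loss,
                epsilon=0.002, step_size=0.0005, n_steps=100):
    """Craft a POP-style error-minimizing perturbation (sketch).

    waveform: (1, T) clean speech tensor with values in [-1, 1]
    text: conditioning input the TTS model trains on
    tts_model / reconstruction_loss: hypothetical stand-ins for the
    surrogate synthesizer and its pivotal (reconstruction) objective
    """
    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(n_steps):
        # Pivotal objective: the reconstruction loss the TTS model
        # would minimize if trained on this (perturbed) sample.
        loss = reconstruction_loss(tts_model(waveform + delta, text),
                                   waveform)
        loss.backward()
        with torch.no_grad():
            # Error-MINIMIZING step (opposite of adversarial PGD):
            # descend the loss so the sample carries no training signal.
            delta -= step_size * delta.grad.sign()
            # Keep the noise imperceptible via the L-infinity budget,
            # and the perturbed waveform within valid amplitude range.
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((waveform + delta).clamp(-1.0, 1.0) - waveform)
        delta.grad.zero_()
    return (waveform + delta).detach()
```

The sign-based descent step mirrors standard PGD-style crafting of unlearnable examples; POP's distinguishing choice, per the paper, is which loss term to target, namely the reconstruction objective pivotal to TTS training, rather than a speaker-verification similarity score.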
Datasets
LibriTTS and CMU ARCTIC datasets.
Model(s)
GlowTTS, VITS, and MB-iSTFT-VITS.
Author countries
China, Australia, Singapore, Hong Kong