Robust AI-Synthesized Speech Detection Using Feature Decomposition Learning and Synthesizer Feature Augmentation

Authors: Kuiyuan Zhang, Zhongyun Hua, Yushu Zhang, Yifang Guo, Tao Xiang

Published: 2024-11-14 03:57:21+00:00

AI Summary

This paper presents a robust deepfake speech detection method using dual-stream feature decomposition learning to separate synthesizer-independent content features from synthesizer-specific features. A synthesizer feature augmentation strategy further enhances robustness by blending and shuffling features, improving performance across various synthesizers and datasets.

Abstract

AI-synthesized speech, also known as deepfake speech, has recently raised significant concerns due to the rapid advancement of speech synthesis and speech conversion techniques. Previous works often rely on distinguishing synthesizer artifacts to identify deepfake speech. However, excessive reliance on these specific synthesizer artifacts may result in unsatisfactory performance when addressing speech signals created by unseen synthesizers. In this paper, we propose a robust deepfake speech detection method that employs feature decomposition to learn synthesizer-independent content features as a complement for detection. Specifically, we propose a dual-stream feature decomposition learning strategy that decomposes the learned speech representation using a synthesizer stream and a content stream. The synthesizer stream specializes in learning synthesizer features through supervised training with synthesizer labels. Meanwhile, the content stream focuses on learning synthesizer-independent content features, enabled by a pseudo-labeling-based supervised learning method. This method randomly transforms speech to generate speed and compression labels for training. Additionally, we employ an adversarial learning technique to reduce the synthesizer-related components in the content stream. The final classification is determined by concatenating the synthesizer and content features. To enhance the model's robustness to different synthesizer characteristics, we further propose a synthesizer feature augmentation strategy that randomly blends the characteristic styles within real and fake audio features and randomly shuffles the synthesizer features with the content features. This strategy effectively enhances the feature diversity and simulates more feature combinations.
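The two augmentation operations in the abstract (blending characteristic styles within a batch, and shuffling synthesizer features against content features) can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: it assumes features are `(batch, dim)` arrays, and the style-mixing of per-sample mean/std statistics is an illustrative stand-in for the paper's blending operation. All function names and the `alpha` parameter are hypothetical.

```python
import numpy as np

def blend_styles(feats, alpha=0.5, rng=None):
    """Blend characteristic styles within a batch by mixing each sample's
    feature statistics (mean/std) with those of a randomly permuted batch.
    Illustrative only; the paper's exact blending may differ."""
    rng = rng or np.random.default_rng()
    mu = feats.mean(axis=1, keepdims=True)
    sigma = feats.std(axis=1, keepdims=True) + 1e-6
    normed = (feats - mu) / sigma
    perm = rng.permutation(len(feats))
    lam = rng.uniform(0, alpha)  # blending strength for this batch
    mu_mix = lam * mu[perm] + (1 - lam) * mu
    sigma_mix = lam * sigma[perm] + (1 - lam) * sigma
    return normed * sigma_mix + mu_mix

def shuffle_pairs(syn_feats, con_feats, rng=None):
    """Shuffle synthesizer features across the batch while keeping content
    features fixed, so the detector sees new (synthesizer, content) pairs."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(syn_feats))
    return syn_feats[perm], con_feats
```

Both operations only recombine features already present in the batch, which is why they enlarge the set of feature combinations without needing extra data.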


Key findings
The proposed method achieves state-of-the-art performance across various evaluation scenarios including cross-method, cross-dataset, and cross-language evaluations. The use of feature decomposition and augmentation significantly improves robustness to unseen synthesizers. Ablation studies confirm the contribution of each component of the proposed approach.
Approach
The approach uses a dual-stream network: one stream learns synthesizer-specific features using supervised learning, while the other learns synthesizer-independent content features via pseudo-labeling (speed and compression transformations) and adversarial learning. The final classification combines both feature streams.
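The pseudo-labeling step above can be sketched as follows: randomly transform a waveform, and use the transformation indices as supervision targets for the content stream. This is a rough NumPy sketch under stated assumptions; the speed factors, compression levels, naive linear resampling, and quantization-based "compression" are all illustrative placeholders for the paper's actual transform pipeline.

```python
import numpy as np

SPEED_FACTORS = [0.9, 1.0, 1.1]   # illustrative speed choices
COMPRESSION_LEVELS = [0, 1, 2]    # illustrative severities: none / mild / strong

def make_pseudo_labeled_sample(wave, rng=None):
    """Randomly transform a waveform in [-1, 1] and return
    (transformed_audio, speed_label, compression_label)."""
    rng = rng or np.random.default_rng()
    s_idx = int(rng.integers(len(SPEED_FACTORS)))
    c_idx = int(rng.integers(len(COMPRESSION_LEVELS)))
    # Naive speed change via linear resampling (stand-in for a real
    # time-stretch or resampling routine).
    factor = SPEED_FACTORS[s_idx]
    n_out = int(len(wave) / factor)
    idx = np.linspace(0, len(wave) - 1, n_out)
    sped = np.interp(idx, np.arange(len(wave)), wave)
    # Crude "compression": quantize to fewer bits as severity grows
    # (stand-in for an actual codec such as MP3 re-encoding).
    bits = [16, 8, 4][c_idx]
    q = 2 ** (bits - 1)
    compressed = np.round(sped * q) / q
    return compressed, s_idx, c_idx
```

Because the labels come from the transformation itself, no manual annotation is needed; the content stream is trained to predict them, while adversarial learning suppresses synthesizer cues in that stream.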
Datasets
WaveFake, LibriSeVoc, DECRO (English and Chinese subsets)
Model(s)
ResNet18 (modified) with dual-stream architecture, including convolutional blocks, average pooling, and linear classifiers. Comparison with other models like AASIST, RawNet2-Voc, SFAT-Net, ASDG, LCNN, RawNet2, RawGAT, Wav2Vec2, WavLM, Wav2CLIP, and AudioCLIP.
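The final classification step (concatenating the two streams' features and applying a linear classifier) can be sketched as below. This is a shape-level NumPy illustration only: the encoders stand in for the modified ResNet18 streams, and the function name and arguments are hypothetical.

```python
import numpy as np

def dual_stream_forward(x, enc_syn, enc_con, w, b):
    """Encode the input with the synthesizer stream and the content stream,
    concatenate the two feature vectors, then apply a linear classifier."""
    f_syn = enc_syn(x)                              # (batch, d_syn)
    f_con = enc_con(x)                              # (batch, d_con)
    feats = np.concatenate([f_syn, f_con], axis=-1)  # (batch, d_syn + d_con)
    return feats @ w + b                            # (batch, n_classes) logits
```

In the paper both feature sets contribute to the real/fake decision, so the classifier weight matrix `w` spans the concatenated dimension.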
Author countries
China