SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection
Authors: Yi Zhu, Surya Koppisetti, Trang Tran, Gaurav Bharaj
Published: 2024-07-26 05:23:41+00:00
AI Summary
This paper introduces SLIM, an audio deepfake detection model that exploits the style-linguistics mismatch in fake speech. SLIM first learns style-linguistics dependencies via self-supervised pretraining on real speech only, then combines the learned features with standard pretrained acoustic features to classify real and fake audio, outperforming benchmark methods on out-of-domain datasets.
Abstract
Audio deepfake detection (ADD) is crucial to combat the misuse of speech synthesized from generative AI models. Existing ADD models suffer from generalization issues, with a large performance discrepancy between in-domain and out-of-domain data. Moreover, the black-box nature of existing models limits their use in real-world scenarios, where explanations are required for model decisions. To alleviate these issues, we introduce a new ADD model that explicitly uses the Style-LInguistics Mismatch (SLIM) in fake speech to separate fake from real speech. SLIM first employs self-supervised pretraining on only real samples to learn the style-linguistics dependency in the real class. The learned features are then used in combination with standard pretrained acoustic features (e.g., Wav2vec) to learn a classifier on the real and fake classes. When the feature encoders are frozen, SLIM outperforms benchmark methods on out-of-domain datasets while achieving competitive results on in-domain data. The features learned by SLIM allow us to quantify the (mis)match between style and linguistic content in a sample, hence facilitating an explanation of the model's decision.
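To make the two-stage recipe concrete, below is a minimal PyTorch sketch of a SLIM-style pipeline. It assumes utterance-level style, linguistic, and acoustic embeddings come from frozen pretrained encoders (stand-ins here); the projection-head architecture, the cosine-based dependency objective, and all dimensions are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a SLIM-style two-stage pipeline.
# Stage 1 trains dependency heads on REAL speech only; stage 2 freezes them
# and trains a real/fake classifier on top. Shapes and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyHeads(nn.Module):
    """Stage 1: projection heads that align style and linguistic embeddings.

    Trained on real speech only, so the heads capture the style-linguistics
    dependency of genuine audio. Fake speech, whose style and content are
    generated rather than jointly produced, should align poorly under them.
    """
    def __init__(self, style_dim=768, ling_dim=768, proj_dim=256):
        super().__init__()
        self.style_proj = nn.Sequential(
            nn.Linear(style_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.ling_proj = nn.Sequential(
            nn.Linear(ling_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))

    def forward(self, style_emb, ling_emb):
        return self.style_proj(style_emb), self.ling_proj(ling_emb)

def dependency_loss(z_style, z_ling):
    # One simple self-supervised objective: pull the style and linguistic
    # projections of the SAME real utterance together (the paper's actual
    # pretraining loss may differ).
    return 1.0 - F.cosine_similarity(z_style, z_ling, dim=-1).mean()

def mismatch_score(z_style, z_ling):
    # Higher score = style and linguistic content agree less, the quantity
    # that supports explaining the model decision.
    return 1.0 - F.cosine_similarity(z_style, z_ling, dim=-1)

class SlimClassifier(nn.Module):
    """Stage 2: classify real vs. fake from the frozen dependency features
    concatenated with standard pretrained acoustic features (e.g., Wav2vec)."""
    def __init__(self, proj_dim=256, acoustic_dim=768):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * proj_dim + acoustic_dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, z_style, z_ling, acoustic_emb):
        return self.head(torch.cat([z_style, z_ling, acoustic_emb], dim=-1))

if __name__ == "__main__":
    # Stand-ins for utterance-level embeddings from frozen pretrained encoders.
    style_emb, ling_emb, acoustic_emb = (torch.randn(4, 768) for _ in range(3))

    heads = DependencyHeads()
    z_style, z_ling = heads(style_emb, ling_emb)
    print("stage-1 loss on a real batch:", dependency_loss(z_style, z_ling).item())
    print("per-sample mismatch scores:", mismatch_score(z_style, z_ling))

    clf = SlimClassifier()
    logits = clf(z_style.detach(), z_ling.detach(), acoustic_emb)
    print("real/fake logits shape:", logits.shape)  # (4, 2)
```

The key design point this sketch illustrates is that the mismatch score is a byproduct of the stage-1 features, so the same representation both drives classification and provides a per-sample explanation.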