SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection

Authors: Yi Zhu, Surya Koppisetti, Trang Tran, Gaurav Bharaj

Published: 2024-07-26 05:23:41+00:00

AI Summary

This paper introduces SLIM, a novel audio deepfake detection model that leverages the style-linguistics mismatch in fake speech. SLIM uses self-supervised pretraining on real speech only to learn style-linguistics dependencies, then combines the learned features with standard pretrained acoustic features to classify real and fake audio, outperforming benchmark methods on out-of-domain datasets.

Abstract

Audio deepfake detection (ADD) is crucial to combat the misuse of speech synthesized from generative AI models. Existing ADD models suffer from generalization issues, with a large performance discrepancy between in-domain and out-of-domain data. Moreover, the black-box nature of existing models limits their use in real-world scenarios, where explanations are required for model decisions. To alleviate these issues, we introduce a new ADD model that explicitly uses the StyleLInguistics Mismatch (SLIM) in fake speech to separate it from real speech. SLIM first employs self-supervised pretraining on only real samples to learn the style-linguistics dependency in the real class. The learned features are then used in combination with standard pretrained acoustic features (e.g., Wav2vec) to learn a classifier on the real and fake classes. When the feature encoders are frozen, SLIM outperforms benchmark methods on out-of-domain datasets while achieving competitive results on in-domain data. The features learned by SLIM allow us to quantify the (mis)match between style and linguistic content in a sample, hence facilitating an explanation of the model decision.
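The quantifiable (mis)match mentioned at the end of the abstract can be illustrated with a short, hedged sketch: assuming the Stage-1 encoder yields one style embedding and one linguistic embedding per utterance, a simple mismatch score is their cosine distance. This is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Illustrative style-linguistics mismatch score (an assumption, not the
# paper's exact metric). A high score (low cosine similarity) suggests the
# speaking style does not match the linguistic content, which SLIM
# associates with fake speech.
import torch
import torch.nn.functional as F

def style_linguistics_mismatch(z_style: torch.Tensor, z_ling: torch.Tensor) -> torch.Tensor:
    """Per-utterance mismatch score in [0, 2] from (B, D) embeddings."""
    return 1.0 - F.cosine_similarity(z_style, z_ling, dim=-1)

# Dummy embeddings standing in for Stage-1 outputs.
scores = style_linguistics_mismatch(torch.randn(4, 256), torch.randn(4, 256))
```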


Key findings
SLIM outperforms state-of-the-art methods on out-of-domain datasets (In-the-wild, MLAAD-EN) without finetuning the frontend, while remaining competitive on in-domain datasets. The style-linguistics mismatch proves to be a useful cue for generalized audio deepfake detection, and SLIM's design facilitates interpretation of model decisions.
Approach
SLIM uses a two-stage approach. Stage 1 employs self-supervised learning on real audio only to learn the style-linguistics dependency of genuine speech. Stage 2 combines these learned features with pretrained acoustic features in a supervised classifier that distinguishes real from fake audio.
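A minimal sketch of the two-stage recipe is given below, assuming PyTorch and dummy tensors in place of the frozen Wav2vec-XLSR frontends (SER-finetuned for style, ASR-finetuned for linguistics). The agreement loss, projection layers, and dimensions are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of SLIM's two-stage training; frontends are assumed frozen
# and replaced by random tensors of shape (batch, time, dim).
import torch
import torch.nn as nn
import torch.nn.functional as F


class StyleLinguisticsCompressor(nn.Module):
    """Stage-1 module: compresses style and linguistic features and is trained
    self-supervised, on real speech only, so that the two views agree."""

    def __init__(self, in_dim=1024, out_dim=256):
        super().__init__()
        self.style_proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                        nn.Linear(out_dim, out_dim))
        self.ling_proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                       nn.Linear(out_dim, out_dim))

    def forward(self, style_feat, ling_feat):
        # Mean-pool over time, then project each view.
        z_style = self.style_proj(style_feat.mean(dim=1))
        z_ling = self.ling_proj(ling_feat.mean(dim=1))
        return z_style, z_ling


def dependency_loss(z_style, z_ling):
    # One possible self-supervised objective (an assumption, not the paper's
    # exact loss): maximize cosine agreement between the two views.
    return 1.0 - F.cosine_similarity(z_style, z_ling, dim=-1).mean()


# --- Stage 1: self-supervised pretraining on real speech only -----------
compressor = StyleLinguisticsCompressor()
opt = torch.optim.Adam(compressor.parameters(), lr=1e-4)

style_feat = torch.randn(8, 100, 1024)  # stand-in for SER frontend output
ling_feat = torch.randn(8, 100, 1024)   # stand-in for ASR frontend output

opt.zero_grad()
z_s, z_l = compressor(style_feat, ling_feat)
dependency_loss(z_s, z_l).backward()
opt.step()


# --- Stage 2: supervised real-vs-fake classification --------------------
class Classifier(nn.Module):
    def __init__(self, feat_dim=256 * 2 + 1024, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))

    def forward(self, z_style, z_ling, acoustic_feat):
        # Concatenate Stage-1 features with pooled acoustic features.
        x = torch.cat([z_style, z_ling, acoustic_feat.mean(dim=1)], dim=-1)
        return self.mlp(x)


clf = Classifier()
acoustic_feat = torch.randn(8, 100, 1024)   # e.g. frozen Wav2vec features
labels = torch.randint(0, 2, (8,))          # 0 = real, 1 = fake
with torch.no_grad():                       # Stage-1 encoder stays frozen
    z_s, z_l = compressor(style_feat, ling_feat)
clf_loss = F.cross_entropy(clf(z_s, z_l, acoustic_feat), labels)
```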
Datasets
Common Voice, RAVDESS, ASVspoof2019, ASVspoof2021, In-the-wild, MLAAD-EN
Model(s)
Wav2vec-XLSR (fine-tuned for speech emotion recognition and for automatic speech recognition), ECAPA-TDNN, and a custom classifier built from attentive statistics pooling (ASP) and MLP layers.
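For reference, the classifier back-end named above can be sketched as attentive statistics pooling followed by an MLP head. The sketch below assumes PyTorch; the layer widths are illustrative, not the paper's exact configuration.

```python
# Minimal attentive statistics pooling (ASP) + MLP head, with assumed sizes.
import torch
import torch.nn as nn


class AttentiveStatsPooling(nn.Module):
    """Pools a (B, T, D) sequence into (B, 2D) by concatenating an
    attention-weighted mean and standard deviation over time."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                  nn.Linear(dim, 1))

    def forward(self, x):                       # x: (B, T, D)
        w = torch.softmax(self.attn(x), dim=1)  # (B, T, 1) attention weights
        mean = (w * x).sum(dim=1)
        var = (w * (x - mean.unsqueeze(1)) ** 2).sum(dim=1)
        std = torch.sqrt(var.clamp(min=1e-8))
        return torch.cat([mean, std], dim=-1)   # (B, 2D)


head = nn.Sequential(AttentiveStatsPooling(1024),
                     nn.Linear(2048, 256), nn.ReLU(),
                     nn.Linear(256, 2))         # real vs. fake logits
logits = head(torch.randn(4, 100, 1024))        # dummy frontend features
```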
Author countries
UNKNOWN