Towards generalisable and calibrated synthetic speech detection with self-supervised representations

Authors: Octavian Pascu, Adriana Stan, Dan Oneata, Elisabeta Oneata, Horia Cucu

Published: 2023-09-11 11:11:28+00:00

AI Summary

This paper proposes using pretrained self-supervised representations (specifically, wav2vec 2.0 variants) with a simple logistic regression classifier for audio deepfake detection. This approach significantly improves generalisation and calibration compared to existing methods, reducing the equal error rate from 30.9% to 8.8% on a benchmark of eight deepfake datasets.

Abstract

Generalisation -- the ability of a model to perform well on unseen data -- is crucial for building reliable deepfake detectors. However, recent studies have shown that the current audio deepfake models fall short of this desideratum. In this work we investigate the potential of pretrained self-supervised representations in building general and calibrated audio deepfake detection models. We show that large frozen representations coupled with a simple logistic regression classifier are extremely effective in achieving strong generalisation capabilities: compared to the RawNet2 model, this approach reduces the equal error rate from 30.9% to 8.8% on a benchmark of eight deepfake datasets, while learning less than 2k parameters. Moreover, the proposed method produces considerably more reliable predictions compared to previous approaches making it more suitable for realistic use.


Key findings
The proposed method significantly outperforms RawNet2 and several state-of-the-art methods in terms of equal error rate (EER) on a benchmark of eight diverse datasets. It also shows improved calibration and more reliable confidence estimates than previous approaches. The largest wav2vec 2.0 model, paired with a weakly regularised logistic regression, yielded the best performance.
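For reference, the equal error rate used throughout is the operating point at which the false acceptance rate equals the false rejection rate. A minimal sketch of computing it from detector scores (toy data and a simple threshold sweep, not the paper's evaluation code):

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER from detection scores (higher = more likely fake)
    and binary labels (1 = fake, 0 = real)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    eer, best_gap = 1.0, np.inf
    # Sweep a threshold over every distinct score value.
    for t in np.unique(scores):
        pred = scores >= t
        far = np.mean(pred[labels == 0])    # false acceptance rate
        frr = np.mean(~pred[labels == 1])   # false rejection rate
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Toy example: perfectly separable scores give an EER of 0.
scores = [0.1, 0.2, 0.3, 0.8, 0.9, 0.95]
labels = [0, 0, 0, 1, 1, 1]
print(equal_error_rate(scores, labels))  # -> 0.0
```

In practice the threshold sweep is done over a finer grid (or via the ROC curve), but the idea is the same: report the error rate where the two error types balance.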
Approach
The authors extract features using pre-trained self-supervised models (wav2vec 2.0 variants), freezing their weights. A simple logistic regression classifier is then trained on these frozen representations to distinguish between real and fake audio. The classifier's output probabilities are used for uncertainty estimation.
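The pipeline above can be sketched as follows. This is an illustrative sketch, not the authors' code: the frozen self-supervised embeddings are stood in for by synthetic vectors (extracting real wav2vec 2.0 features would require the pretrained checkpoints), and mean pooling over frames is one plausible way to obtain a fixed-size utterance vector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for frozen self-supervised features: in the paper these would be
# frame-level wav2vec 2.0 / WavLM embeddings, with backbone weights frozen.
def synthetic_frozen_embeddings(n_utts, n_frames=50, dim=1024, shift=0.0):
    return rng.normal(shift, 1.0, size=(n_utts, n_frames, dim))

# Mean-pool frame embeddings into one fixed-size vector per utterance.
def pool(frames):
    return frames.mean(axis=1)

X_real = pool(synthetic_frozen_embeddings(100, shift=0.0))
X_fake = pool(synthetic_frozen_embeddings(100, shift=0.3))
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(100), np.ones(100)])

# A lightly regularised logistic regression head: with a 1024-dim input it
# learns only ~1k parameters, consistent with the "<2k parameters" claim.
clf = LogisticRegression(C=10.0, max_iter=1000).fit(X, y)

# The predicted probabilities double as confidence scores for
# uncertainty / calibration analysis.
probs = clf.predict_proba(X)[:, 1]
print(clf.score(X, y))
```

Only the linear head is trained; the backbone stays fixed, which is what keeps the learned parameter count so small.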
Datasets
ASVspoof'19, In-the-Wild (ITW), TIMIT-TTS (TIM), FakeOrReal (FoR), PartialSpoof (PS), ODSS, MLAAD. The authors also used an augmented version of TIMIT-TTS (TIM*).
Model(s)
Various self-supervised speech models (wav2vec 2.0 XLS-R and WavLM variants of different sizes and pretraining data), Logistic Regression, Multilayer Perceptron (MLP), Self-Attention Layer (SAL).
Author countries
Romania