Does Audio Deepfake Detection Generalize?

Authors: Nicolas M. Müller, Pavel Czempin, Franziska Dieckmann, Adam Froghyar, Konstantin Böttinger

Published: 2022-03-30 12:48:22+00:00

AI Summary

This paper systematically re-implements and uniformly evaluates twelve audio deepfake detection architectures from prior work, identifying the factors that drive performance, such as the choice of feature extraction (cqtspec and logspec outperform melspec). It also introduces a new 'in-the-wild' dataset to assess generalization, revealing significantly degraded performance on real-world data and exposing limitations of current approaches.

Abstract

Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research. While researchers have presented various techniques for detecting audio spoofs, it is often unclear exactly why these architectures are successful: Preprocessing steps, hyperparameter settings, and the degree of fine-tuning are not consistent across related work. Which factors contribute to success, and which are accidental? In this work, we address this problem: We systematize audio spoofing detection by re-implementing and uniformly evaluating architectures from related work. We identify overarching features for successful audio deepfake detection, such as using cqtspec or logspec features instead of melspec features, which improves performance by 37% EER on average, all other factors constant. Additionally, we evaluate generalization capabilities: We collect and publish a new dataset consisting of 37.9 hours of found audio recordings of celebrities and politicians, of which 17.2 hours are deepfakes. We find that related work performs poorly on such real-world data (performance degradation of up to one thousand percent). This may suggest that the community has tailored its solutions too closely to the prevailing ASVspoof benchmark and that deepfakes are much harder to detect outside the lab than previously thought.


Key findings
Using cqtspec or logspec features significantly improved detection accuracy compared to melspec (see the front-end sketch below). Models trained on ASVspoof 2019 performed poorly when evaluated on the new 'in-the-wild' dataset, with EER increasing by up to 1000%, indicating weak generalization to real-world audio. Feeding models full-length audio input also led to better performance than fixed-length inputs.
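
To make the feature comparison concrete, here is a minimal sketch of the three front-ends using librosa. The library choice and all parameters (16 kHz audio, 512-point FFT, 400-frame crop) are assumptions for illustration, not the authors' exact pipeline; the to_fixed_length helper mimics the fixed-length baseline that full-length input outperformed.

    # Illustrative front-end comparison; librosa and every parameter here
    # are assumptions, not the paper's exact implementation.
    import numpy as np
    import librosa

    def extract_features(wav_path, feature="cqtspec", sr=16000):
        """Return a log-compressed (freq, time) spectrogram for one utterance."""
        y, _ = librosa.load(wav_path, sr=sr)
        if feature == "cqtspec":
            # Constant-Q transform: logarithmically spaced frequency bins.
            return librosa.amplitude_to_db(np.abs(librosa.cqt(y, sr=sr)))
        if feature == "logspec":
            # Log-magnitude STFT with linearly spaced frequency bins.
            return librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=512)))
        if feature == "melspec":
            # Mel-filterbank energies (the weakest of the three in the paper).
            mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512)
            return librosa.power_to_db(mel)
        raise ValueError(f"unknown feature type: {feature}")

    def to_fixed_length(spec, n_frames=400):
        """Crop or tile-pad along time, mimicking a fixed-length input baseline."""
        if spec.shape[1] >= n_frames:
            return spec[:, :n_frames]
        reps = int(np.ceil(n_frames / spec.shape[1]))
        return np.tile(spec, (1, reps))[:, :n_frames]
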
Approach
The researchers re-implemented twelve existing audio deepfake detection architectures and evaluated them under a single standardized protocol. They systematically varied feature extraction methods and input lengths to isolate the factors that drive model performance, and additionally measured generalization on a newly collected real-world dataset; all comparisons use the equal error rate (EER), sketched below.
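
The EER is the operating point at which the false-acceptance and false-rejection rates coincide. A minimal sketch of this metric from per-utterance scores, using scikit-learn as an assumed dependency (the label and score conventions below are illustrative, not the paper's code):

    # EER sketch; scikit-learn is an assumed dependency.
    import numpy as np
    from sklearn.metrics import roc_curve

    def equal_error_rate(labels, scores):
        """labels: 1 = spoof, 0 = bona fide; scores: higher = more spoof-like."""
        fpr, tpr, _ = roc_curve(labels, scores)
        fnr = 1.0 - tpr
        # EER lies where false-positive and false-negative rates cross.
        idx = np.nanargmin(np.abs(fpr - fnr))
        return float((fpr[idx] + fnr[idx]) / 2.0)

    # Example: an EER of 2% on ASVspoof 2019 that rises to 20% on found
    # audio is the kind of ~1000% relative degradation the paper reports.
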
Datasets
ASVspoof 2019 (Logical Access part) and a new 'in-the-wild' dataset of 37.9 hours of audio (17.2 hours deepfakes, 20.7 hours authentic) from celebrities and politicians.
Model(s)
LSTM, LCNN, LCNN-Attention, LCNN-LSTM, MesoNet, MesoInception, ResNet18, Transformer, CRNNSpoof, RawNet2, RawPC, RawGAT-ST
Author countries
Germany