A robust audio deepfake detection system via multi-view feature

Authors: Yujie Yang, Haochen Qin, Hang Zhou, Chengcheng Wang, Tianyu Guo, Kai Han, Yunhe Wang

Published: 2024-03-04 11:57:32+00:00

AI Summary

This paper improves audio deepfake detection (ADD) by exploring a broad range of audio features, both handcrafted and learning-based, and proposes two multi-view feature incorporation methods: feature selection and feature fusion. The model, trained on ASVspoof 2019 data, achieves a 24.27% equal error rate (EER) on the In-the-Wild dataset, demonstrating improved generalization.

Abstract

With the advancement of generative modeling techniques, synthetic human speech is becoming increasingly indistinguishable from real speech, posing tricky challenges for audio deepfake detection (ADD) systems. In this paper, we exploit audio features to improve the generalizability of ADD systems. We investigate ADD performance over a broad range of audio features, including various handcrafted features and learning-based features. Experiments show that learning-based audio features pretrained on large amounts of data generalize better than handcrafted features in out-of-domain scenarios. We then further improve the generalizability of the ADD system using the proposed multi-feature approaches, which incorporate complementary information from features of different views. The model trained on ASV2019 data achieves an equal error rate of 24.27% on the In-the-Wild dataset.


Key findings
Learning-based features, especially those pretrained on large datasets, generalize better than handcrafted features. The proposed multi-view feature incorporation methods (selection and fusion) significantly improve the model's generalization ability, reducing the equal error rate on the In-the-Wild dataset to 24.27%.
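
For reference, the equal error rate is the operating point where the false acceptance rate equals the false rejection rate. The paper's evaluation code is not shown here; the following is a minimal, generic sketch of computing EER from detection scores with numpy and scikit-learn:

    # Generic EER computation from detection scores (illustrative; not the paper's code).
    import numpy as np
    from sklearn.metrics import roc_curve

    def equal_error_rate(labels, scores):
        """labels: 1 = bona fide, 0 = spoof; scores: higher = more likely bona fide."""
        fpr, tpr, _ = roc_curve(labels, scores)
        fnr = 1 - tpr
        idx = np.nanargmin(np.abs(fnr - fpr))  # threshold where FPR ~= FNR
        return (fpr[idx] + fnr[idx]) / 2

    # Example: random scores yield an EER near 50%; a useful detector sits far lower.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 1000)
    scores = rng.random(1000)
    print(f"EER: {equal_error_rate(labels, scores):.2%}")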
Approach
The authors investigate the performance of various handcrafted and learning-based audio features for deepfake detection. To improve generalization, they propose two multi-view incorporation methods: feature selection, which chooses the best features for each sample, and feature fusion, which combines multiple features using channel attention and a Transformer encoder (a sketch of the fusion idea follows).
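
The paper's exact fusion architecture is not reproduced here; the sketch below only illustrates the general pattern named above, channel attention over stacked feature views followed by a Transformer encoder, with assumed shapes and layer sizes:

    # Illustrative multi-view fusion sketch (assumed shapes and hyperparameters,
    # not the authors' implementation): channel attention weighs each feature
    # view, then a Transformer encoder mixes the fused sequence over time.
    import torch
    import torch.nn as nn

    class MultiViewFusion(nn.Module):
        def __init__(self, num_views: int, dim: int):
            super().__init__()
            # Squeeze-and-excitation style channel attention over the views.
            self.attn = nn.Sequential(
                nn.Linear(num_views, num_views // 2),
                nn.ReLU(),
                nn.Linear(num_views // 2, num_views),
                nn.Sigmoid(),
            )
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, views: torch.Tensor) -> torch.Tensor:
            # views: (batch, num_views, time, dim), one slice per audio feature.
            squeeze = views.mean(dim=(2, 3))             # (batch, num_views)
            weights = self.attn(squeeze)                 # per-view attention weights
            weighted = views * weights[:, :, None, None]
            fused = weighted.sum(dim=1)                  # (batch, time, dim)
            return self.encoder(fused)                   # (batch, time, dim)

    # Usage: fuse four feature views, each projected to a shared 128-dim space.
    fusion = MultiViewFusion(num_views=4, dim=128)
    out = fusion(torch.randn(2, 4, 100, 128))
    print(out.shape)  # torch.Size([2, 100, 128])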
Datasets
ASVspoof 2019 Logical Access (LA) train and dev subsets for training; ASVspoof 2019 and 2021 evaluation subsets and the In-the-Wild dataset for evaluation
Model(s)
ResNet18 (back-end classifier). Front-end features span handcrafted representations (Mel spectrogram, MFCC, LogSpec, LFCC, CQT) and learning-based extractors (LEAF, SincNet, EnCodec, AudioDec, AudioMAE, Wav2Vec2 XLS-R, HuBERT, WavLM, Whisper).
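
To make the two feature families concrete, here is a hedged sketch of extracting one handcrafted and one learning-based feature of the kinds listed above (the checkpoint name and parameters are illustrative assumptions, not the paper's configuration):

    # Sketch of the two feature families fed to a back-end classifier
    # (illustrative choices; the paper's exact settings are not shown here).
    import torch
    import torchaudio
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    wav = torch.randn(1, 16000)  # 1 s of dummy 16 kHz audio

    # Handcrafted feature: linear-frequency cepstral coefficients (LFCC).
    lfcc = torchaudio.transforms.LFCC(sample_rate=16000, n_lfcc=20)(wav)
    print(lfcc.shape)  # (1, 20, frames)

    # Learning-based feature: frame-level embeddings from a pretrained XLS-R model.
    name = "facebook/wav2vec2-xls-r-300m"  # assumed checkpoint for illustration
    processor = Wav2Vec2FeatureExtractor.from_pretrained(name)
    model = Wav2Vec2Model.from_pretrained(name)
    inputs = processor(wav.squeeze().numpy(), sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        emb = model(**inputs).last_hidden_state
    print(emb.shape)  # (1, frames, 1024)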
Author countries
China