Deepfake Detection System for the ADD Challenge Track 3.2 Based on Score Fusion

Authors: Yuxiang Zhang, Jingze Lu, Xingming Wang, Zhuo Li, Runqiu Xiao, Wenchao Wang, Ming Li, Pengyuan Zhang

Published: 2022-10-13 08:04:29+00:00

AI Summary

This paper presents a deepfake audio detection system for the ADD Challenge Track 3.2, using score-level fusion of multiple light convolutional neural network (LCNN) models. The system combines various front-end features with online data augmentation, achieving a weighted equal error rate (WEER) of 11.04%, one of the top results in the challenge.

Abstract

This paper describes the deepfake audio detection system submitted to the Audio Deep Synthesis Detection (ADD) Challenge Track 3.2 and gives an analysis of score fusion. The proposed system is a score-level fusion of several light convolutional neural network (LCNN) based models. Various front-ends are used as input features, including the low-frequency short-time Fourier transform and the constant-Q transform. Due to the complex noise and rich synthesis algorithms, it is difficult to obtain the desired performance using the training set directly. Online data augmentation methods effectively improve the robustness of fake audio detection systems. In particular, the reasons for the poor improvement from score fusion are explored through visualization of the score distributions and comparison with the score distributions on another dataset. Overfitting of the models to the training set leads to extreme score values and low correlation between score distributions, which makes score fusion difficult. Fusion with a partially fake audio detection system improves system performance further. The submission on Track 3.2 obtained a weighted equal error rate (WEER) of 11.04%, one of the best performing systems in the challenge.


Key findings
The proposed system achieved a competitive WEER of 11.04% in the ADD Challenge. Data augmentation significantly improved performance. Analysis revealed that poor score fusion performance stemmed from overfitting, leading to extreme and poorly correlated score distributions across models.
Approach
The authors employ a score-level fusion approach, combining the outputs of several LCNN-based models trained on different audio front-ends (such as low-frequency STFT and CQT). Online data augmentation improves robustness, and analysis of the score distributions reveals why the gains from fusion are limited.
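The paper does not publish its fusion weights or normalization, so the following is only a minimal numpy sketch of the general idea: normalize each system's scores, take a weighted sum, and evaluate with an equal error rate. Function names, min-max normalization, and the EER sweep are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale one system's scores to [0, 1] so systems are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(score_lists, weights=None):
    """Score-level fusion: weighted sum of per-system normalized scores."""
    mats = np.stack([min_max_normalize(np.asarray(s, dtype=float))
                     for s in score_lists])          # (n_systems, n_utterances)
    if weights is None:
        weights = np.full(len(score_lists), 1.0 / len(score_lists))
    return np.tensordot(np.asarray(weights, dtype=float), mats, axes=1)

def equal_error_rate(scores, labels):
    """EER: operating point where false-accept rate equals false-reject rate.
    labels: 1 = genuine, 0 = fake; higher score = more genuine."""
    order = np.argsort(scores)
    labels = np.asarray(labels, dtype=float)[order]
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    frr = np.cumsum(labels) / n_pos              # genuine utterances rejected
    far = 1.0 - np.cumsum(1 - labels) / n_neg    # fake utterances accepted
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2
```

In practice the fusion weights would be tuned on a development set; with overfit subsystems producing extreme, uncorrelated scores (as the paper observes), no weighting recovers much headroom.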
Datasets
ADD 2022 Challenge datasets (training, development, and test sets for tracks 1 and 3.2), MUSAN, and RIRs datasets for data augmentation.
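MUSAN- and RIR-style augmentation is typically applied on the fly as additive noise at a random SNR plus convolutional reverberation. A hedged sketch of such an online pipeline is below; the SNR range, the 50% reverb probability, and all function names are assumptions for illustration, not details reported in the paper.

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Mix a noise clip (e.g. from MUSAN) into speech at a target SNR in dB."""
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]      # tile/crop to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

def add_reverb(speech, rir):
    """Convolve speech with a room impulse response, keeping the length."""
    rir = rir / (np.abs(rir).max() + 1e-12)
    return np.convolve(speech, rir)[: len(speech)]

def augment_online(speech, noise, rir, rng=None):
    """Per-utterance online augmentation: random-SNR noise, optional reverb."""
    if rng is None:
        rng = np.random.default_rng()
    out = add_noise_at_snr(speech, noise, snr_db=rng.uniform(5.0, 20.0))
    if rng.random() < 0.5:                           # assumed probability
        out = add_reverb(out, rir)
    return out
```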
Model(s)
Light Convolutional Neural Networks (LCNNs) with AM-Softmax and Center Loss functions.
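AM-Softmax replaces plain softmax logits with scaled cosine similarities and subtracts an additive margin from the target class, which tightens class clusters (the paper pairs it with center loss, which is omitted here). Below is a minimal numpy sketch of the AM-Softmax loss; the margin and scale values are common defaults from the literature, not values reported in the paper.

```python
import numpy as np

def am_softmax_loss(embeddings, class_weights, labels, margin=0.35, scale=30.0):
    """Additive-margin softmax: cross-entropy over s * (cos - m) logits,
    with the margin m applied only to each sample's target class."""
    # L2-normalize embeddings (rows) and class weights (columns) -> cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=0, keepdims=True)
    cos = e @ w                                      # (batch, n_classes)
    rows = np.arange(len(labels))
    logits = scale * cos
    logits[rows, labels] = scale * (cos[rows, labels] - margin)
    # numerically stable cross-entropy
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

For a binary genuine/fake detector, `n_classes` is 2 and the score used for fusion is typically the genuine-class logit or log-probability.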
Author countries
China, USA