Audio Deepfake Detection Based on a Combination of F0 Information and Real Plus Imaginary Spectrogram Features

Authors: Jun Xue, Cunhang Fan, Zhao Lv, Jianhua Tao, Jiangyan Yi, Chengshi Zheng, Zhengqi Wen, Minmin Yuan, Shegang Shao

Published: 2022-08-02 02:46:16+00:00

AI Summary

This paper proposes a novel audio deepfake detection system that combines fundamental frequency (F0) information with real and imaginary spectrogram features. By exploiting the difference in F0 distribution between real and fake speech and by modeling disjoint subbands separately, the system achieves a significantly lower equal error rate (EER) than existing systems.

Abstract

Recently, pioneering research has proposed a large number of acoustic features (log power spectrogram, linear frequency cepstral coefficients, constant Q cepstral coefficients, etc.) for audio deepfake detection, obtaining good performance and showing that different subbands contribute differently to audio deepfake detection. However, these works lack an explanation of the specific information contained in each subband, and such features also discard information such as phase. Inspired by the mechanism of speech synthesis, in which fundamental frequency (F0) information is used to improve the quality of synthetic speech, we observe that the F0 of synthetic speech is still overly averaged and differs significantly from that of real speech. F0 is therefore expected to serve as important information for discriminating between bonafide and fake speech, but it cannot be used directly due to its irregular distribution. Instead, the frequency band containing most of the F0 is selected as the input feature. Meanwhile, to make full use of phase and full-band information, we also propose using real and imaginary spectrogram features as complementary input features and model the disjoint subbands separately. Finally, the results of the F0, real, and imaginary spectrogram features are fused. Experimental results on the ASVspoof 2019 LA dataset show that our proposed system is very effective for the audio deepfake detection task, achieving an equal error rate (EER) of 0.43%, which surpasses almost all systems.
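To make the input features concrete, below is a minimal Python/NumPy sketch of computing real and imaginary spectrograms via a short-time Fourier transform and slicing out a low-frequency band around typical F0 values. The window length, hop size, and the 0-400 Hz cutoff are illustrative assumptions, not the paper's exact configuration.

import numpy as np

def real_imag_spectrograms(x, n_fft=1024, hop=256):
    # Frame the signal, apply a Hann window, and take the FFT of each
    # frame; return the real and imaginary parts of the complex
    # spectrogram separately instead of the usual magnitude.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=-1)        # (n_frames, n_fft//2 + 1)
    return spec.real.T, spec.imag.T            # (freq_bins, n_frames) each

def f0_subband(spec, sr=16000, f_lo=0.0, f_hi=400.0, n_fft=1024):
    # Keep only the low-frequency bins covering a typical F0 range.
    # The 0-400 Hz band is an assumption for illustration.
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[mask]

# Example: one second of noise at 16 kHz stands in for a waveform.
x = np.random.randn(16000)
real_spec, imag_spec = real_imag_spectrograms(x)
f0_feat = f0_subband(real_spec)                # low band fed to the F0 branch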


Key findings
The proposed system achieves an EER of 0.43% on the ASVspoof 2019 LA dataset, surpassing almost all existing systems. Both the F0 subband and the combined use of real and imaginary spectrogram subbands proved highly effective at distinguishing real from fake speech, and the two-stage fusion strategy further improved overall performance; a minimal sketch of such score fusion follows.
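The second fusion stage can be as simple as a weighted average of per-branch detection scores. The sketch below assumes equal weights over three branch outputs (F0, real-spectrogram, and imaginary-spectrogram); the paper's actual fusion weights may differ.

import numpy as np

def fuse_scores(branch_scores, weights=None):
    # Fuse per-branch detection scores by weighted averaging.
    # branch_scores: list of arrays, each of shape (n_utterances,),
    # holding one score per utterance from one branch.
    # Equal weights are an assumption, not the paper's tuned values.
    scores = np.stack(branch_scores)           # (n_branches, n_utterances)
    if weights is None:
        weights = np.full(len(branch_scores), 1.0 / len(branch_scores))
    return weights @ scores                    # (n_utterances,)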
Approach
The approach uses a two-stage fusion framework. First, it models the low-frequency subbands of the imaginary spectrogram and the high-frequency subbands of the real spectrogram separately using SENet. Second, it fuses these results with those of an F0 branch, whose input is the frequency band containing most of the F0 energy, to improve deepfake detection accuracy. A toy sketch of one such subband branch appears below.
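As an illustration of one subband branch, here is a minimal PyTorch sketch of a squeeze-and-excitation (SE) block and a small SE-based classifier over a single spectrogram subband. The paper uses SENet34; this toy branch only demonstrates the channel-gating mechanism and the bonafide/spoof output head, and all layer sizes are assumptions.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Squeeze-and-Excitation: global-average-pool a channel descriptor,
    # pass it through two FC layers, and use a sigmoid gate to rescale
    # each channel of the input feature map.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, F, T)
        w = x.mean(dim=(2, 3))                 # squeeze to (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # excite (channel rescaling)

class SubbandBranch(nn.Module):
    # Tiny stand-in for one SENet branch over a single subband;
    # the paper's branches are full SENet34 networks.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            SEBlock(16),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),                  # bonafide vs. spoof logits
        )

    def forward(self, x):                      # x: (B, 1, freq_bins, frames)
        return self.net(x)

# Example: a batch of 4 subband spectrograms with 60 bins and 400 frames.
branch = SubbandBranch()
logits = branch(torch.randn(4, 1, 60, 400))   # -> (4, 2)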
Datasets
ASVspoof 2019 LA dataset
Model(s)
SENet34
Author countries
China