Audio Deepfake Detection with Self-Supervised WavLM and Multi-Fusion Attentive Classifier
Authors: Yinlin Guo, Haofan Huang, Xi Chen, He Zhao, Yuehai Wang
Published: 2023-12-13 12:09:15+00:00
AI Summary
This paper proposes a novel audio deepfake detection method combining the self-supervised WavLM model for feature extraction and a Multi-Fusion Attentive (MFA) classifier for improved spoofing detection. The MFA classifier leverages complementary information from audio features at both time and layer levels, achieving state-of-the-art results on the ASVspoof 2021 DF set.
Abstract
With the rapid development of speech synthesis and voice conversion technologies, Audio Deepfake has become a serious threat to Automatic Speaker Verification (ASV) systems. Numerous countermeasures have been proposed to detect this type of attack. In this paper, we report our efforts to combine the self-supervised WavLM model and a Multi-Fusion Attentive classifier for audio deepfake detection. Our method is the first to exploit the WavLM model to extract features that are more conducive to spoofing detection. We then propose a novel Multi-Fusion Attentive (MFA) classifier based on the Attentive Statistics Pooling (ASP) layer. The MFA captures the complementary information of audio features at both the time and layer levels. Experiments demonstrate that our method achieves state-of-the-art results on the ASVspoof 2021 DF set and provides competitive results on the ASVspoof 2019 and 2021 LA sets.
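To make the described pipeline concrete, below is a minimal PyTorch sketch of a WavLM-based spoofing detector with layer-level fusion (a learnable weighted sum over WavLM hidden layers) and time-level attentive statistics pooling. This is not the authors' MFA implementation; the checkpoint name, the fusion scheme, and the two-class head are assumptions for illustration only.

```python
# Sketch: WavLM layer-wise features + attentive statistics pooling for a
# binary bona-fide/spoof classifier. NOT the paper's exact MFA classifier;
# the weighted layer fusion and classification head are illustrative choices.
import torch
import torch.nn as nn
from transformers import WavLMModel  # HuggingFace Transformers


class AttentiveStatsPooling(nn.Module):
    """Attentive statistics pooling: attention-weighted mean and std over time."""
    def __init__(self, dim: int, bottleneck: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.Tanh(), nn.Linear(bottleneck, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, dim)
        w = torch.softmax(self.attention(x), dim=1)        # (batch, time, 1)
        mean = torch.sum(w * x, dim=1)
        var = torch.clamp(torch.sum(w * x * x, dim=1) - mean ** 2, min=1e-9)
        return torch.cat([mean, torch.sqrt(var)], dim=-1)  # (batch, 2 * dim)


class WavLMSpoofDetector(nn.Module):
    def __init__(self, checkpoint: str = "microsoft/wavlm-base-plus"):  # assumed checkpoint
        super().__init__()
        self.wavlm = WavLMModel.from_pretrained(checkpoint)
        n_layers = self.wavlm.config.num_hidden_layers + 1  # hidden layers + CNN output
        dim = self.wavlm.config.hidden_size
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))  # layer-level fusion
        self.pool = AttentiveStatsPooling(dim)                    # time-level fusion
        self.head = nn.Linear(2 * dim, 2)                         # bona fide vs. spoof

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:    # (batch, samples) at 16 kHz
        out = self.wavlm(waveform, output_hidden_states=True)
        layers = torch.stack(out.hidden_states, dim=0)            # (L, batch, time, dim)
        w = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        fused = (w * layers).sum(dim=0)                           # weighted sum over layers
        return self.head(self.pool(fused))                        # (batch, 2) logits


# Usage: logits = WavLMSpoofDetector()(torch.randn(2, 16000))  # 1 s of dummy audio
```

The weighted layer sum is a common way to combine information across self-supervised layers; the paper's MFA classifier fuses time- and layer-level information with attention, so the exact mechanism may differ from this sketch.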