Temporal Variability and Multi-Viewed Self-Supervised Representations to Tackle the ASVspoof5 Deepfake Challenge

Authors: Yuankun Xie, Xiaopeng Wang, Zhiyong Wang, Ruibo Fu, Zhengqi Wen, Haonan Cheng, Long Ye

Published: 2024-08-13 14:15:15+00:00

AI Summary

This paper tackles open-domain audio deepfake detection in the ASVspoof5 challenge. The authors introduce a novel data augmentation method, Frequency Mask, to address high-frequency gaps in the dataset and combine multiple self-supervised learning features with varied temporal information for improved robustness. Their approach achieves a minDCF of 0.0158 and an EER of 0.55% on the ASVspoof5 evaluation progress set.

Abstract

ASVspoof5, the fifth edition of the ASVspoof series, is one of the largest global audio security challenges. It aims to advance the development of countermeasures (CMs) that discriminate between bonafide and spoofed speech utterances. In this paper, we focus on the problem of open-domain audio deepfake detection, which corresponds directly to the ASVspoof5 Track 1 open condition. First, we comprehensively investigate various CMs on ASVspoof5, including data expansion, data augmentation, and self-supervised learning (SSL) features. Due to the high-frequency gaps characteristic of the ASVspoof5 dataset, we introduce Frequency Mask, a data augmentation method that masks specific frequency bands to improve CM robustness. Combining various scales of temporal information with multiple SSL features, our experiments achieved a minDCF of 0.0158 and an EER of 0.55% on the ASVspoof5 Track 1 evaluation progress set.


Key findings

The proposed approach, integrating Frequency Mask data augmentation with a multi-scale temporal and feature fusion strategy, significantly improves deepfake detection performance, achieving a minDCF of 0.0158 and an EER of 0.55% on the ASVspoof5 evaluation progress set. However, performance degraded considerably on the full evaluation set, suggesting limited generalizability to unseen spoofing techniques.
Approach

The authors combine several countermeasures: data expansion using the ASVspoof2019LA, MLAAD, and Codecfake datasets; data augmentation including Frequency Mask (a novel method masking specific frequency bands), MUSAN, and RIR; and self-supervised feature extraction using WavLM, Wav2vec2-large, and UniSpeech. The resulting systems are combined via logit score fusion to improve performance.
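The paper does not specify the exact parameters of Frequency Mask, but the core idea, zeroing out a contiguous band of frequency bins in a spectrogram, can be sketched as follows. `max_band` and the uniform band placement are hypothetical choices for illustration, not the authors' settings:

```python
import numpy as np

def frequency_mask(spec, max_band=20, rng=None):
    """Zero a random contiguous band of frequency bins.

    spec: (freq_bins, time_frames) magnitude spectrogram.
    max_band: hypothetical upper bound on band width (bins);
    the paper's actual settings are not given here.
    """
    rng = rng or np.random.default_rng()
    masked = spec.copy()
    width = int(rng.integers(1, max_band + 1))
    start = int(rng.integers(0, spec.shape[0] - width + 1))
    masked[start:start + width, :] = 0.0  # mask the selected band
    return masked
```

Applied on the fly during training, this forces the CM not to rely on any single frequency band, which is the stated motivation given the high-frequency gaps in ASVspoof5.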
Datasets

ASVspoof5 (training, development, and evaluation progress sets), ASVspoof2019LA, MLAAD, Codecfake, MUSAN, RIR
Model(s)

WavLM, Wav2vec2-large, UniSpeech, AASIST
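The scores of the systems built on these models are combined by logit score fusion. A minimal sketch of weighted score-level fusion is shown below; uniform weights are an assumption, as the paper's fusion weights are not reported here:

```python
import numpy as np

def fuse_logits(score_lists, weights=None):
    """Weighted average of per-system logit scores.

    score_lists: (n_systems, n_utterances) logit scores.
    weights: per-system weights; defaults to uniform (assumed).
    """
    scores = np.asarray(score_lists, dtype=float)
    if weights is None:
        weights = np.full(scores.shape[0], 1.0 / scores.shape[0])
    w = np.asarray(weights, dtype=float)
    return w @ scores  # fused score per utterance
```

Score-level fusion lets heterogeneous SSL front-ends contribute without retraining, at the cost of tuning the fusion weights on a development set.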
Author countries

China