ASVspoof 2021: Towards Spoofed and Deepfake Speech Detection in the Wild

Authors: Xuechen Liu, Xin Wang, Md Sahidullah, Jose Patino, Héctor Delgado, Tomi Kinnunen, Massimiliano Todisco, Junichi Yamagishi, Nicholas Evans, Andreas Nautsch, Kong Aik Lee

Published: 2022-10-05 17:57:29+00:00

AI Summary

The ASVspoof 2021 challenge benchmarked speech spoofing and deepfake detection systems under more realistic conditions, including encoding, transmission effects, and real-world acoustic environments. Results revealed varying levels of robustness across tasks, highlighting challenges in generalization to unseen data and conditions.

Abstract

Benchmarking initiatives support the meaningful comparison of competing solutions to prominent problems in speech and language processing. Successive benchmarking evaluations typically reflect a progressive evolution from ideal lab conditions towards those encountered in the wild. ASVspoof, the spoofing and deepfake detection initiative and challenge series, has followed the same trend. This article provides a summary of the ASVspoof 2021 challenge and the results of 54 participating teams that submitted to the evaluation phase. For the logical access (LA) task, results indicate that countermeasures are robust to newly introduced encoding and transmission effects. Results for the physical access (PA) task indicate the potential to detect replay attacks in real, as opposed to simulated, physical spaces, but a lack of robustness to variations between simulated and real acoustic environments. The deepfake (DF) task, new to the 2021 edition, targets the detection of manipulated, compressed speech data posted online. While detection solutions offer some resilience to compression effects, they lack generalization across different source datasets. In addition to a summary of the top-performing systems for each task, the article presents new analyses of influential data factors, results for hidden data subsets, a review of post-challenge results, an outline of the principal challenge limitations, and a roadmap for the future of ASVspoof.


Key findings
Countermeasures showed robustness to encoding and transmission effects in the logical access task, but struggled with generalization across different source datasets in the deepfake task. The physical access task proved most challenging due to the mismatch between simulated training data and real-world evaluation data. The reliance on non-speech segments was identified as a potential limitation.
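The last of these limitations can be probed directly: if a countermeasure's scores shift markedly once leading and trailing silence is stripped, the model likely relies on non-speech cues. Below is a minimal, self-contained Python sketch of such an energy-based trim; the frame sizes and threshold are illustrative assumptions, not values used in the challenge.

    import numpy as np

    def trim_nonspeech(wav, sr, frame_ms=25, hop_ms=10, threshold_db=-35.0):
        """Drop low-energy leading/trailing frames from a 1-D waveform.

        A crude energy-based trim: frames whose RMS level falls below
        `threshold_db` relative to the loudest frame are treated as
        non-speech. Useful for checking whether a countermeasure's score
        changes when silence segments are removed.
        """
        frame = int(sr * frame_ms / 1000)
        hop = int(sr * hop_ms / 1000)
        if len(wav) < frame:
            return wav
        # Frame-wise RMS energy in dB relative to the loudest frame.
        n_frames = 1 + (len(wav) - frame) // hop
        rms = np.array([
            np.sqrt(np.mean(wav[i * hop:i * hop + frame] ** 2) + 1e-12)
            for i in range(n_frames)
        ])
        db = 20.0 * np.log10(rms / (rms.max() + 1e-12) + 1e-12)
        voiced = np.where(db > threshold_db)[0]
        if voiced.size == 0:
            return wav
        start = voiced[0] * hop
        end = min(len(wav), voiced[-1] * hop + frame)
        return wav[start:end]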
Approach
Participating teams submitted a wide variety of countermeasure systems. Most focused on detecting artifacts introduced by spoofing methods, combining data augmentation, acoustic front-ends such as LFCCs and spectrograms, and ensemble classifiers (e.g., ResNet, LSTM, GMM).
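As an illustration of the front-ends named above, the following is a minimal numpy/scipy sketch of LFCC extraction (power spectrogram, linearly spaced triangular filters, log, DCT). The frame, filterbank, and coefficient settings are illustrative assumptions rather than the configurations used by the challenge baselines or any submitted system.

    import numpy as np
    from scipy.fft import dct
    from scipy.signal import stft

    def lfcc(wav, sr, n_fft=512, hop=160, n_filters=20, n_ceps=20):
        """Linear-frequency cepstral coefficients (LFCCs): power
        spectrogram -> triangular filters spaced linearly in Hz ->
        log -> DCT. Returns an array of shape (frames, n_ceps)."""
        # Short-time power spectrum.
        _, _, spec = stft(wav, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
        power = np.abs(spec.T) ** 2                   # (frames, n_fft//2 + 1)

        # Triangular filterbank with linearly spaced centre frequencies.
        edges = np.linspace(0, n_fft // 2, n_filters + 2).astype(int)
        fbank = np.zeros((n_filters, n_fft // 2 + 1))
        for m in range(1, n_filters + 1):
            l, c, r = edges[m - 1], edges[m], edges[m + 1]
            fbank[m - 1, l:c + 1] = np.linspace(0.0, 1.0, c - l + 1)
            fbank[m - 1, c:r + 1] = np.linspace(1.0, 0.0, r - c + 1)

        log_energy = np.log(power @ fbank.T + 1e-10)  # (frames, n_filters)
        return dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_ceps]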
Datasets
ASVspoof 2019 LA evaluation database (derived from VCTK); ASVspoof 2019 PA training and evaluation partitions; VCC 2018 and 2020 databases (DAPS and EMIME corpora).
Model(s)
A wide range of models was used, including ResNet, LSTM, LightCNN, SENet, TDNN, MLP, GMM, and VAE. Many teams used ensemble methods combining multiple models.
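Since many teams combined subsystems at the score level, here is a minimal Python sketch of one common fusion recipe: per-subsystem score normalisation followed by a weighted average. The z-normalisation and equal default weights are assumptions for illustration, not the fusion used by any particular team.

    import numpy as np

    def fuse_scores(score_lists, weights=None):
        """Score-level fusion of several countermeasure subsystems.

        `score_lists` is a sequence of 1-D arrays, one per subsystem, each
        holding one detection score per trial (higher = more likely bona
        fide). Scores are z-normalised per subsystem before a weighted
        average, so subsystems with different score ranges contribute
        comparably. Returns one fused score per trial.
        """
        stacked = np.vstack([np.asarray(s, dtype=float) for s in score_lists])
        normed = (stacked - stacked.mean(axis=1, keepdims=True)) / \
                 (stacked.std(axis=1, keepdims=True) + 1e-12)
        if weights is None:
            weights = np.full(len(score_lists), 1.0 / len(score_lists))
        return weights @ normed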
Author countries
France, Japan, Spain, Singapore, Finland