Investigating self-supervised front ends for speech spoofing countermeasures
Authors: Xin Wang, Junichi Yamagishi
Published: 2021-11-15 12:52:50+00:00
AI Summary
This paper investigates using pre-trained self-supervised speech models as front-ends for speech spoofing countermeasures (CMs). The authors find that fine-tuning a well-chosen pre-trained front-end with a shallow or deep neural network back-end significantly improves performance on multiple datasets compared to a baseline system.
Abstract
Self-supervised speech models are a rapidly progressing research topic, and many pre-trained models have been released and used in various downstream tasks. For speech anti-spoofing, most countermeasures (CMs) use signal processing algorithms to extract acoustic features for classification. In this study, we use pre-trained self-supervised speech models as the front end of spoofing CMs. We investigated different back-end architectures to be combined with the self-supervised front end, the effectiveness of fine-tuning the front end, and the performance of using different pre-trained self-supervised models. Our findings showed that, when a good pre-trained front end was fine-tuned with either a shallow or a deep neural network-based back end on the ASVspoof 2019 logical access (LA) training set, the resulting CM not only achieved a low equal error rate (EER) on the 2019 LA test set but also significantly outperformed the baseline on the ASVspoof 2015, 2021 LA, and 2021 deepfake test sets. A sub-band analysis further demonstrated that the CM mainly used the information in a specific frequency band to discriminate bona fide from spoofed trials across the test sets.
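To make the described architecture concrete, the sketch below shows one way a self-supervised front end could feed a shallow back end for spoofing classification. It is a minimal illustration, not the authors' code: it assumes a wav2vec 2.0 base model loaded via torchaudio.pipelines, mean pooling over frames, and a small linear back end, whereas the paper compares several pre-trained models and back-end depths.

```python
import torch
import torchaudio


class SSLSpoofingCM(torch.nn.Module):
    """Illustrative CM: self-supervised front end + shallow back end."""

    def __init__(self, freeze_front_end: bool = False):
        super().__init__()
        # Pre-trained self-supervised front end (assumption: wav2vec 2.0 base).
        bundle = torchaudio.pipelines.WAV2VEC2_BASE
        self.front_end = bundle.get_model()
        if freeze_front_end:
            # Freezing corresponds to using the front end as a fixed feature
            # extractor; leaving it trainable corresponds to fine-tuning.
            for p in self.front_end.parameters():
                p.requires_grad = False
        # Shallow back end (assumed here): MLP on mean-pooled frame features,
        # producing bona fide / spoof logits.
        feat_dim = 768  # hidden size of the wav2vec 2.0 base encoder
        self.back_end = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 128),
            torch.nn.LeakyReLU(),
            torch.nn.Linear(128, 2),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, num_samples) at 16 kHz
        features, _ = self.front_end.extract_features(waveform)
        pooled = features[-1].mean(dim=1)  # average last-layer frames over time
        return self.back_end(pooled)       # (batch, 2) logits


if __name__ == "__main__":
    cm = SSLSpoofingCM(freeze_front_end=False)  # front end left trainable
    dummy = torch.randn(2, 16000)               # two 1-second utterances
    print(cm(dummy).shape)                      # torch.Size([2, 2])
```

In this sketch, fine-tuning versus freezing the front end is controlled by a single flag, which mirrors the paper's comparison between fixed and fine-tuned self-supervised front ends; the actual back-end architectures and training setup are described in the paper itself.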