Spoofed training data for speech spoofing countermeasure can be efficiently created using neural vocoders
Authors: Xin Wang, Junichi Yamagishi
Published: 2022-10-19 14:10:02+00:00
AI Summary
This paper proposes a method for efficiently creating spoofed training data for speech spoofing countermeasures using neural vocoders, instead of relying on computationally expensive TTS and VC systems. A contrastive feature loss is introduced to improve the training process by leveraging the relationship between bona fide and spoofed data pairs.
Abstract
A good training set for speech spoofing countermeasures requires diverse TTS and VC spoofing attacks, but generating TTS and VC spoofed trials for a target speaker may be technically demanding. Instead of using full-fledged TTS and VC systems, this study uses neural-network-based vocoders to perform copy-synthesis on bona fide utterances, and the resulting output is used as spoofed data. To make better use of the pairs of bona fide and spoofed data, this study introduces a contrastive feature loss that can be plugged into the standard training criterion. On the basis of the bona fide trials from the ASVspoof 2019 logical access training set, this study empirically compared several training sets created in the proposed manner using several neural non-autoregressive vocoders. Results on multiple test sets suggest good practices, such as fine-tuning the neural vocoders on bona fide data from the target domain. The results also demonstrated the effectiveness of the contrastive feature loss. Combining the best practices, the trained CM achieved overall competitive performance, and its EERs on the ASVspoof 2021 hidden subsets were lower than those of the top-1 challenge submission.
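The contrastive feature loss exploits the fact that copy-synthesis yields paired data: each spoofed utterance is a vocoded copy of a specific bona fide utterance, sharing its content but differing in class. A minimal sketch of one common form of such a loss is below; the function name, margin value, and hinge formulation are illustrative assumptions, not the paper's exact criterion, which should be taken from the paper itself.

```python
import numpy as np

def contrastive_feature_loss(emb_bona, emb_spoof, margin=1.0):
    """Illustrative contrastive loss over paired bona fide / copy-synthesis
    feature embeddings (shape: [num_pairs, dim]). Pairs share content but
    differ in class, so the loss pushes each pair's feature distance above
    a margin. This is a generic margin-based sketch, not the paper's loss."""
    # Euclidean distance between each bona fide embedding and its vocoded copy
    d = np.linalg.norm(emb_bona - emb_spoof, axis=1)
    # hinge penalty for pairs whose features are closer than the margin
    return float(np.mean(np.maximum(0.0, margin - d) ** 2))

rng = np.random.default_rng(0)
bona = rng.normal(size=(4, 8))           # 4 hypothetical utterance embeddings
loss_far = contrastive_feature_loss(bona, bona + 10.0)  # well-separated pairs
loss_near = contrastive_feature_loss(bona, bona)        # identical pairs
```

In this sketch, pairs already separated by more than the margin contribute zero loss, while identical embeddings incur the full squared-margin penalty; a term like this can be added to the standard classification criterion as the abstract describes.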