Representation Selective Self-distillation and wav2vec 2.0 Feature Exploration for Spoof-aware Speaker Verification

Authors: Jin Woo Lee, Eungbeom Kim, Junghyun Koo, Kyogu Lee

Published: 2022-04-06 07:47:36+00:00

AI Summary

This paper investigates which wav2vec 2.0 feature spaces are effective for spoof detection, finding that features from the model's 5th layer work best. With a simple attentive statistics pooling (ASP) layer as the backend, the countermeasure achieves 0.31% EER on the ASVspoof 2019 LA evaluation set, and the proposed spoof-aware speaker verification (SASV) method achieves 1.08% SASV EER on the SASV Challenge 2022 database.
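As a rough illustration of the feature-extraction step summarized above, the sketch below pulls frame-level features from an intermediate layer of a pre-trained XLSR-53 model. It assumes the HuggingFace transformers API and the public facebook/wav2vec2-large-xlsr-53 checkpoint; the hidden_states index used here for the "5th layer" is an assumption and may not match the paper's exact layer counting.

```python
# Hedged sketch: extracting intermediate-layer features from XLSR-53 with
# HuggingFace transformers. The layer index is an assumption; the paper's
# "5th layer" may be counted differently from hidden_states indexing.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

CHECKPOINT = "facebook/wav2vec2-large-xlsr-53"  # public XLSR-53 checkpoint
LAYER = 5  # hypothetical index for the "5th layer" features

extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
model = Wav2Vec2Model.from_pretrained(CHECKPOINT).eval()

waveform = torch.randn(16000 * 4)  # 4 s of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.input_values, output_hidden_states=True)

# hidden_states[0] is the CNN feature-projection output;
# hidden_states[k] is the output of transformer layer k.
features = outputs.hidden_states[LAYER]  # (batch, frames, 1024)
print(features.shape)
```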

Abstract

Text-to-speech and voice conversion studies are constantly improving to the extent that they can produce synthetic speech almost indistinguishable from bona fide human speech. In this regard, the importance of countermeasures (CM) against synthetic-voice attacks on automatic speaker verification (ASV) systems emerges. Nonetheless, most end-to-end spoofing detection networks are black-box systems, and the question of which representation is effective for finding artifacts remains open. In this paper, we examine which feature space can effectively represent synthetic artifacts using wav2vec 2.0, and study which architecture can effectively utilize that space. Our study allows us to analyze which attributes of speech signals are advantageous for CM systems. The proposed CM system achieved 0.31% equal error rate (EER) on the ASVspoof 2019 LA evaluation set for the spoof detection task. We further propose a simple yet effective spoof-aware speaker verification (SASV) method, which takes advantage of the disentangled representations from our countermeasure system. Evaluation performed on the SASV Challenge 2022 database shows a SASV EER of 1.08%. Quantitative analysis shows that using the explored feature space of wav2vec 2.0 benefits both the spoofing CM and SASV.


Key findings
Using the 5th layer of XLSR-53 as a feature extractor significantly improves spoofing detection performance. A simple ASP backend outperforms more complex models such as AASIST. The proposed representation selective self-distillation (RSSD) method achieves state-of-the-art results in spoof-aware speaker verification.
Approach
The authors explore different layers of a pre-trained wav2vec 2.0 model (XLSR-53) as feature extractors for a spoofing countermeasure system. They then evaluate various lightweight back-end architectures (MLP, ASP) with these features, and propose a novel representation selective self-distillation (RSSD) module for spoof-aware speaker verification.
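To make the lightweight ASP backend concrete, below is a minimal sketch of an attentive statistics pooling classifier operating on frame-level wav2vec 2.0 features. The layer sizes, the single linear classifier, and the two-class output are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an attentive statistics pooling (ASP) backend on top of
# frame-level wav2vec 2.0 features; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ASPBackend(nn.Module):
    def __init__(self, feat_dim=1024, attn_dim=128, num_classes=2):
        super().__init__()
        # frame-wise attention scores
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # classifier over the pooled (mean ++ std) utterance embedding
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):                             # x: (batch, frames, feat_dim)
        w = torch.softmax(self.attention(x), dim=1)   # (batch, frames, 1)
        mu = torch.sum(w * x, dim=1)                  # attention-weighted mean
        var = torch.sum(w * x ** 2, dim=1) - mu ** 2  # weighted variance
        std = torch.sqrt(var.clamp(min=1e-9))         # weighted standard deviation
        utt = torch.cat([mu, std], dim=1)             # (batch, 2 * feat_dim)
        return self.classifier(utt)                   # bona fide / spoof logits

backend = ASPBackend()
logits = backend(torch.randn(8, 199, 1024))  # e.g. features from the sketch above
print(logits.shape)  # torch.Size([8, 2])
```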
Datasets
ASVspoof 2019 LA, SASV Challenge 2022
Model(s)
wav2vec 2.0 (XLSR-53), Multilayer Perceptron (MLP), Attentive Statistics Pooling (ASP), ECAPA-TDNN
Author countries
South Korea