Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection

Authors: Zihan Pan, Tianchi Liu, Hardik B. Sailor, Qiongqiong Wang

Published: 2024-06-12 08:27:44+00:00

AI Summary

This paper investigates the use of the WavLM model for anti-spoofing detection, proposing an attentive merging method to combine hierarchical hidden embeddings from multiple transformer layers. The approach achieves state-of-the-art equal error rates (EERs) on ASVspoof datasets, demonstrating the effectiveness of this method and the importance of early transformer layers.

Abstract

Self-supervised learning (SSL) speech representation models, trained on large speech corpora, have demonstrated effectiveness in extracting hierarchical speech embeddings through multiple transformer layers. However, the behavior of these embeddings in specific tasks remains uncertain. This paper investigates the multi-layer behavior of the WavLM model in anti-spoofing and proposes an attentive merging method to leverage the hierarchical hidden embeddings. Results demonstrate the feasibility of fine-tuning WavLM to achieve the best equal error rates (EERs) of 0.65%, 3.50%, and 3.19% on the ASVspoof 2019LA, 2021LA, and 2021DF evaluation sets, respectively. Notably, we find that the early hidden transformer layers of the WavLM large model contribute significantly to the anti-spoofing task, enabling computational efficiency by utilizing a partial pre-trained model.


Key findings
Early layers of the WavLM model contribute significantly to anti-spoofing performance. The proposed attentive merging method achieves state-of-the-art EERs on ASVspoof 2019LA, 2021LA, and 2021DF evaluation sets. Using a subset of layers improves computational efficiency without sacrificing performance.
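As a hedged illustration of the "subset of layers" finding, one way to exploit only the early transformer layers is to truncate the encoder of a Hugging Face WavLM checkpoint and stack the remaining hidden states. The checkpoint name and the 12-layer cut-off below are assumptions chosen for illustration, not the paper's exact configuration.

```python
import torch
from transformers import WavLMModel

# Load a pre-trained WavLM large checkpoint (name assumed for illustration).
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

# Keep only the first 12 transformer layers (cut-off chosen for illustration,
# not the paper's reported setting) to reduce compute.
model.encoder.layers = model.encoder.layers[:12]
model.eval()

waveform = torch.randn(1, 16000)  # dummy 1-second, 16 kHz input
with torch.no_grad():
    outputs = model(waveform, output_hidden_states=True)

# outputs.hidden_states is a tuple of (num_kept_layers + 1) tensors,
# each shaped (batch, time, dim); stack them for layer-wise merging.
hidden = torch.stack(outputs.hidden_states, dim=1)  # (batch, L, time, dim)
```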
Approach
The authors propose an attentive merging (AttM) method that combines hidden embeddings from multiple WavLM transformer layers. The method applies a two-step squeezing process followed by an attention mechanism that weights the importance of each layer; the weighted embeddings are then merged and fed to a back-end classifier (LSTM or ECAPA-TDNN), as sketched below.
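The following PyTorch sketch is one possible reading of this description, not the authors' exact implementation: the module name, the choice of mean-pooling and a linear projection for the two squeeze steps, and the bottleneck size are all assumptions.

```python
import torch
import torch.nn as nn

class AttentiveMerging(nn.Module):
    """Illustrative sketch (module name assumed) of attentive merging over
    L transformer layers. Hidden states arrive stacked as (batch, L, time, dim);
    each layer is squeezed to a scalar descriptor, an attention network turns
    the descriptors into per-layer weights, and the layers are merged into a
    single (batch, time, dim) embedding for the downstream classifier."""

    def __init__(self, num_layers: int, hidden_dim: int, bottleneck: int = 128):
        super().__init__()
        # Squeeze step 2: project the feature dimension to one scalar per layer.
        self.proj = nn.Linear(hidden_dim, 1)
        # Attention over layers, computed from the squeezed layer descriptors.
        self.attn = nn.Sequential(
            nn.Linear(num_layers, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, num_layers),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, L, time, dim)
        pooled = hidden_states.mean(dim=2)          # squeeze step 1: (batch, L, dim)
        desc = self.proj(pooled).squeeze(-1)        # squeeze step 2: (batch, L)
        weights = torch.softmax(self.attn(desc), dim=-1)  # per-layer weights (batch, L)
        # Weighted merge of the L layer embeddings -> (batch, time, dim)
        return torch.einsum("bl,bltd->btd", weights, hidden_states)
```

The merged (batch, time, dim) tensor would then be passed to the LSTM or ECAPA-TDNN classifier mentioned above; how the squeeze and attention stages are parameterized in the paper may differ from this sketch.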
Datasets
ASVspoof 2019LA (training and development), ASVspoof 2019LA evaluation, ASVspoof 2021LA evaluation, ASVspoof 2021DF evaluation
Model(s)
WavLM (large and base), LSTM, ECAPA-TDNN
Author countries
Singapore