Towards Out-of-Distribution Detection in Vocoder Recognition via Latent Feature Reconstruction

Authors: Renmingyue Du, Jixun Yao, Qiuqiang Kong, Yin Cao

Published: 2024-06-04 11:55:11+00:00

Comment: 5 pages, 4 figures

AI Summary

This study proposes a reconstruction-based approach for out-of-distribution (OOD) detection in vocoder recognition, addressing the limitations of probability-score and distance-based methods. It employs an autoencoder in which acoustic features, extracted by a pre-trained WavLM model, are reconstructed by vocoder-class-specific decoders. A sample is classified as OOD if none of the decoders can satisfactorily reconstruct its features, with contrastive learning and an auxiliary classifier enhancing the distinctiveness of the reconstructions.

Abstract

Advancements in synthesized speech have created a growing threat of impersonation, making it crucial to develop deepfake algorithm recognition. One significant aspect is out-of-distribution (OOD) detection, which has gained notable attention due to its important role in deepfake algorithm recognition. However, most of the current approaches for detecting OOD in deepfake algorithm recognition rely on probability-score or classified-distance, which may lead to limitations in the accuracy of the sample at the edge of the threshold. In this study, we propose a reconstruction-based detection approach that employs an autoencoder architecture to compress and reconstruct the acoustic feature extracted from a pre-trained WavLM model. Each acoustic feature belonging to a specific vocoder class is only aptly reconstructed by its corresponding decoder. When none of the decoders can satisfactorily reconstruct a feature, it is classified as an OOD sample. To enhance the distinctiveness of the reconstructed features by each decoder, we incorporate contrastive learning and an auxiliary classifier to further constrain the reconstructed feature. Experiments demonstrate that our proposed approach surpasses baseline systems by a relative margin of 10% in the evaluation dataset. Ablation studies further validate the effectiveness of both the contrastive constraint and the auxiliary classifier within our proposed approach.


Key findings

The proposed reconstruction-based approach surpasses baseline systems (ECAPA-TDNN, AASIST, RawNet2) in vocoder-recognition OOD detection, achieving a 68.04% F1 score, a 10% relative improvement. Ablation studies confirm that both the contrastive loss and the auxiliary classifier contribute to performance and feature distinctiveness. Among WavLM's intermediate layers, the 'weighted-18' representation yielded the best feature-extraction results.
Approach

The proposed method uses an autoencoder architecture consisting of a shared encoder and multiple decoders, each trained to reconstruct acoustic features from a specific vocoder class. During inference, if no decoder can effectively reconstruct an input feature (i.e., every reconstruction error exceeds a threshold), the sample is classified as OOD. Contrastive learning and an auxiliary classifier are incorporated to improve the distinctiveness of reconstructed features and align encoder outputs with their classes.
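The inference rule described above can be sketched as follows. This is a minimal NumPy illustration of the decision logic only: the linear encoder/decoder matrices, feature dimensions, and MSE scoring are stand-ins for the paper's transformer-based encoder and trained per-vocoder decoders, and the threshold would in practice be tuned on held-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: WavLM-like feature dim D, bottleneck dim H,
# and one decoder per in-distribution vocoder class.
D, H, N_CLASSES = 8, 4, 3

# Untrained linear stand-ins for the shared encoder and the
# vocoder-class-specific decoders described in the paper.
W_enc = rng.normal(size=(H, D))
W_dec = [rng.normal(size=(D, H)) for _ in range(N_CLASSES)]

def reconstruction_errors(x):
    """Mean-squared reconstruction error of feature x under each decoder."""
    z = W_enc @ x  # shared latent code
    return np.array([np.mean((W_d @ z - x) ** 2) for W_d in W_dec])

def classify(x, threshold):
    """Return the best-reconstructing class index, or -1 (OOD) when even
    the smallest reconstruction error exceeds the threshold."""
    errs = reconstruction_errors(x)
    return int(np.argmin(errs)) if errs.min() <= threshold else -1
```

With trained decoders, an in-distribution feature would be reconstructed well only by its own class's decoder, so `argmin` doubles as the vocoder-recognition prediction while the threshold handles OOD rejection.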
Datasets

WaveFake dataset (generated from LJSpeech), BigVGAN, UnivNet
Model(s)

Autoencoder (transformer-based encoder, multiple decoders), WavLM (pre-trained feature extractor), auxiliary classifier
Author countries

China