Towards Out-of-Distribution Detection in Vocoder Recognition via Latent Feature Reconstruction

Authors: Renmingyue Du, Jixun Yao, Qiuqiang Kong, Yin Cao

Published: 2024-06-04 11:55:11+00:00

AI Summary

This paper proposes a reconstruction-based approach for out-of-distribution (OOD) detection in vocoder recognition using an autoencoder with multiple decoders, one for each vocoder class. If none of the decoders can reconstruct an input feature satisfactorily, it's classified as OOD. Contrastive learning and an auxiliary classifier enhance the approach's performance.

Abstract

Advancements in synthesized speech have created a growing threat of impersonation, making it crucial to develop deepfake algorithm recognition. One significant aspect is out-of-distribution (OOD) detection, which has gained notable attention due to its important role in deepfake algorithm recognition. However, most current approaches for detecting OOD samples in deepfake algorithm recognition rely on probability scores or classification distances, which can limit accuracy for samples near the decision threshold. In this study, we propose a reconstruction-based detection approach that employs an autoencoder architecture to compress and reconstruct acoustic features extracted from a pre-trained WavLM model. Each acoustic feature belonging to a specific vocoder class is only aptly reconstructed by its corresponding decoder. When none of the decoders can satisfactorily reconstruct a feature, it is classified as an OOD sample. To enhance the distinctiveness of the features reconstructed by each decoder, we incorporate contrastive learning and an auxiliary classifier to further constrain the reconstructed features. Experiments demonstrate that our proposed approach surpasses baseline systems by a relative margin of 10% on the evaluation dataset. Ablation studies further validate the effectiveness of both the contrastive constraint and the auxiliary classifier within our proposed approach.


Key findings
The proposed reconstruction-based approach outperforms baseline systems (ECAPA-TDNN, AASIST, RawNet2) by a relative margin of 10% in F1 score on the evaluation dataset. Ablation studies confirmed the effectiveness of both contrastive learning and the auxiliary classifier.
Approach
The authors use an autoencoder architecture with a single shared encoder and multiple decoders, one per vocoder type. The encoder compresses WavLM acoustic features, and each decoder is trained to reconstruct features from its corresponding vocoder class. A sample is flagged as OOD when its reconstruction error exceeds a threshold under every decoder; otherwise it is assigned to the class of the best-reconstructing decoder.
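The decision rule above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear encoder/decoders, feature dimensions, and threshold value are placeholder assumptions standing in for the trained networks and WavLM features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: a WavLM-like feature size, a smaller latent size,
# and one decoder per in-distribution vocoder class (7 in WaveFake).
FEAT_DIM, LATENT_DIM, NUM_CLASSES = 768, 128, 7

# Stand-in linear maps; the paper trains neural encoder/decoders instead.
W_enc = rng.standard_normal((FEAT_DIM, LATENT_DIM)) * 0.01
W_dec = [rng.standard_normal((LATENT_DIM, FEAT_DIM)) * 0.01
         for _ in range(NUM_CLASSES)]

def reconstruction_errors(x):
    """Mean-squared reconstruction error of x under each class decoder."""
    z = x @ W_enc  # shared encoder compresses the feature
    return np.array([np.mean((z @ W - x) ** 2) for W in W_dec])

def classify(x, threshold):
    """Return the best-reconstructing class index, or -1 (OOD) if even
    the best decoder's error exceeds the threshold."""
    errs = reconstruction_errors(x)
    best = int(np.argmin(errs))
    return best if errs[best] <= threshold else -1

x = rng.standard_normal(FEAT_DIM)  # placeholder acoustic feature
pred = classify(x, threshold=0.5)  # -1 means out-of-distribution
```

With trained decoders, in-distribution features yield a low error under exactly one decoder, while OOD vocoders (e.g. BigVGAN, UnivNet) fail under all of them.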
Datasets
WaveFake dataset for in-distribution training and evaluation (audio from seven vocoders: MelGAN, FullBand-MelGAN, MelGAN-Large, MultiBand-MelGAN, HiFi-GAN, Parallel WaveGAN, and WaveGlow); BigVGAN and UnivNet provide the OOD samples.
Model(s)
WavLM (pre-trained model for acoustic feature extraction), autoencoder with multiple decoders, auxiliary classifier.
Author countries
China, Hong Kong