Enhancing Generalization in Audio Deepfake Detection: A Neural Collapse based Sampling and Training Approach

Authors: Mohammed Yousif, Jonat John Mathew, Huzaifa Pallan, Agamjeet Singh Padda, Syed Daniyal Shah, Sara Adamski, Madhu Reddiboina, Arjun Pankajakshan

Published: 2024-04-19 17:13:21+00:00

AI Summary

This paper proposes a neural collapse-based sampling approach for enhancing generalization in audio deepfake detection. By sampling data points that models pre-trained on diverse datasets classify confidently, it builds a smaller, more efficient training database that improves generalization to unseen data without the computational cost of training on the combined datasets.

Abstract

Generalization in audio deepfake detection presents a significant challenge, with models trained on specific datasets often struggling to detect deepfakes generated under varying conditions and unknown algorithms. While collectively training a model using diverse datasets can enhance its generalization ability, it comes with high computational costs. To address this, we propose a neural collapse-based sampling approach applied to pre-trained models trained on distinct datasets to create a new training database. Using the ASVspoof 2019 dataset as a proof of concept, we implement pre-trained models with ResNet and ConvNext architectures. Our approach demonstrates comparable generalization on unseen data while being computationally efficient, requiring less training data. Evaluation is conducted using the In-the-wild dataset.


Key findings
The proposed method achieves generalization on the In-the-wild dataset comparable to that of models trained on the larger combined datasets, while requiring substantially less training data and therefore less compute. Experiments with different sampling rates show how the amount of sampled data affects model performance.
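The summary does not name the evaluation metric, but the equal error rate (EER) is the standard measure for audio spoofing and deepfake detection benchmarks such as ASVspoof and In-the-wild. A minimal sketch for computing it from model scores, assuming binary labels with 1 marking the positive (spoofed) class:

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point where the false-acceptance rate
    equals the false-rejection rate."""
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1 - tpr  # false-rejection rate
    idx = np.nanargmin(np.abs(fnr - fpr))  # threshold where the two rates cross
    return (fpr[idx] + fnr[idx]) / 2
```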
Approach
The approach starts from models (ResNet and ConvNext) pre-trained on datasets such as ASVspoof 2019. It leverages neural collapse theory, which predicts that late in training the penultimate-layer embeddings of each class cluster tightly around their class mean; samples whose embeddings lie close to a class mean are therefore the most confidently and typically classified. Representative samples are selected based on the distance of their penultimate embeddings from the class means, forming a new, smaller training database on which a new model is then trained. A sketch of this sampling step is given below.
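A minimal PyTorch sketch of what this neural collapse-based sampling step might look like. The `embed` method (returning penultimate-layer features) and the `keep_frac` parameter are hypothetical placeholders standing in for the paper's actual implementation:

```python
import torch

@torch.no_grad()
def neural_collapse_sample(model, loader, keep_frac=0.2, device="cpu"):
    """Return dataset indices of the keep_frac of samples per class whose
    penultimate embeddings lie closest to their class mean."""
    model.eval()
    model.to(device)

    # Collect penultimate-layer embeddings for the whole dataset.
    feats, labels = [], []
    for x, y in loader:
        feats.append(model.embed(x.to(device)).cpu())  # hypothetical embed()
        labels.append(y)
    feats = torch.cat(feats)
    labels = torch.cat(labels)

    keep = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        mu = feats[idx].mean(dim=0)              # per-class mean embedding
        dist = (feats[idx] - mu).norm(dim=1)     # distance to the class mean
        k = max(1, int(keep_frac * len(idx)))
        keep.append(idx[dist.topk(k, largest=False).indices])  # nearest samples
    return torch.cat(keep)
```

The returned indices can be wrapped with `torch.utils.data.Subset` to form the reduced training database on which the new model is trained.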
Datasets
ASVspoof 2019 (LA), FoR, Wavefake, In-the-wild
Model(s)
ResNet (18 and 9 residual blocks), ConvNext
Author countries
USA