One-Class Learning with Adaptive Centroid Shift for Audio Deepfake Detection
Authors: Hyun Myung Kim, Kangwook Jang, Hoirin Kim
Published: 2024-06-24 15:21:50+00:00
AI Summary
This paper proposes a novel adaptive centroid shift (ACS) method for audio deepfake detection using one-class learning. ACS updates the centroid representation using only bonafide samples, yielding a model that is robust to unseen spoofing attacks. The method achieves a state-of-the-art equal error rate (EER) of 2.19% on the ASVspoof 2021 deepfake dataset.
Abstract
As speech synthesis systems continue to make remarkable advances, the importance of robust deepfake detection systems that perform well against unseen synthesis systems has grown. In this paper, we propose a novel adaptive centroid shift (ACS) method that updates the centroid representation by continually shifting it as the weighted average of bonafide representations. Our approach uses only bonafide samples to define their centroid, which can yield a specialized centroid for one-class learning. Integrating our ACS with one-class learning gathers bonafide representations into a single cluster, forming well-separated embeddings robust to unseen spoofing attacks. Our proposed method achieves an equal error rate (EER) of 2.19% on the ASVspoof 2021 deepfake dataset, outperforming all existing systems. Furthermore, the t-SNE visualization illustrates that our method effectively maps the bonafide embeddings into a single cluster and successfully disentangles the bonafide and spoof classes.
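To make the idea concrete, below is a minimal PyTorch sketch of an adaptive-centroid-shift-style update as described in the abstract: the centroid is shifted as a count-weighted average of bonafide embeddings only, and a one-class objective pulls bonafide embeddings toward it while pushing spoof embeddings away. The class and function names (`AdaptiveCentroidShift`, `one_class_loss`), the count-based weighting, and the cosine-similarity loss with a margin are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


class AdaptiveCentroidShift:
    """Running centroid of bonafide embeddings, shifted as a weighted average.

    Sketch only: the exact weighting scheme in the paper may differ.
    """

    def __init__(self, embed_dim: int):
        self.centroid = torch.zeros(embed_dim)
        self.count = 0  # number of bonafide samples absorbed so far

    @torch.no_grad()
    def update(self, bonafide_emb: torch.Tensor) -> None:
        """bonafide_emb: (num_bonafide_in_batch, embed_dim)."""
        batch_count = bonafide_emb.size(0)
        if batch_count == 0:
            return
        total = self.count + batch_count
        # Shift the centroid toward the new bonafide embeddings, weighting
        # old and new contributions by their respective sample counts.
        self.centroid = (self.centroid * self.count
                         + bonafide_emb.sum(dim=0)) / total
        self.count = total


def one_class_loss(emb: torch.Tensor,
                   labels: torch.Tensor,
                   centroid: torch.Tensor,
                   margin: float = 0.5) -> torch.Tensor:
    """Illustrative one-class objective (assumed form): pull bonafide
    embeddings (label 1) toward the centroid in cosine similarity and
    push spoof embeddings (label 0) below a margin."""
    sim = F.cosine_similarity(emb, centroid.unsqueeze(0), dim=-1)
    bona = labels == 1
    loss_bona = (1.0 - sim[bona]).mean() if bona.any() else emb.new_zeros(())
    loss_spoof = F.relu(sim[~bona] - margin).mean() if (~bona).any() else emb.new_zeros(())
    return loss_bona + loss_spoof
```

In a training loop, one would compute embeddings from a speech encoder, call `acs.update()` on the bonafide subset of each batch, and backpropagate `one_class_loss` against the current (detached) centroid; only bonafide samples ever move the centroid, which is what keeps it specialized to the bonafide class.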