Continual Audio Deepfake Detection via Universal Adversarial Perturbation
Authors: Wangjie Li, Lin Li, Qingyang Hong
Published: 2025-11-25 06:41:11+00:00
AI Summary
This paper introduces a novel framework for continual audio deepfake detection that leverages Universal Adversarial Perturbation (UAP). This approach allows models to retain knowledge of historical spoofing distributions without needing direct access to past data, addressing the challenge of evolving deepfake attacks and high fine-tuning costs. By integrating UAP with pre-trained self-supervised audio models, the method offers an efficient solution for continual learning.
Abstract
The rapid advancement of speech synthesis and voice conversion technologies has raised significant security concerns in multimedia forensics. Although current detection models demonstrate impressive performance, they struggle to maintain effectiveness against constantly evolving deepfake attacks. Additionally, continually fine-tuning these models using historical training data incurs substantial computational and storage costs. To address these limitations, we propose a novel framework that incorporates Universal Adversarial Perturbation (UAP) into audio deepfake detection, enabling models to retain knowledge of historical spoofing distributions without direct access to past data. Our method integrates UAP seamlessly with pre-trained self-supervised audio models during fine-tuning. Extensive experiments validate the effectiveness of our approach, showcasing its potential as an efficient solution for continual learning in audio deepfake detection.
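To make the core idea concrete, the following is a minimal sketch of how a single universal perturbation shared across all inputs can be learned, in the generic UAP style. This is not the paper's actual method: the `learn_uap` function, the toy model, the choice of minimizing a detection loss so that the perturbation encodes a past data distribution, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of learning a Universal Adversarial Perturbation (UAP):
# one perturbation vector `delta`, shared across every waveform, is optimized
# against a fixed detector. Here delta is trained to MINIMIZE the detection
# loss on (stand-in) historical data, as one plausible way a perturbation
# could encode a past spoofing distribution; the paper's objective may differ.
import torch

def learn_uap(model, batches, epsilon=0.01, lr=1e-3, steps=5):
    """Optimize a single perturbation applied additively to all inputs.

    batches: list of (waveform_batch, label_batch) pairs.
    epsilon: L-infinity bound keeping the perturbation small.
    """
    wave_len = batches[0][0].shape[-1]
    delta = torch.zeros(wave_len, requires_grad=True)   # one shared vector
    opt = torch.optim.Adam([delta], lr=lr)              # only delta is updated
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in batches:
            opt.zero_grad()
            logits = model(x + delta)   # same delta broadcast over the batch
            loss = loss_fn(logits, y)
            loss.backward()
            opt.step()
            with torch.no_grad():       # project back into the epsilon ball
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```

The key property is that `delta` is input-agnostic: once learned, it can be stored and added to new inputs (or replayed during later fine-tuning) at negligible cost, which is what makes the idea attractive for continual learning without retaining the historical data itself.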