Listen, Analyze, and Adapt to Learn New Attacks: An Exemplar-Free Class Incremental Learning Method for Audio Deepfake Source Tracing
Authors: Yang Xiao, Rohan Kumar Das
Published: 2025-05-20 16:51:52+00:00
AI Summary
This paper proposes AnaST, an exemplar-free class incremental learning method for audio deepfake source tracing. AnaST addresses catastrophic forgetting by keeping the feature extractor fixed and updating the classifier with a closed-form analytical solution in a single epoch, enabling efficient adaptation to new attacks without storing past data.
Abstract
As deepfake speech becomes common and hard to detect, it is vital to trace its source. Recent work on audio deepfake source tracing (ST) aims to find the origins of synthetic or manipulated speech. However, ST models must adapt to new deepfake attacks as they emerge while retaining knowledge of previous ones. A major challenge is catastrophic forgetting, where models lose the ability to recognize previously learned attacks. Some continual learning methods help with deepfake detection, but multi-class tasks such as ST introduce additional challenges as the number of classes grows. To address this, we propose an analytic class incremental learning method called AnaST. When new attacks appear, the feature extractor remains fixed, and the classifier is updated with a closed-form analytical solution in one epoch. This approach preserves data privacy, keeps memory usage low, and is suitable for online training. Experiments carried out in this work show that our method outperforms the baselines.
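The closed-form classifier update described in the abstract resembles ridge regression over frozen features, refreshed by a recursive least-squares step when a new attack class arrives, as in analytic class-incremental learning more broadly. Below is a minimal sketch of that idea, assuming a frozen feature extractor whose outputs are collected into a feature matrix; the function names, the regularization parameter reg, and the exact recursion are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np

# Sketch of an exemplar-free, closed-form classifier update (assumed
# ridge-regression / recursive least-squares form; not the paper's exact
# equations). X holds features from a frozen extractor, Y one-hot labels.

def fit_base(X, Y, reg=1e-3):
    """Base task: W = (X^T X + reg*I)^{-1} X^T Y, solved in closed form."""
    d = X.shape[1]
    R = np.linalg.inv(X.T @ X + reg * np.eye(d))  # regularized inverse Gram matrix
    W = R @ X.T @ Y
    return W, R

def update_new_task(W, R, X_new, Y_new):
    """Fold in a new attack class without revisiting stored exemplars,
    via the Woodbury identity (recursive least squares)."""
    n_classes = Y_new.shape[1]
    if n_classes > W.shape[1]:
        # Expand W with zero columns for the newly introduced classes.
        W = np.hstack([W, np.zeros((W.shape[0], n_classes - W.shape[1]))])
    n = X_new.shape[0]
    # Gain matrix: K = R X^T (I + X R X^T)^{-1}
    K = R @ X_new.T @ np.linalg.inv(np.eye(n) + X_new @ R @ X_new.T)
    R = R - K @ X_new @ R            # updated inverse Gram matrix
    W = W + K @ (Y_new - X_new @ W)  # correct weights toward the new labels
    return W, R

# Usage sketch: 2 base classes, then 1 new attack class (all names hypothetical).
rng = np.random.default_rng(0)
X0 = rng.standard_normal((100, 32))
Y0 = np.eye(2)[rng.integers(0, 2, 100)]
W, R = fit_base(X0, Y0)
X1 = rng.standard_normal((50, 32))
Y1 = np.zeros((50, 3))
Y1[:, 2] = 1.0                       # one-hot over the expanded label set
W, R = update_new_task(W, R, X1, Y1)
print(W.shape)  # (32, 3)
```

Because the update touches only the weight matrix and a fixed-size inverse Gram matrix, no past audio or features need to be stored, which is what makes a scheme of this kind exemplar-free and compatible with the privacy and memory claims in the abstract.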