Listen, Analyze, and Adapt to Learn New Attacks: An Exemplar-Free Class Incremental Learning Method for Audio Deepfake Source Tracing

Authors: Yang Xiao, Rohan Kumar Das

Published: 2025-05-20 16:51:52+00:00

AI Summary

This paper proposes AnaST, an exemplar-free class incremental learning method for audio deepfake source tracing. AnaST addresses catastrophic forgetting by updating the classifier with a closed-form analytical solution in one epoch, while keeping the feature extractor fixed, enabling efficient adaptation to new attacks without storing past data.

Abstract

As deepfake speech becomes common and hard to detect, it is vital to trace its source. Recent work on audio deepfake source tracing (ST) aims to find the origins of synthetic or manipulated speech. However, ST models must adapt to learn new deepfake attacks while retaining knowledge of the previous ones. A major challenge is catastrophic forgetting, where models lose the ability to recognize previously learned attacks. Some continual learning methods help with deepfake detection, but multi-class tasks such as ST introduce additional challenges as the number of classes grows. To address this, we propose an analytic class incremental learning method called AnaST. When new attacks appear, the feature extractor remains fixed, and the classifier is updated with a closed-form analytical solution in one epoch. This approach ensures data privacy, optimizes memory usage, and is suitable for online training. The experiments carried out in this work show that our method outperforms the baselines.


Key findings
AnaST outperforms baseline methods in both single-dataset and multi-dataset settings, achieving high accuracy with minimal forgetting. Its efficiency comes from adapting in a single epoch, which removes the need to store past data (exemplars) and significantly reduces computational overhead compared with exemplar-based methods.
Approach
AnaST uses an analytic class incremental learning approach. It freezes the feature extractor after initial training and updates the classifier using a recursive least-squares procedure and feature expansion to adapt to new attacks. This avoids catastrophic forgetting and the need to store past data.
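The analytic update described above can be sketched with recursive least squares: fit the classifier once in closed form on the first task's (frozen, expanded) features, then absorb each new attack's data with a Woodbury-style block update, never revisiting old samples. This is a minimal illustration, not the paper's implementation; the feature dimension, ridge regularizer, and random data below are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma = 32, 1e-1  # expanded feature dim and ridge regularizer (illustrative values)

def init_classifier(X, Y):
    """Closed-form ridge-regression fit of the linear classifier on task 1."""
    R = np.linalg.inv(X.T @ X + gamma * np.eye(d))  # regularized inverse autocorrelation
    W = R @ X.T @ Y
    return W, R

def incremental_update(W, R, X_new, Y_new):
    """One-shot recursive least-squares update for a new attack batch (exemplar-free)."""
    n_new = Y_new.shape[1] - W.shape[1]
    if n_new > 0:  # widen the classifier head for newly observed classes
        W = np.hstack([W, np.zeros((d, n_new))])
    # Woodbury identity: update the inverse autocorrelation without old data
    K = np.linalg.inv(np.eye(X_new.shape[0]) + X_new @ R @ X_new.T)
    R = R - R @ X_new.T @ K @ X_new @ R
    # Correct the weights using only the new batch's residual
    W = W + R @ X_new.T @ (Y_new - X_new @ W)
    return W, R

# Task 1: two attack classes; Task 2: one new class, old data never stored.
X1 = rng.normal(size=(200, d)); Y1 = np.eye(2)[rng.integers(0, 2, 200)]
W, R = init_classifier(X1, Y1)
X2 = rng.normal(size=(100, d)); Y2 = np.eye(3)[np.full(100, 2)]
W, R = incremental_update(W, R, X2, Y2)

# The recursive result matches a joint closed-form fit on all data,
# which is why the update is immune to catastrophic forgetting.
X_all = np.vstack([X1, X2])
Y_all = np.vstack([np.hstack([Y1, np.zeros((200, 1))]), Y2])
W_joint = np.linalg.inv(X_all.T @ X_all + gamma * np.eye(d)) @ X_all.T @ Y_all
assert np.allclose(W, W_joint, atol=1e-5)
```

The key property is the final assertion: the recursively updated weights are identical (up to numerical precision) to retraining on all data at once, so no exemplars need to be kept and the update costs a single pass over the new batch.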
Datasets
ASVspoof 2019 LA and WaveFake
Model(s)
RawNet2
Author countries
Australia, Singapore