Neural Codec Source Tracing: Toward Comprehensive Attribution in Open-Set Condition
Authors: Yuankun Xie, Xiaopeng Wang, Zhiyong Wang, Ruibo Fu, Zhengqi Wen, Songjun Cao, Long Ma, Chenxing Li, Haonan Cheng, Long Ye
Published: 2025-01-11 11:15:58+00:00
AI Summary
This paper introduces the Neural Codec Source Tracing (NCST) task for open-set audio deepfake detection, encompassing both neural codec classification and ALM detection. A new dataset, ST-Codecfake, is created to benchmark NCST models under open-set conditions, revealing limitations in classifying unseen real audio despite strong performance on in-distribution and out-of-distribution tasks.
Abstract
Current research in audio deepfake detection is gradually transitioning from binary classification to multi-class tasks, referred to as the audio deepfake source tracing task. However, existing studies on source tracing consider only closed-set scenarios and do not address the challenges posed by open-set conditions. In this paper, we define the Neural Codec Source Tracing (NCST) task, which is capable of performing open-set neural codec classification and interpretable ALM (Audio Language Model) detection. Specifically, we construct the ST-Codecfake dataset for the NCST task, which includes bilingual audio samples generated by 11 state-of-the-art neural codec methods and ALM-based out-of-distribution (OOD) test samples. Furthermore, we establish a comprehensive source tracing benchmark to assess NCST models under open-set conditions. The experimental results reveal that although NCST models perform well in in-distribution (ID) classification and OOD detection, they lack robustness in classifying unseen real audio. The ST-Codecfake dataset and code are available.
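The abstract describes open-set evaluation as a combination of ID classification over known codec sources and OOD detection of unseen sources. As a minimal sketch of how such an open-set decision can be layered on top of a closed-set codec classifier, the snippet below uses maximum softmax probability thresholding; the 11-class setup, the threshold value, and the function names are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (assumed, not the paper's method): open-set decision on top
# of a closed-set neural codec classifier via maximum softmax probability.
import numpy as np

NUM_CODEC_CLASSES = 11   # assumed: one class per neural codec method in ST-Codecfake
MSP_THRESHOLD = 0.85     # assumed: tuned on a held-out ID/OOD validation split


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)


def open_set_decision(logits: np.ndarray) -> tuple[str, int]:
    """Return ('ID', class_idx) if the classifier is confident enough,
    otherwise ('OOD', -1) to flag a sample from an unseen source."""
    probs = softmax(logits)
    pred = int(probs.argmax())
    if probs[pred] >= MSP_THRESHOLD:
        return "ID", pred
    return "OOD", -1


if __name__ == "__main__":
    # Fake logits standing in for a trained source-tracing model's output.
    confident_id = np.array([0.1] * 10 + [6.0])      # sharply peaked -> ID
    ambiguous_ood = np.full(NUM_CODEC_CLASSES, 0.5)  # flat -> low MSP -> OOD
    print(open_set_decision(confident_id))   # ('ID', 10)
    print(open_set_decision(ambiguous_ood))  # ('OOD', -1)
```

In practice, the benchmark may use other OOD scores (e.g. energy- or distance-based); the point of the sketch is only that the open-set pipeline adds a reject option on top of the multi-class codec classifier.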