Towards Neural Audio Codec Source Parsing

Authors: Orchid Chetia Phukan, Girish, Mohd Mujtaba Akhtar, Arun Balaji Buduru, Rajesh Sharma

Published: 2025-06-14 21:00:39+00:00

AI Summary

This paper introduces Neural Audio Codec Source Parsing (NACSP), which reframes audio deepfake source attribution as regression over codec parameters rather than binary classification. The authors propose HYDRA, a framework that uses hyperbolic geometry to disentangle latent features from pre-trained model representations, improving multi-task generalization for parameter prediction.

Abstract

A new class of audio deepfakes, codecfakes (CFs), has recently caught attention, synthesized by Audio Language Models that leverage neural audio codecs (NACs) in the backend. In response, the community has introduced dedicated benchmarks and tailored detection strategies. As the field advances, efforts have moved beyond binary detection toward source attribution, including open-set attribution, which aims to identify the NAC responsible for generation and flag novel, unseen ones during inference. This shift toward source attribution improves forensic interpretability and accountability. However, open-set attribution remains fundamentally limited: while it can detect that a NAC is unfamiliar, it cannot characterize or identify individual unseen codecs. It treats such inputs as generic "unknowns", lacking insight into their internal configuration. This leads to major shortcomings: limited generalization to new NACs and inability to resolve fine-grained variations within NAC families. To address these gaps, we propose Neural Audio Codec Source Parsing (NACSP), a paradigm shift that reframes source attribution for CFs as structured regression over generative NAC parameters such as quantizers, bandwidth, and sampling rate. We formulate NACSP as a multi-task regression problem for predicting these NAC parameters and establish the first comprehensive benchmark using various state-of-the-art speech pre-trained models (PTMs). To this end, we propose HYDRA, a novel framework that leverages hyperbolic geometry to disentangle complex latent properties from PTM representations. By employing task-specific attention over multiple curvature-aware hyperbolic subspaces, HYDRA enables superior multi-task generalization. Our extensive experiments show HYDRA achieves top results on benchmark CF datasets compared to baselines operating in Euclidean space.
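
To make the multi-task regression formulation concrete, the sketch below shows per-parameter regression heads over a shared pooled PTM feature and an equal-weighted sum of MSE losses for the three NAC parameters named in the abstract (quantizers, bandwidth, sampling rate). The head layout, feature dimension, and loss weighting are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NACSPHeads(nn.Module):
    """Hypothetical per-parameter regression heads over a shared pooled PTM feature."""
    def __init__(self, feat_dim: int = 768):
        super().__init__()
        self.heads = nn.ModuleDict({
            "quantizers": nn.Linear(feat_dim, 1),
            "bandwidth": nn.Linear(feat_dim, 1),
            "sampling_rate": nn.Linear(feat_dim, 1),
        })

    def forward(self, features: torch.Tensor) -> dict:
        # features: (batch, feat_dim) pooled representation from a speech PTM
        return {name: head(features).squeeze(-1) for name, head in self.heads.items()}

def nacsp_loss(preds: dict, targets: dict) -> torch.Tensor:
    # equal-weighted sum of per-parameter MSE losses (the weighting is an assumption)
    return sum(F.mse_loss(preds[k], targets[k]) for k in preds)

# toy usage with random features and normalized targets
feats = torch.randn(8, 768)
targets = {k: torch.randn(8) for k in ("quantizers", "bandwidth", "sampling_rate")}
loss = nacsp_loss(NACSPHeads()(feats), targets)
```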


Key findings
HYDRA consistently outperforms Euclidean baselines in both closed-set and open-set settings on the benchmark datasets. Performance varies comparatively little with the choice of pre-trained model, whereas adding HYDRA yields clear gains across all parameter-prediction tasks. The results establish a new state-of-the-art for NACSP.
Approach
NACSP reframes source attribution as structured regression over neural audio codec (NAC) parameters (quantizers, bandwidth, sampling rate). HYDRA, a novel framework, uses hyperbolic geometry to disentangle latent properties from pre-trained model representations for improved multi-task regression.
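
The following is a minimal sketch of the HYDRA idea as described above: Euclidean PTM features are projected into several Poincaré-ball subspaces with learnable curvatures, and task-specific attention pools across those subspaces before per-task regression. The number of subspaces, layer sizes, attention scheme, and the use of plain linear regressors on the ball points (rather than a hyperbolic-aware head) are all assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def expmap0(v: torch.Tensor, c: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Exponential map at the origin of a Poincaré ball with curvature c > 0."""
    sqrt_c = c.clamp_min(eps).sqrt()
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

class HydraSketch(nn.Module):
    """Curvature-aware hyperbolic subspaces with task-specific attention (illustrative)."""
    def __init__(self, feat_dim: int = 768, sub_dim: int = 128, n_subspaces: int = 4,
                 tasks=("quantizers", "bandwidth", "sampling_rate")):
        super().__init__()
        self.tasks = tasks
        self.proj = nn.ModuleList(nn.Linear(feat_dim, sub_dim) for _ in range(n_subspaces))
        # one learnable curvature per subspace, kept positive via softplus
        self.log_c = nn.Parameter(torch.zeros(n_subspaces))
        # task-specific attention scores over the subspaces
        self.task_attn = nn.ModuleDict({t: nn.Linear(sub_dim, 1) for t in tasks})
        self.regressors = nn.ModuleDict({t: nn.Linear(sub_dim, 1) for t in tasks})

    def forward(self, x: torch.Tensor) -> dict:
        # x: (batch, feat_dim) pooled PTM representation
        curvatures = F.softplus(self.log_c)
        # project into each hyperbolic subspace: (batch, n_subspaces, sub_dim)
        subspaces = torch.stack(
            [expmap0(p(x), curvatures[i]) for i, p in enumerate(self.proj)], dim=1)
        out = {}
        for t in self.tasks:
            attn = torch.softmax(self.task_attn[t](subspaces), dim=1)  # (batch, n_sub, 1)
            pooled = (attn * subspaces).sum(dim=1)                     # (batch, sub_dim)
            out[t] = self.regressors[t](pooled).squeeze(-1)
        return out

preds = HydraSketch()(torch.randn(4, 768))  # dict of (batch,) predictions per task
```

Treating the ball points as ordinary vectors in the attention and regression layers is a simplification; a faithful implementation would map back via a logarithmic map or use hyperbolic layers.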
Datasets
ST-codecfake, CodecFake
Model(s)
Various state-of-the-art speech pre-trained models (PTMs), including WavLM, UniSpeech-SAT, Wav2Vec2, XLS-R, Whisper, MMS, x-vector, and ECAPA, each paired with a CNN-based downstream network; the HYDRA framework.
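
As one concrete way to pair a listed PTM with a CNN-based downstream network, the sketch below extracts frame-level WavLM features with Hugging Face transformers and passes them through a small convolutional regressor. The specific checkpoint, pooling, and downstream layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoFeatureExtractor, WavLMModel

# WavLM chosen only as an example; any of the listed PTMs could stand in.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
ptm = WavLMModel.from_pretrained("microsoft/wavlm-base").eval()

class CNNDownstream(nn.Module):
    """Hypothetical CNN downstream over frame-level PTM features."""
    def __init__(self, feat_dim: int = 768, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.out = nn.Linear(hidden, 3)  # quantizers, bandwidth, sampling rate

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim); Conv1d expects (batch, feat_dim, time)
        pooled = self.conv(frames.transpose(1, 2)).squeeze(-1)
        return self.out(pooled)

waveform = torch.randn(16000)  # 1 s of dummy audio at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = ptm(**inputs).last_hidden_state  # (1, time, 768)
preds = CNNDownstream()(frames)               # (1, 3) parameter predictions
```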
Author countries
India, Estonia