Understanding the strengths and weaknesses of SSL models for audio deepfake model attribution
Authors: Gabriel Pîrlogeanu, Adriana Stan, Horia Cucu
Published: 2026-03-13 18:04:33+00:00
Comment: Accepted for publication at ICASSP 2026
AI Summary
This paper systematically investigates how self-supervised learning (SSL)-derived features capture architectural signatures in audio deepfakes for model attribution. By controlling multiple dimensions of the audio generation process, the authors reveal how subtle perturbations in model checkpoints, text prompts, vocoders, or speaker identity influence attribution. The study provides new insights into the robustness, biases, and limitations of SSL-based deepfake attribution, highlighting both its strengths and vulnerabilities.
Abstract
Audio deepfake model attribution aims to mitigate the misuse of synthetic speech by identifying the source model responsible for generating a given audio sample, enabling accountability and informing vendors. The task is challenging: although self-supervised learning (SSL)-derived acoustic features have demonstrated state-of-the-art attribution capabilities, the underlying factors driving their success and the limits of their discriminative power remain unclear. In this paper, we systematically investigate how SSL-derived features capture architectural signatures in audio deepfakes. By controlling multiple dimensions of the audio generation process, we reveal how subtle perturbations in model checkpoints, text prompts, vocoders, or speaker identity influence attribution. Our results provide new insights into the robustness, biases, and limitations of SSL-based deepfake attribution, highlighting both its strengths and vulnerabilities in realistic scenarios.
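As a rough illustration of the attribution setup the abstract describes, closed-set model attribution can be framed as classification over fixed-dimensional embeddings. The sketch below is purely hypothetical and is not the paper's method: real systems would extract embeddings from an SSL model (e.g., wav2vec 2.0 layers), whereas here they are simulated as Gaussian clusters whose means stand in for per-generator "architectural signatures", and a simple nearest-centroid rule plays the role of the attribution classifier.

```python
import numpy as np

# Hypothetical simulation: each source model (generator) leaves a distinct
# mean offset in embedding space, mimicking an architectural signature.
rng = np.random.default_rng(0)
DIM, N_PER_MODEL = 32, 200
generators = ["tts_A", "tts_B", "vocoder_C"]  # illustrative names only

means = {g: rng.normal(scale=2.0, size=DIM) for g in generators}
train = {g: means[g] + rng.normal(size=(N_PER_MODEL, DIM)) for g in generators}

# Nearest-centroid attribution: assign a sample to the closest class centroid.
centroids = {g: x.mean(axis=0) for g, x in train.items()}

def attribute(embedding):
    """Return the generator whose centroid is nearest to the embedding."""
    return min(centroids, key=lambda g: np.linalg.norm(embedding - centroids[g]))

# Evaluate on fresh samples drawn from each simulated generator.
correct = total = 0
for g in generators:
    for emb in means[g] + rng.normal(size=(50, DIM)):
        correct += attribute(emb) == g
        total += 1
accuracy = correct / total
print(f"closed-set attribution accuracy: {accuracy:.2f}")
```

The paper's controlled perturbations (checkpoints, prompts, vocoders, speakers) can be thought of as shifting or overlapping these clusters, which is what makes attribution succeed or fail in practice.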