A Data-Driven Diffusion-based Approach for Audio Deepfake Explanations
Authors: Petr Grinberg, Ankur Kumar, Surya Koppisetti, Gaurav Bharaj
Published: 2025-06-03 22:10:53+00:00
AI Summary
This paper introduces a data-driven approach for explaining audio deepfakes using a diffusion model. It uses the difference between paired real and vocoded audio as ground-truth supervision to train the model to identify artifact regions in deepfake audio. Experimental results show that this method outperforms traditional explainability techniques.
Abstract
Evaluating explainability techniques, such as SHAP and LRP, in the context of audio deepfake detection is challenging due to the lack of clear ground-truth annotations. In the cases where we are able to obtain ground truth, we find that these methods struggle to provide accurate explanations. In this work, we propose a novel data-driven approach to identify artifact regions in deepfake audio. We consider paired real and vocoded audio, and use the difference in their time-frequency representations as the ground-truth explanation. This difference signal then serves as supervision to train a diffusion model that exposes deepfake artifacts in a given vocoded audio sample. Experimental results on the VocV4 and LibriSeVoc datasets demonstrate that our method outperforms traditional explainability techniques, both qualitatively and quantitatively.
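The supervision signal described in the abstract can be constructed directly from paired data. Below is a minimal sketch of how such a ground-truth artifact map might be computed; the specific time-frequency representation (log-magnitude STFT), the length-alignment step, and all hyperparameters (sr, n_fft, hop_length) are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
import librosa


def artifact_ground_truth(real_path, vocoded_path, sr=16000, n_fft=512, hop_length=128):
    """Compute a ground-truth artifact map as the difference between the
    time-frequency representations of paired real and vocoded audio.

    NOTE: sr, n_fft, and hop_length are illustrative choices, not values
    taken from the paper.
    """
    real, _ = librosa.load(real_path, sr=sr)
    fake, _ = librosa.load(vocoded_path, sr=sr)

    # Trim to a common length; vocoder output can differ by a few samples.
    n = min(len(real), len(fake))
    real, fake = real[:n], fake[:n]

    # Log-magnitude spectrograms of both signals.
    spec_real = np.log1p(np.abs(librosa.stft(real, n_fft=n_fft, hop_length=hop_length)))
    spec_fake = np.log1p(np.abs(librosa.stft(fake, n_fft=n_fft, hop_length=hop_length)))

    # The absolute difference highlights the time-frequency bins where the
    # vocoder introduced artifacts; this map supervises the diffusion model.
    return np.abs(spec_fake - spec_real)
```

A map produced this way would serve as the regression target for the diffusion model, which at inference time would predict artifact regions for vocoded audio without access to the paired real recording.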