A Data-Driven Diffusion-based Approach for Audio Deepfake Explanations

Authors: Petr Grinberg, Ankur Kumar, Surya Koppisetti, Gaurav Bharaj

Published: 2025-06-03 22:10:53+00:00

AI Summary

This paper introduces a data-driven approach for explaining audio deepfakes using a diffusion model. It uses the difference between paired real and vocoded audio as ground-truth supervision to train the model to identify artifact regions in deepfake audio. Experimental results show the method outperforms traditional explainability techniques.

Abstract

Evaluating explainability techniques, such as SHAP and LRP, in the context of audio deepfake detection is challenging due to the lack of clear ground-truth annotations. In cases where ground truth can be obtained, we find that these methods struggle to provide accurate explanations. In this work, we propose a novel data-driven approach to identify artifact regions in deepfake audio. We consider paired real and vocoded audio, and use the difference in their time-frequency representations as the ground-truth explanation. The difference signal then serves as supervision to train a diffusion model to expose the deepfake artifacts in a given vocoded audio. Experimental results on the VocV4 and LibriSeVoc datasets demonstrate that our method outperforms traditional explainability techniques, both qualitatively and quantitatively.


Key findings
The proposed diffusion-based approach significantly outperforms traditional explainability methods (DeepSHAP, GradientSHAP, AttnLRP) both qualitatively and quantitatively in identifying deepfake artifacts. The method generalizes well across different datasets (VocV4 and LibriSeVoc) and demonstrates superior alignment with ground truth explanations.
Approach
The authors use paired real and vocoded audio to generate ground truth explanations by calculating the difference in their time-frequency representations. A diffusion model (SegDiff) is trained on these differences to predict artifact regions in new vocoded audio, offering explanations for audio deepfake detection.
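The ground-truth construction described above can be sketched in a few lines of NumPy: compute log-magnitude spectrograms of the paired real and vocoded signals, take their absolute difference, and normalize it into an artifact mask. This is an illustrative sketch, not the paper's exact pipeline; the FFT size, hop length, log compression, and normalization are assumptions chosen for the example.

```python
import numpy as np

def stft_logmag(x, n_fft=512, hop=128):
    """Log-magnitude STFT via a simple framed FFT with a Hann window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(spec)  # log compression stabilizes the dynamic range

def artifact_mask(real, vocoded, n_fft=512, hop=128):
    """Ground-truth explanation: normalized |real - vocoded| difference
    in the time-frequency domain, scaled to [0, 1]."""
    diff = np.abs(stft_logmag(real, n_fft, hop)
                  - stft_logmag(vocoded, n_fft, hop))
    return diff / (diff.max() + 1e-8)

# Toy paired example: a "vocoded" copy is the real signal plus noise
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000
real = np.sin(2 * np.pi * 440 * t)
vocoded = real + 0.05 * rng.standard_normal(real.shape)
mask = artifact_mask(real, vocoded)
print(mask.shape)  # (frames, freq_bins); values lie in [0, 1]
```

A mask like this, paired with the vocoded spectrogram, would form one training example for the diffusion model, which learns to predict the mask from the vocoded input alone.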
Datasets
VocV4 and LibriSeVoc datasets
Model(s)
SegDiff (diffusion model), Wav2Vec2-AASIST (for feature extraction in ADDSegDiff), and classical XAI techniques (DeepSHAP, GradientSHAP, AttnLRP)
Author countries
Switzerland, USA