Towards Reliable Audio Deepfake Attribution and Model Recognition: A Multi-Level Autoencoder-Based Framework

Authors: Andrea Di Pierno, Luca Guarnera, Dario Allegra, Sebastiano Battiato

Published: 2025-08-04 15:31:13+00:00

AI Summary

This paper introduces LAVA, a hierarchical framework for audio deepfake detection and model recognition. LAVA uses a convolutional autoencoder to extract latent representations from fake audio, which are then classified by two specialized classifiers for attribution and model recognition, achieving F1-scores above 95% on public benchmarks.

Abstract

The proliferation of audio deepfakes poses a growing threat to trust in digital communications. While detection methods have advanced, attributing audio deepfakes to their source models remains an underexplored yet crucial challenge. In this paper, we introduce LAVA (Layered Architecture for Voice Attribution), a hierarchical framework for audio deepfake detection and model recognition that leverages attention-enhanced latent representations extracted by a convolutional autoencoder trained solely on fake audio. Two specialized classifiers operate on these features: Audio Deepfake Attribution (ADA), which identifies the generation technology, and Audio Deepfake Model Recognition (ADMR), which recognizes the specific generative model instance. To improve robustness under open-set conditions, we incorporate confidence-based rejection thresholds. Experiments on ASVspoof2021, FakeOrReal, and CodecFake show strong performance: the ADA classifier achieves F1-scores over 95% across all datasets, and the ADMR module reaches 96.31% macro F1 across six classes. Additional tests on unseen attacks from ASVspoof2019 LA and error propagation analysis confirm LAVA's robustness and reliability. The framework advances the field by introducing a supervised approach to deepfake attribution and model recognition under open-set conditions, validated on public benchmarks. Models and code are available at https://www.github.com/adipiz99/lava-framework.


Key findings
The ADA classifier achieved F1-scores over 95% across all datasets. The ADMR module reached 96.31% macro F1 across six classes. Tests on unseen attacks from ASVspoof2019 LA and error propagation analysis confirmed LAVA's robustness and reliability.
Approach
LAVA uses a convolutional autoencoder trained solely on fake audio to extract latent representations. These representations are fed into two classifiers: one that attributes the deepfake to its source generation technology (ADA) and another that recognizes the specific generative model instance (ADMR). A confidence-based rejection threshold improves robustness under open-set conditions by withholding a prediction when the classifier is not sufficiently confident.
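The confidence-based rejection idea can be illustrated with a minimal sketch. The paper does not publish the exact decision rule here; the function below assumes a common formulation in which a sample is rejected as "unknown" whenever the top softmax probability falls below a threshold. The class labels are hypothetical placeholders, not the actual ADA categories.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = np.asarray(logits, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify_with_rejection(logits, threshold=0.9, labels=None, reject_label="unknown"):
    """Return the predicted label, or `reject_label` when the top
    softmax confidence is below `threshold` (open-set rejection)."""
    probs = softmax(logits)
    top = int(probs.argmax())
    if probs[top] < threshold:
        return reject_label
    return labels[top] if labels is not None else top

# Hypothetical logits over three example generation technologies.
labels = ["TTS", "VC", "codec"]
print(classify_with_rejection([4.0, 0.5, 0.2], 0.9, labels))  # confident -> "TTS"
print(classify_with_rejection([1.0, 0.9, 0.8], 0.9, labels))  # ambiguous -> "unknown"
```

In a hierarchical setup like LAVA's, such a check can be applied at each stage, so that an input rejected by the attribution classifier is never passed on to model recognition.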
Datasets
ASVspoof2021, FakeOrReal, CodecFake, ASVspoof2019 LA
Model(s)
Convolutional autoencoder, two specialized classifiers (Audio Deepfake Attribution and Audio Deepfake Model Recognition) with attention modules.
Author countries
Italy