Lightweight Resolution-Aware Audio Deepfake Detection via Cross-Scale Attention and Consistency Learning

Authors: K. A. Shahriar

Published: 2026-01-10 13:08:58+00:00

AI Summary

This paper introduces a lightweight, resolution-aware audio deepfake detection framework that explicitly models and aligns multi-resolution spectral representations. It uses cross-scale attention and consistency learning to improve robustness under channel distortions, replay attacks, and real-world recording conditions, and it achieves strong performance across three benchmarks while remaining computationally efficient.

Abstract

Audio deepfake detection has become increasingly challenging due to rapid advances in speech synthesis and voice conversion technologies, particularly under channel distortions, replay attacks, and real-world recording conditions. This paper proposes a resolution-aware audio deepfake detection framework that explicitly models and aligns multi-resolution spectral representations through cross-scale attention and consistency learning. Unlike conventional single-resolution or implicit feature-fusion approaches, the proposed method enforces agreement across complementary time-frequency scales. The proposed framework is evaluated on three representative benchmarks: ASVspoof 2019 (LA and PA), the Fake-or-Real (FoR) dataset, and the In-the-Wild Audio Deepfake dataset under a speaker-disjoint protocol. The method achieves near-perfect performance on ASVspoof LA (EER 0.16%), strong robustness on ASVspoof PA (EER 5.09%), FoR rerecorded audio (EER 4.54%), and in-the-wild deepfakes (AUC 0.98, EER 4.81%), significantly outperforming single-resolution and non-attention baselines under challenging conditions. The proposed model remains lightweight and efficient, requiring only 159k parameters and less than 1 GFLOP per inference, making it suitable for practical deployment. Comprehensive ablation studies confirm the critical contributions of cross-scale attention and consistency learning, while gradient-based interpretability analysis reveals that the model learns resolution-consistent and semantically meaningful spectral cues across diverse spoofing conditions. These results demonstrate that explicit cross-resolution modeling provides a principled, robust, and scalable foundation for next-generation audio deepfake detection systems.


Key findings
The proposed framework achieved near-perfect performance on ASVspoof LA (EER 0.16%) and demonstrated strong robustness on ASVspoof PA (EER 5.09%), FoR rerecorded audio (EER 4.54%), and in-the-wild deepfakes (AUC 0.98, EER 4.81%). It significantly outperformed single-resolution and non-attention baselines while remaining lightweight, with only 159k parameters and under 1 GFLOP per inference.
Approach
The method addresses audio deepfake detection by extracting multi-resolution log-Mel spectrograms from input audio, which are then processed by a shared convolutional encoder. A cross-scale attention module dynamically fuses these resolution-specific features, and a consistency learning objective enforces alignment among embeddings for bona fide speech, promoting resolution-invariant representations.
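A minimal sketch of this front-end and the consistency objective is given below, assuming hypothetical FFT sizes, hop lengths, Mel-bin count, and a squared-distance form for the consistency term; none of these values are specified in the summary, so treat this as an illustration of the idea rather than the paper's implementation.

    # Sketch of the multi-resolution log-Mel front-end and the
    # consistency objective. All hyperparameters here are assumptions.
    import torch
    import torchaudio

    SAMPLE_RATE = 16000
    # Three hypothetical time-frequency scales: fine, medium, coarse.
    SCALES = [(512, 128), (1024, 256), (2048, 512)]  # (n_fft, hop_length)

    mel_transforms = [
        torchaudio.transforms.MelSpectrogram(
            sample_rate=SAMPLE_RATE, n_fft=n_fft, hop_length=hop, n_mels=64
        )
        for n_fft, hop in SCALES
    ]

    def multi_resolution_log_mel(waveform):
        """Return one log-Mel spectrogram per time-frequency scale."""
        return [torch.log(t(waveform) + 1e-6) for t in mel_transforms]

    def consistency_loss(embeddings, is_bonafide):
        """Penalize disagreement among per-scale embeddings of bona fide
        speech, encouraging resolution-invariant representations.
        embeddings: list of (batch, dim) tensors, one per scale."""
        stacked = torch.stack(embeddings)           # (scales, batch, dim)
        mean = stacked.mean(dim=0, keepdim=True)    # cross-scale centroid
        dist = ((stacked - mean) ** 2).sum(dim=-1)  # (scales, batch)
        # Applied only to bona fide utterances, as described above.
        return (dist * is_bonafide.float()).mean()

In training, this consistency term would presumably be added to the standard classification loss with a weighting coefficient; that weighting is also an assumption here.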
Datasets
ASVspoof 2019 (Logical Access and Physical Access), Fake-or-Real (FoR) dataset (normalized, two-second, and rerecorded versions), In-the-Wild Audio Deepfake dataset.
Model(s)
Shared Convolutional Encoder (consisting of convolutional layers with ReLU activations and adaptive average pooling), Multi-head Self-Attention module (for cross-scale attention), and a lightweight linear classifier (Classification Head).
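The listed components could be wired together roughly as follows; the channel widths, embedding size, and head count are illustrative assumptions, not the paper's reported configuration (which totals 159k parameters).

    # Sketch of the encoder, cross-scale attention fusion, and classifier.
    # Layer sizes are hypothetical.
    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        """Shared convolutional encoder: conv + ReLU blocks followed by
        adaptive average pooling, applied to each resolution's spectrogram."""
        def __init__(self, embed_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, embed_dim, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )

        def forward(self, x):               # x: (batch, 1, mels, frames)
            return self.conv(x).flatten(1)  # (batch, embed_dim)

    class CrossScaleDetector(nn.Module):
        def __init__(self, embed_dim=64, num_heads=4):
            super().__init__()
            self.encoder = SharedEncoder(embed_dim)
            self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                              batch_first=True)
            self.classifier = nn.Linear(embed_dim, 2)  # bona fide vs. spoof

        def forward(self, spectrograms):    # list of (batch, 1, mels, frames)
            # One embedding per resolution from the shared encoder.
            embeds = torch.stack(
                [self.encoder(s) for s in spectrograms], dim=1
            )                               # (batch, scales, embed_dim)
            # Cross-scale attention fuses the resolution-specific features.
            fused, _ = self.attn(embeds, embeds, embeds)
            return self.classifier(fused.mean(dim=1)), embeds.unbind(dim=1)

The per-scale embeddings returned alongside the logits are what a consistency objective like the one sketched under Approach would operate on.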
Author countries
Bangladesh