Towards Scalable AASIST: Refining Graph Attention for Speech Deepfake Detection
Authors: Ivan Viakhirev, Daniil Sirota, Aleksandr Smirnov, Kirill Borodin
Published: 2025-07-15 22:31:43+00:00
AI Summary
This paper refines the AASIST architecture for speech deepfake detection by freezing a Wav2Vec 2.0 encoder, replacing graph attention with multi-head attention, and using a trainable fusion layer. These modifications achieve a 7.6% equal error rate (EER) on the ASVspoof 5 corpus, improving on a re-implemented AASIST baseline trained under identical conditions.
Abstract
Advances in voice conversion and text-to-speech synthesis have made automatic speaker verification (ASV) systems more susceptible to spoofing attacks. This work explores modest refinements to the AASIST anti-spoofing architecture. It incorporates a frozen Wav2Vec 2.0 encoder to retain self-supervised speech representations in limited-data settings, replaces the original graph attention block with a standard multi-head attention module using heterogeneous query projections, and substitutes a trainable, context-aware integration layer for the heuristic frame-segment fusion. When evaluated on the ASVspoof 5 corpus, the proposed system reaches a 7.6% equal error rate (EER), improving on a re-implemented AASIST baseline under the same training conditions. Ablation experiments suggest that each architectural change contributes to the overall performance, indicating that targeted adjustments to established models may help strengthen speech deepfake detection in practical scenarios. The code is publicly available at https://github.com/KORALLLL/AASIST_SCALING.
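The first change, keeping the self-supervised front-end frozen, amounts to disabling gradients on the encoder so that only the downstream layers are trained. A minimal sketch, assuming the Hugging Face transformers implementation of Wav2Vec 2.0 (the checkpoint name below is illustrative, not necessarily the paper's exact choice):

```python
import torch
from transformers import Wav2Vec2Model

# Load a pretrained Wav2Vec 2.0 encoder and freeze it so its
# self-supervised representations are preserved during training.
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
encoder.eval()
for param in encoder.parameters():
    param.requires_grad = False

# One second of 16 kHz audio -> frame-level features for the classifier.
waveform = torch.randn(1, 16000)
with torch.no_grad():
    features = encoder(waveform).last_hidden_state  # (1, num_frames, hidden_dim)
```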
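For the attention swap, one plausible reading of "heterogeneous query projections" is a standard multi-head attention module whose queries come from branch-specific linear maps (e.g. the spectral and temporal node sets in AASIST's graph). The sketch below is an assumption about that design, not the paper's exact module; the dimensions and names are illustrative:

```python
import torch
import torch.nn as nn

class HeteroQueryAttention(nn.Module):
    """Multi-head attention with separate ("heterogeneous") query
    projections per input branch, standing in for graph attention."""
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.q_spectral = nn.Linear(dim, dim)  # queries for spectral nodes
        self.q_temporal = nn.Linear(dim, dim)  # queries for temporal nodes
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, spectral: torch.Tensor, temporal: torch.Tensor) -> torch.Tensor:
        # spectral: (B, Ns, dim), temporal: (B, Nt, dim)
        queries = torch.cat([self.q_spectral(spectral), self.q_temporal(temporal)], dim=1)
        keys_values = torch.cat([spectral, temporal], dim=1)
        out, _ = self.attn(queries, keys_values, keys_values)
        return out  # (B, Ns + Nt, dim)

block = HeteroQueryAttention()
out = block(torch.randn(2, 10, 64), torch.randn(2, 20, 64))
```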
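Finally, the heuristic frame-segment fusion (e.g. fixed max- or mean-pooling over the two levels) becomes a learned combination. A sketch of one simple gated variant, again an assumption rather than the paper's exact layer:

```python
import torch
import torch.nn as nn

class LearnedFusion(nn.Module):
    """Trainable, context-aware fusion of frame- and segment-level
    embeddings, replacing a fixed pooling heuristic."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, frame_emb: torch.Tensor, segment_emb: torch.Tensor) -> torch.Tensor:
        # frame_emb, segment_emb: (B, dim); the gate decides, per feature,
        # how much each level contributes to the fused embedding.
        g = torch.sigmoid(self.gate(torch.cat([frame_emb, segment_emb], dim=-1)))
        return g * frame_emb + (1.0 - g) * segment_emb

fusion = LearnedFusion()
fused = fusion(torch.randn(2, 64), torch.randn(2, 64))  # (2, 64)
```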