Addressing Gradient Misalignment in Data-Augmented Training for Robust Speech Deepfake Detection

Authors: Duc-Tuan Truong, Tianchi Liu, Junjie Li, Ruijie Tao, Kong Aik Lee, Eng Siong Chng

Published: 2025-09-25 02:31:54+00:00

AI Summary

The paper investigates and addresses gradient misalignment between original and augmented inputs during data-augmented training for speech deepfake detection (SDD). The authors propose a Dual-Path Data-Augmented (DPDA) training framework that processes both inputs in parallel and applies gradient alignment techniques to resolve conflicting parameter updates. This approach stabilizes optimization, accelerates convergence, and improves robustness across benchmark datasets.

Abstract

In speech deepfake detection (SDD), data augmentation (DA) is commonly used to improve model generalization across varied speech conditions and spoofing attacks. However, during training, the backpropagated gradients from original and augmented inputs may misalign, which can result in conflicting parameter updates. These conflicts could hinder convergence and push the model toward suboptimal solutions, thereby reducing the benefits of DA. To investigate and address this issue, we design a dual-path data-augmented (DPDA) training framework with gradient alignment for SDD. In our framework, each training utterance is processed through two input paths: one using the original speech and the other with its augmented version. This design allows us to compare and align their backpropagated gradient directions to reduce optimization conflicts. Our analysis shows that approximately 25% of training iterations exhibit gradient conflicts between the original inputs and their augmented counterparts when using RawBoost augmentation. By resolving these conflicts with gradient alignment, our method accelerates convergence by reducing the number of training epochs and achieves up to an 18.69% relative reduction in Equal Error Rate on the In-the-Wild dataset compared to the baseline.


Key findings
Analysis revealed that the DPDA baseline experiences gradient conflicts in approximately 25% of training iterations when using RawBoost augmentation. By resolving these conflicts with gradient alignment (PCGrad), the method accelerates convergence by 43% and achieves up to an 18.69% relative reduction in EER on the In-the-Wild dataset compared to the baseline. The approach showed consistent performance improvement across all tested SDD architectures and diverse data augmentation strategies.
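The conflict statistic above (roughly 25% of iterations) follows from a simple criterion: an iteration counts as conflicting when the gradient from the original input and the gradient from its augmented counterpart have a negative dot product. A minimal sketch of measuring that rate, assuming per-iteration gradients have been flattened into vectors (the function name `conflict_rate` is illustrative, not from the paper):

```python
import numpy as np

def conflict_rate(grads_orig: np.ndarray, grads_aug: np.ndarray) -> float:
    """Fraction of iterations whose original-input and augmented-input
    gradients point in opposing directions (negative dot product).

    grads_orig, grads_aug: arrays of shape (num_iterations, num_params),
    one flattened gradient vector per training iteration.
    """
    dots = np.einsum("ij,ij->i", grads_orig, grads_aug)
    return float(np.mean(dots < 0))

# Toy example: 4 iterations, 2 parameters each.
g_orig = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
g_aug  = np.array([[-1.0, 0.0], [0.0, 1.0], [1.0, -2.0], [1.0, 1.0]])
print(conflict_rate(g_orig, g_aug))  # 2 of 4 dot products are negative -> 0.5
```

In practice the gradients would come from two backward passes through the shared SDD model, one per input path, before the optimizer step.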
Approach
The proposed DPDA framework feeds both the original and augmented versions of an utterance into the shared SDD model simultaneously, computing separate gradients. Gradient alignment methods, such as PCGrad, GradVac, or CAGrad, are then applied to adjust parameter updates whenever the original and augmented gradients conflict (i.e., point in opposing directions). This ensures consistent optimization toward spoof-related cues rather than augmentation artifacts.
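For the two-path case, the PCGrad rule mentioned above reduces to a pairwise projection: when the two gradients conflict, each is projected onto the normal plane of the other before they are summed. A minimal NumPy sketch under that assumption (flattened gradient vectors; the function name `pcgrad_pair` is hypothetical, and GradVac/CAGrad would substitute different adjustment rules):

```python
import numpy as np

def pcgrad_pair(g_orig: np.ndarray, g_aug: np.ndarray) -> np.ndarray:
    """Two-task PCGrad: if the gradients conflict (negative dot product),
    project each onto the normal plane of the other, then sum.
    If they do not conflict, the plain sum is returned unchanged.
    """
    g1, g2 = g_orig.astype(float).copy(), g_aug.astype(float).copy()
    if np.dot(g_orig, g_aug) < 0:
        # Remove from each gradient its component along the other.
        g1 -= (np.dot(g_orig, g_aug) / np.dot(g_aug, g_aug)) * g_aug
        g2 -= (np.dot(g_aug, g_orig) / np.dot(g_orig, g_orig)) * g_orig
    return g1 + g2

# Conflicting pair: dot([1,0], [-1,1]) = -1 < 0, so both are projected.
update = pcgrad_pair(np.array([1.0, 0.0]), np.array([-1.0, 1.0]))
print(update)  # [0.5 1.5]
```

After projection, neither surviving component opposes the other path's gradient, which is why the combined update no longer pulls the shared parameters in contradictory directions.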
Datasets
ASVspoof2019 Logical Access (LA), ASVspoof2021 DF (21DF), In-the-Wild (ITW), Fake-or-Real (FoR) norm-test subset, MUSAN, RIR.
Model(s)
XLSR-AASIST, XLSR-Conformer-TCM, XLSR-Mamba
Author countries
Singapore, Hong Kong