From Sharpness to Better Generalization for Speech Deepfake Detection

Authors: Wen Huang, Xuechen Liu, Xin Wang, Junichi Yamagishi, Yanmin Qian

Published: 2025-06-13 07:36:31+00:00

AI Summary

This paper investigates sharpness as a theoretical proxy for generalization in speech deepfake detection. Applying Sharpness-Aware Minimization (SAM) improves model robustness and stability across diverse unseen datasets, and a correlation analysis confirms a statistically significant relationship between sharpness and generalization performance.

Abstract

Generalization remains a critical challenge in speech deepfake detection (SDD). While various approaches aim to improve robustness, generalization is typically assessed through performance metrics like equal error rate without a theoretical framework to explain model performance. This work investigates sharpness as a theoretical proxy for generalization in SDD. We analyze how sharpness responds to domain shifts and find it increases in unseen conditions, indicating higher model sensitivity. Based on this, we apply Sharpness-Aware Minimization (SAM) to reduce sharpness explicitly, leading to better and more stable performance across diverse unseen test sets. Furthermore, correlation analysis confirms a statistically significant relationship between sharpness and generalization in most test settings. These findings suggest that sharpness can serve as a theoretical indicator for generalization in SDD and that sharpness-aware training offers a promising strategy for improving robustness.
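For reference, sharpness is commonly quantified as the worst-case loss increase within a small neighborhood of the weights, and SAM trains against that worst case. The formulation below is the standard one from Foret et al. (2021); the paper's exact notation and perturbation radius may differ.

```latex
% Standard definitions (Foret et al., 2021); notation may differ from the paper.
% Sharpness: worst-case loss increase within an L2 ball of radius \rho around w.
S_\rho(w) \;=\; \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon) \;-\; L(w)

% SAM objective: minimize the perturbed (worst-case) training loss.
\min_w \; \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon)
```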


Key findings
- SAM consistently improves generalization across models and datasets, especially in high-mismatch scenarios.
- A statistically significant correlation between sharpness and generalization performance holds on most out-of-distribution test sets (see the sketch after this list).
- Self-supervised learning (SSL) based models generally outperform non-SSL models, and SAM further boosts their performance.
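A minimal sketch of how such a correlation could be tested, assuming per-run sharpness estimates are paired with EER on an unseen test set; the values and the choice of Spearman correlation are illustrative placeholders, not results or settings from the paper.

```python
# Hypothetical check of the sharpness-generalization relationship.
from scipy import stats

sharpness = [0.12, 0.18, 0.25, 0.31, 0.40]  # placeholder per-run sharpness estimates
eer = [3.1, 3.8, 4.9, 5.6, 7.2]             # matching placeholder EER (%) on an unseen test set

rho, p_value = stats.spearmanr(sharpness, eer)  # rank correlation and significance
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```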
Approach
The authors investigate the relationship between model sharpness and generalization in speech deepfake detection. They first analyze how sharpness responds to domain shifts, then apply Sharpness-Aware Minimization (SAM) during training to reduce sharpness explicitly, improving robustness and stability on unseen test sets.
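A minimal PyTorch sketch of the standard SAM two-step update (Foret et al., 2021), shown for illustration only; the model, optimizer, and `rho` are placeholders, and this is not the authors' exact training code.

```python
import torch

def sam_step(model, criterion, optimizer, x, y, rho=0.05):
    """One SAM update: ascend to the worst-case neighbor, then descend."""
    # 1) First forward/backward pass: gradients at the current weights w.
    loss = criterion(model(x), y)
    loss.backward()

    # 2) Perturb to w + e(w), where e(w) = rho * g / ||g|| (worst-case direction).
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
        scale = rho / (grad_norm + 1e-12)
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                continue
            e_w = p.grad * scale
            p.add_(e_w)                 # perturb weights in place
            perturbations.append((p, e_w))

    # 3) Second forward/backward pass: gradients at the perturbed point.
    optimizer.zero_grad()
    criterion(model(x), y).backward()

    # 4) Restore the original weights and apply the base optimizer step.
    with torch.no_grad():
        for p, e_w in perturbations:
            p.sub_(e_w)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```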
Datasets
ASVspoof 2019 LA, ASVspoof 2021 LA, ASVspoof 2021 DF, In-The-Wild (ITW), Fake-Or-Real (FOR), WaveFake (WF), ADD 2022, SpoofCeleb
Model(s)
AASIST, Wav2Vec 2.0 (Base and Large), XLS-R
Author countries
China, Japan