Gender Fairness in Audio Deepfake Detection: Performance and Disparity Analysis
Authors: Aishwarya Fursule, Shruti Kshirsagar, Anderson R. Avila
Published: 2026-03-09 22:52:12+00:00
Comment: 6 pages, 3 Figures
AI Summary
This paper conducts a thorough analysis of gender fairness in audio deepfake detection models, an area previously underexplored. The study uses the ASVspoof 5 dataset, training a ResNet-18 classifier with various audio features and comparing it against the baseline AASIST model. It incorporates five established fairness metrics alongside conventional Equal Error Rate (EER) to quantify and understand gender-dependent performance disparities.
Abstract
Audio deepfake detection aims to distinguish real human voices from those generated by Artificial Intelligence (AI) and has emerged as a significant problem in the field of voice biometrics systems. With the ever-improving quality of synthetic voice, the probability of such a voice being exploited for illicit practices like identity theft and impersonation increases. Although significant progress has been made in the field of Audio Deepfake Detection in recent times, the issue of gender bias remains underexplored and in its nascent stage. In this paper, we present a thorough analysis of gender-dependent performance and fairness in audio deepfake detection models. We use the ASVspoof 5 dataset to train a ResNet-18 classifier, evaluate detection performance across four different audio features, and compare the results against the baseline AASIST model. Beyond conventional metrics such as Equal Error Rate (EER %), we incorporate five established fairness metrics to quantify gender disparities in the model. Our results show that even when the overall EER difference between genders appears low, fairness-aware evaluation reveals disparities in error distribution that are obscured by aggregate performance measures. These findings demonstrate that relying on standard metrics alone can be misleading, whereas fairness metrics provide critical insights into demographic-specific failure modes. This work highlights the importance of fairness-aware evaluation for developing more equitable, robust, and trustworthy audio deepfake detection systems.
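The abstract's central claim is that an aggregate EER can hide gender-dependent error gaps. The paper does not give its metric implementations, but the idea can be sketched in a few lines: compute EER separately per gender group and report the absolute gap, a minimal fairness probe (the helper names `compute_eer` and `gender_eer_gap` are illustrative, not from the paper, and this is a simple threshold sweep rather than the authors' exact five fairness metrics).

```python
def compute_eer(scores, labels):
    """EER: the operating point where false-accept rate (spoof scored as
    bona fide) and false-reject rate (bona fide scored as spoof) meet.
    labels: 1 = bona fide, 0 = spoof; higher score = more bona fide."""
    n_spoof = labels.count(0)
    n_bona = labels.count(1)
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(scores)):  # sweep every observed score as threshold
        far = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t) / n_spoof
        frr = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t) / n_bona
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

def gender_eer_gap(scores, labels, genders):
    """Per-group EER plus the absolute gap between the best and worst group.
    A low overall EER can coexist with a large gap here."""
    per_group = {}
    for g in set(genders):
        idx = [i for i, gg in enumerate(genders) if gg == g]
        per_group[g] = compute_eer([scores[i] for i in idx],
                                   [labels[i] for i in idx])
    vals = list(per_group.values())
    return per_group, abs(max(vals) - min(vals))
```

For example, a group with perfectly separated scores gets EER 0 while a group with fully overlapping scores gets EER 0.5; averaging them would report a moderate overall EER and mask the disparity, which is exactly the failure mode the paper's fairness metrics are meant to expose.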