Forensic deepfake audio detection using segmental speech features

Authors: Tianle Yang, Chengzhe Sun, Siwei Lyu, Phil Rose

Published: 2025-05-20 02:42:46+00:00

AI Summary

This research investigates the use of segmental speech features, specifically vowel formants, for deepfake audio detection. The study finds that these features, linked to human articulation, are more effective at identifying deepfakes than global features commonly used in forensic voice comparison, highlighting the need for distinct approaches in deepfake detection.

Abstract

This study explores the potential of using acoustic features of segmental speech sounds to detect deepfake audio. These features are highly interpretable because of their close relationship with human articulatory processes and are expected to be more difficult for deepfake models to replicate. The results demonstrate that certain segmental features commonly used in forensic voice comparison (FVC) are effective in identifying deepfakes, whereas some global features provide little value. These findings underscore the need to approach audio deepfake detection using methods that are distinct from those employed in traditional FVC, and offer a new perspective on leveraging segmental features for this purpose.


Key findings
Segmental features, particularly vowel formants, significantly outperformed global features, namely the long-term formant distribution (LTFD), long-term fundamental frequency (LTF0), and mel-frequency cepstral coefficients (MFCCs), in deepfake detection, achieving lower values of the log-likelihood-ratio cost (Cllr). This indicates that deepfake models struggle to accurately replicate the fine-grained detail of segmental speech characteristics, and it highlights the value of interpretable features for forensic applications.
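
Cllr, the log-likelihood-ratio cost, rewards likelihood ratios that are both discriminating and well calibrated: lower is better, and a system that always outputs LR = 1 scores exactly 1. A minimal sketch of this standard metric (NumPy assumed; function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def cllr(llr_genuine, llr_fake):
    """Log-likelihood-ratio cost over two trial sets.

    llr_genuine: natural-log LRs for trials where the genuine
        (real-speech) hypothesis is true.
    llr_fake:    natural-log LRs for trials where the deepfake
        hypothesis is true.
    Returns Cllr in bits; 0 is perfect, 1 matches an
    uninformative system that always outputs LR = 1.
    """
    llr_genuine = np.asarray(llr_genuine, dtype=float)
    llr_fake = np.asarray(llr_fake, dtype=float)
    # log2(1 + e^{-llr}) penalizes low LRs on genuine trials;
    # log2(1 + e^{+llr}) penalizes high LRs on deepfake trials.
    # logaddexp keeps both terms numerically stable for extreme LRs.
    c_gen = np.mean(np.logaddexp(0.0, -llr_genuine)) / np.log(2)
    c_fake = np.mean(np.logaddexp(0.0, llr_fake)) / np.log(2)
    return 0.5 * (c_gen + c_fake)
```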
Approach
The researchers extracted a segmental feature set (vowel formants) and global feature sets (LTF0, LTFD, and MFCCs) from real audio and from deepfake audio generated with ElevenLabs and Parrot AI. They then fitted Gaussian Mixture Models to compute likelihood ratios and evaluated performance with the Cllr and equal-error-rate (EER) metrics.
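
The paper does not name its measurement toolchain here; as one plausible instantiation, vowel formants can be measured at segment midpoints with Praat via the praat-parselmouth bindings. In this sketch the file path, midpoint time, and the 5500 Hz formant ceiling are illustrative assumptions:

```python
import parselmouth  # praat-parselmouth: Praat bindings; an assumed toolchain

def vowel_formants(wav_path, t_mid, n_formants=3):
    """Measure F1..Fn (Hz) at a vowel's temporal midpoint.

    t_mid is the midpoint (seconds) of a vowel segment, e.g. from a
    forced alignment; the segmentation step is outside this sketch.
    """
    snd = parselmouth.Sound(wav_path)
    # Burg-method formant tracking; a 5500 Hz ceiling suits many adult voices.
    formant = snd.to_formant_burg(maximum_formant=5500.0)
    return [formant.get_value_at_time(i, t_mid)
            for i in range(1, n_formants + 1)]

# Hypothetical usage: F1-F3 at 0.42 s in "sample.wav".
# print(vowel_formants("sample.wav", 0.42))
```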
Datasets
LJ Speech, M-AILABS Speech Dataset, and YouTube recordings of native US English speakers.
Model(s)
Gaussian Mixture Models (GMMs)
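
To make the scoring pipeline concrete, here is a hedged end-to-end sketch of GMM-based likelihood-ratio scoring and EER evaluation with scikit-learn. The component count, covariance type, and the synthetic formant-like features are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Toy stand-ins for per-token [F1, F2, F3] measurements in Hz;
# a real study would use features like those described above.
X_real = rng.normal([500, 1500, 2500], 80, size=(400, 3))
X_fake = rng.normal([520, 1450, 2400], 60, size=(400, 3))

def fit_gmm(X, n_components=4):
    # Diagonal covariances and 4 components are illustrative choices.
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag",
                           random_state=0).fit(X)

gmm_real, gmm_fake = fit_gmm(X_real), fit_gmm(X_fake)

# Held-out tokens: the log-likelihood ratio contrasts the
# "real speech" and "deepfake" hypotheses per feature vector.
X_test = np.vstack([rng.normal([500, 1500, 2500], 80, size=(100, 3)),
                    rng.normal([520, 1450, 2400], 60, size=(100, 3))])
y_test = np.array([1] * 100 + [0] * 100)  # 1 = real, 0 = deepfake
llr = gmm_real.score_samples(X_test) - gmm_fake.score_samples(X_test)

# Equal error rate: the operating point where the miss rate
# (1 - TPR) and the false-alarm rate (FPR) coincide.
fpr, tpr, _ = roc_curve(y_test, llr)
eer = fpr[np.nanargmin(np.abs(fpr - (1 - tpr)))]
print(f"EER ~ {eer:.3f}")
```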
Author countries
USA, Australia