Human perception of audio deepfakes: the role of language and speaking style

Authors: Eugenia San Segundo, Aurora López-Jareño, Xin Wang, Junichi Yamagishi

Published: 2025-12-10 01:04:59+00:00

Comment: Submitted to Speech Communication

AI Summary

This study investigates human perception of audio deepfakes, examining how language, speaking style, and voice familiarity influence detection accuracy, and exploring the reasons underlying listeners' judgments. In a perceptual experiment with native Spanish and Japanese speakers, listeners reached an average accuracy of 59.11%, with higher performance on authentic samples. The study finds that listeners rely primarily on suprasegmental and higher-level linguistic or extralinguistic characteristics for detection, with observable cross-linguistic differences in perceptual strategies.

Abstract

Audio deepfakes have reached a level of realism that makes it increasingly difficult to distinguish between human and artificial voices, which poses risks such as identity theft or the spread of disinformation. Despite these concerns, research on humans' ability to identify deepfakes is limited, with most studies focusing on English and very few exploring the reasons behind listeners' perceptual decisions. This study addresses this gap through a perceptual experiment in which 54 listeners (28 native Spanish speakers and 26 native Japanese speakers) classified voices as natural or synthetic and justified their choices. The experiment included 80 stimuli (50% artificial), organized according to three variables: language (Spanish/Japanese), speech style (audiobooks/interviews), and familiarity with the voice (familiar/unfamiliar). The goal was to examine how these variables influence detection and to analyze qualitatively the reasoning behind listeners' perceptual decisions. Results indicate an average accuracy of 59.11%, with higher performance on authentic samples. Judgments of vocal naturalness rely on a combination of linguistic and non-linguistic cues. Comparing Japanese and Spanish listeners, our qualitative analysis further reveals both shared cues and notable cross-linguistic differences in how listeners conceptualize the humanness of speech. Overall, participants relied primarily on suprasegmental and higher-level or extralinguistic characteristics (such as intonation, rhythm, fluency, pauses, speed, breathing, and laughter) over segmental features. These findings underscore the complexity of human perceptual strategies in distinguishing natural from artificial speech and partly align with prior research emphasizing the importance of prosody and of phenomena typical of spontaneous speech, such as disfluencies.


Key findings
Listeners achieved an average accuracy of 59.11%, performing better on authentic samples. Judgments relied predominantly on suprasegmental and higher-level or extralinguistic cues such as intonation, rhythm, fluency, pauses, speed, breathing, and laughter, rather than on segmental features. While both listener groups cited some of the same cues, notable cross-linguistic differences emerged: Spanish listeners, particularly those with linguistic expertise, also referred to segmental details and used technical phonetic terms.
Approach
The researchers conducted a perceptual experiment with 54 human listeners (native Spanish and Japanese speakers) who classified 80 audio stimuli (50% artificial) as natural or synthetic. The stimuli varied by language (Spanish/Japanese), speech style (audiobooks/interviews), and familiarity with the voice. A qualitative analysis of listeners' open-ended justifications for their decisions was also performed to understand the cues they attended to.
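The paper does not describe an analysis script, but the accuracy figures it reports can be derived from trial-level responses in a balanced 2x2x2 design (language x speech style x familiarity). The sketch below is a minimal illustration of that computation; the column names (listener, language, style, familiar_voice, is_synthetic, judged_synthetic) are hypothetical and do not reflect the study's actual data layout.

```python
import pandas as pd

# Hypothetical long-format response table: one row per (listener, stimulus) trial.
# All column names and values here are illustrative assumptions, not the study's data.
df = pd.DataFrame({
    "listener":         ["S01", "S01", "J01", "J01"],
    "language":         ["es", "es", "ja", "ja"],                      # stimulus language
    "style":            ["audiobook", "interview", "audiobook", "interview"],
    "familiar_voice":   [True, False, True, False],
    "is_synthetic":     [True, False, False, True],                    # ground truth
    "judged_synthetic": [True, False, True, True],                     # listener's response
})

# Trial-level correctness: the judgment matches the ground-truth label.
df["correct"] = df["is_synthetic"] == df["judged_synthetic"]

# Overall accuracy (the paper reports an average of 59.11% across all trials).
print("overall accuracy:", df["correct"].mean())

# Accuracy split by ground truth, i.e. authentic vs. synthetic stimuli.
print(df.groupby("is_synthetic")["correct"].mean())

# Accuracy per experimental condition (language x style x familiarity).
print(df.groupby(["language", "style", "familiar_voice"])["correct"].mean())
```

Because the stimulus set is balanced (50% artificial), overall accuracy is simply the mean of the accuracies on authentic and synthetic trials, which is why stronger performance on authentic samples can lift the average even when synthetic detection is closer to chance.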
Datasets
LibriVox, YouTube (for audiobooks), VoxCeleb-ESP (for Spanish celebrity interviews), EACELEB (for Japanese celebrity interviews)
Model(s)
UNKNOWN
Author countries
Spain, Japan