Human perception of audio deepfakes: the role of language and speaking style
Authors: Eugenia San Segundo, Aurora López-Jareño, Xin Wang, Junichi Yamagishi
Published: 2025-12-10 01:04:59+00:00
Comment: Submitted to Speech Communication
AI Summary
This study investigates human perception of audio deepfakes, examining how language, speaking style, and voice familiarity influence detection accuracy, and probing the reasons underlying listeners' judgments. Through a perceptual experiment with native Spanish and Japanese speakers, the research reveals an average accuracy of 59.11%, with higher performance on authentic samples. It highlights that listeners rely primarily on suprasegmental and higher-level linguistic or extralinguistic characteristics for detection, with notable cross-linguistic differences in perceptual strategies.
Abstract
Audio deepfakes have reached a level of realism that makes it increasingly difficult to distinguish between human and artificial voices, which poses risks such as identity theft or the spread of disinformation. Despite these concerns, research on humans' ability to identify deepfakes is limited: most studies focus on English, and very few explore the reasons behind listeners' perceptual decisions. This study addresses this gap through a perceptual experiment in which 54 listeners (28 native Spanish speakers and 26 native Japanese speakers) classified voices as natural or synthetic and justified their choices. The experiment included 80 stimuli (50% artificial), organized according to three variables: language (Spanish/Japanese), speaking style (audiobooks/interviews), and familiarity with the voice (familiar/unfamiliar). The goal was to examine how these variables influence detection and to analyze qualitatively the reasoning behind listeners' perceptual decisions. Results indicate an average accuracy of 59.11%, with higher performance on authentic samples. Judgments of vocal naturalness rely on a combination of linguistic and non-linguistic cues. Comparing Japanese and Spanish listeners, our qualitative analysis further reveals both shared cues and notable cross-linguistic differences in how listeners conceptualize the humanness of speech. Overall, participants relied primarily on suprasegmental and higher-level or extralinguistic characteristics (such as intonation, rhythm, fluency, pauses, speed, breathing, and laughter) over segmental features. These findings underscore the complexity of human perceptual strategies in distinguishing natural from artificial speech, and they partly align with prior research emphasizing the importance of prosody and of phenomena typical of spontaneous speech, such as disfluencies.
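For readers who want to compute this kind of breakdown on their own data, the sketch below shows one way the per-condition accuracies for such a 2x2x2 design (language x speaking style x familiarity, plus authentic vs. artificial ground truth) could be tabulated with pandas. The response log, its column names, and the dummy rows are illustrative assumptions, not the authors' data or code.

```python
import pandas as pd

# Hypothetical response log: one row per (listener, stimulus) trial.
# Column names and values are illustrative assumptions, not the
# authors' actual data format.
trials = pd.DataFrame({
    "listener_l1":  ["es", "es", "ja", "ja"],          # listener's native language
    "language":     ["es", "ja", "es", "ja"],          # stimulus language
    "style":        ["audiobook", "interview", "audiobook", "interview"],
    "familiar":     [True, False, False, True],        # familiarity with the voice
    "is_synthetic": [False, True, True, False],        # ground truth
    "judged_synth": [False, True, False, False],       # listener's response
})

# A trial is correct when the judgment matches the ground truth.
trials["correct"] = trials["judged_synth"] == trials["is_synthetic"]

# Overall accuracy (the paper reports 59.11% across all listeners and stimuli).
print(f"overall accuracy: {trials['correct'].mean():.2%}")

# Accuracy broken down by the three stimulus variables and by ground truth,
# mirroring the 2x2x2 design (language x speaking style x familiarity).
by_condition = (
    trials.groupby(["language", "style", "familiar", "is_synthetic"])["correct"]
          .mean()
)
print(by_condition)
```

Grouping by the ground-truth label alongside the three design variables makes the paper's headline asymmetry (higher accuracy on authentic than on artificial samples) directly visible in the same table.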