Deepfake Word Detection by Next-token Prediction using Fine-tuned Whisper
Authors: Hoan My Tran, Xin Wang, Wanying Ge, Xuechen Liu, Junichi Yamagishi
Published: 2026-02-26 06:17:56+00:00
AI Summary
This paper proposes a cost-effective method to detect synthetic words within deepfake speech utterances by fine-tuning a pre-trained Whisper model. The approach integrates synthetic word detection into the speech transcription task via next-token prediction, avoiding significant architectural changes. Experiments demonstrate that the fine-tuned Whisper achieves low synthetic-word and transcription error rates on in-domain data, performing comparably to a dedicated ResNet-based model, though generalization to out-of-domain data needs improvement.
Abstract
Deepfake speech utterances can be forged by replacing one or more words in a bona fide utterance with semantically different words synthesized by speech generative models. While a dedicated synthetic word detector could be developed, we investigate a cost-effective method that fine-tunes a pre-trained Whisper model to detect synthetic words while transcribing the input utterance via next-token prediction. We further investigate using partially vocoded utterances as the fine-tuning data, thereby reducing the cost of data collection. Our experiments demonstrate that, on in-domain test data, the fine-tuned Whisper yields low synthetic-word detection error rates and transcription error rates. On out-of-domain test data with synthetic words produced by unseen speech generative models, the fine-tuned Whisper remains on par with a dedicated ResNet-based detection model; however, the overall performance degradation calls for strategies to improve its generalization capability.
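One way to picture how detection can ride on next-token prediction (a minimal sketch, not the authors' code): the fine-tuning target transcript could wrap synthetic words in marker tokens, so the model learns to emit the markers as ordinary tokens while transcribing. The `<fake>`/`</fake>` markers and the word-level label format below are assumptions for illustration; the paper's actual token scheme may differ.

```python
# Sketch: build a fine-tuning target transcript in which synthetic words
# are wrapped in hypothetical marker tokens, so synthetic-word detection
# reduces to ordinary next-token prediction during transcription.
# The <fake>/</fake> markers and boolean word labels are illustrative
# assumptions, not the paper's actual tokenization.

FAKE_OPEN, FAKE_CLOSE = "<fake>", "</fake>"

def build_target(words, is_synthetic):
    """words: transcript words; is_synthetic: parallel bool list marking
    which words were replaced by a speech generative model."""
    assert len(words) == len(is_synthetic)
    out, inside = [], False
    for word, fake in zip(words, is_synthetic):
        if fake and not inside:      # entering a synthetic span
            out.append(FAKE_OPEN)
            inside = True
        elif not fake and inside:    # leaving a synthetic span
            out.append(FAKE_CLOSE)
            inside = False
        out.append(word)
    if inside:                       # close a span that ends the utterance
        out.append(FAKE_CLOSE)
    return " ".join(out)

if __name__ == "__main__":
    words = ["the", "meeting", "is", "cancelled", "today"]
    labels = [False, False, False, True, False]
    print(build_target(words, labels))
    # -> the meeting is <fake> cancelled </fake> today
```

In this framing, fine-tuning Whisper on (audio, marked transcript) pairs needs no architectural change: the marker strings are simply added to the target text (or registered as special tokens), and the standard sequence-to-sequence loss covers both transcription and detection.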