Deepfake Word Detection by Next-token Prediction using Fine-tuned Whisper

Authors: Hoan My Tran, Xin Wang, Wanying Ge, Xuechen Liu, Junichi Yamagishi

Published: 2026-02-26 06:17:56+00:00

AI Summary

This paper proposes a cost-effective method to detect synthetic words within deepfake speech utterances by fine-tuning a pre-trained Whisper model. The approach integrates synthetic word detection into the speech transcription task via next-token prediction, avoiding significant architectural changes. Experiments demonstrate that the fine-tuned Whisper achieves low synthetic-word and transcription error rates on in-domain data, performing comparably to a dedicated ResNet-based model, though generalization to out-of-domain data needs improvement.

Abstract

Deepfake speech utterances can be forged by replacing one or more words in a bona fide utterance with semantically different words synthesized by speech generative models. While a dedicated synthetic word detector could be developed, we investigate a cost-effective method that fine-tunes a pre-trained Whisper model to detect synthetic words while transcribing the input utterance via next-token prediction. We further investigate using partially vocoded utterances as the fine-tuning data, thereby reducing the cost of data collection. Our experiments demonstrate that, on in-domain test data, the fine-tuned Whisper yields low synthetic-word detection error rates and transcription error rates. On out-of-domain test data with synthetic words produced by unseen speech generative models, the fine-tuned Whisper remains on par with a dedicated ResNet-based detection model; however, the overall performance degradation calls for strategies to improve its generalization capability.


Key findings
On in-domain test data, the fine-tuned Whisper achieved low synthetic-word detection error rates and preserved transcription accuracy, performing on par with a dedicated ResNet-based detection model. However, performance significantly degraded on out-of-domain test data with unseen speech generative models, highlighting a need for strategies to improve generalization capabilities. Using vocoded data for fine-tuning did not fully alleviate degradation when synthetic words were from different domains or unseen synthesizers.
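The summary does not spell out how the synthetic-word detection error rate is scored. One simple word-level formulation, a hedged illustration rather than the paper's actual protocol (the function name, label encoding, and the assumption of aligned, equal-length sequences are all mine), is the fraction of words whose bona-fide/synthetic label disagrees between reference and hypothesis:

```python
def word_detection_error_rate(ref_labels, hyp_labels):
    """Fraction of words whose bona-fide (0) / synthetic (1) label
    differs between reference and hypothesis.

    Assumes both label sequences are already word-aligned and of
    equal length; real scoring would also need to handle insertion
    and deletion errors from the transcription.
    """
    if len(ref_labels) != len(hyp_labels):
        raise ValueError("label sequences must be aligned and equal-length")
    errors = sum(r != h for r, h in zip(ref_labels, hyp_labels))
    return errors / len(ref_labels)


# Reference: word 2 is synthetic; the model flagged words 2 and 3,
# so 1 of 4 word labels is wrong.
rate = word_detection_error_rate([0, 0, 1, 0], [0, 0, 1, 1])
# rate == 0.25
```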
Approach
The authors fine-tune a pre-trained Whisper ASR model to detect synthetic words by augmenting the target text token sequences with special markers (<TOF>, <EOF>) placed around synthetic words. This turns detection into a next-token prediction problem performed jointly with transcription, requiring no architectural changes. To reduce data collection costs, they also investigate fine-tuning on partially vocoded utterances.
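The paper's data-preparation code is not reproduced here; the marker-augmentation step can be sketched as follows, with the function name and the (start, end) word-index span format being assumptions of this sketch. The idea is simply to wrap each synthetic word span in the target transcript with <TOF>/<EOF> so that, during fine-tuning, Whisper learns to emit the markers as ordinary next tokens:

```python
def add_fake_markers(words, fake_spans, tof="<TOF>", eof="<EOF>"):
    """Wrap each synthetic word span with marker tokens.

    words:      list of transcript words.
    fake_spans: list of (start, end) word indices, end exclusive,
                marking synthetic words; assumed sorted and
                non-overlapping.
    Returns the augmented word sequence used as the fine-tuning target.
    """
    out = []
    cursor = 0
    for start, end in fake_spans:
        out.extend(words[cursor:start])   # bona fide words before the span
        out.append(tof)                   # open marker
        out.extend(words[start:end])      # the synthetic words
        out.append(eof)                   # close marker
        cursor = end
    out.extend(words[cursor:])            # remaining bona fide words
    return out


# Example: the word "cat" (index 2) was replaced by a synthesized word.
marked = add_fake_markers(["the", "big", "cat", "sat"], [(2, 3)])
# marked == ["the", "big", "<TOF>", "cat", "<EOF>", "sat"]
```

In a real pipeline the two markers would also be registered as special tokens in the Whisper tokenizer before fine-tuning, so they are never split into subword pieces.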
Datasets
MLS, LlamaPartialSpoof, AV-Deepfake1M, PartialEdit
Model(s)
Whisper large-v3, ResNet-152
Author countries
France, Japan