Collecting, Curating, and Annotating a Good-Quality Speech Deepfake Dataset for Famous Figures: Process and Challenges

Authors: Hashim Ali, Surya Subramani, Raksha Varahamurthy, Nithin Adupa, Lekha Bollinani, Hafiz Malik

Published: 2025-06-30 23:41:04+00:00

AI Summary

This paper details a methodology for creating a high-quality speech deepfake dataset of ten public figures. The approach uses an automated pipeline for collecting and curating real speech, incorporating transcription-based segmentation that improves the quality of synthetic speech generated with various TTS methods.

Abstract

Recent advances in speech synthesis have introduced unprecedented challenges in maintaining voice authenticity, particularly concerning public figures who are frequent targets of impersonation attacks. This paper presents a comprehensive methodology for collecting, curating, and generating synthetic speech data for political figures, together with a detailed analysis of the challenges encountered. We introduce a systematic approach incorporating an automated pipeline for collecting high-quality bonafide speech samples, featuring transcription-based segmentation that significantly improves synthetic speech quality. We experimented with various synthesis approaches, from single-speaker to zero-shot synthesis, and documented the evolution of our methodology. The resulting dataset comprises bonafide and synthetic speech samples from ten public figures, demonstrating superior quality with a NISQA-TTS naturalness score of 3.69 and a highest human misclassification rate of 61.9%.


Key findings
The resulting dataset achieved a high NISQA-TTS naturalness score of 3.69 and a human misclassification rate of 61.9%, indicating that the synthetic speech is highly realistic and challenging to detect. The transcription-based segmentation significantly improved the quality of the synthetic speech compared to previous approaches.
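For readers unfamiliar with the two reported figures, the sketch below shows how such metrics are typically aggregated. The helper functions, score values, and judgment format are illustrative assumptions, not the paper's code:

```python
# Hypothetical helpers illustrating the two dataset-level metrics.
# Neither function is from the paper; both are stand-ins.

def mean_naturalness(scores):
    """Dataset-level naturalness: the mean of per-clip NISQA-TTS-style
    scores, which lie on a 1 (bad) to 5 (excellent) MOS-like scale."""
    return sum(scores) / len(scores)

def misclassification_rate(judgments):
    """Fraction of synthetic clips that human listeners labeled as real.
    judgments: iterable of (is_synthetic, labeled_real) pairs; bonafide
    clips are ignored, since the rate concerns only synthetic speech."""
    synthetic = [labeled_real for is_syn, labeled_real in judgments if is_syn]
    return sum(synthetic) / len(synthetic)
```

Under this reading, a misclassification rate of 61.9% means that roughly six in ten synthetic clips were judged to be real speech by human listeners.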
Approach
The authors developed an automated pipeline to collect high-quality real speech samples from public figures. This involved speaker diarization, transcription, and transcription-based segmentation to create segments for synthetic speech generation using various single-speaker, few-shot, and zero-shot TTS models. The resulting dataset contains both real and synthetic speech.
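The transcription-based segmentation step can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes word-level timestamps of the kind Whisper-style transcribers emit, and groups words into sentence-aligned segments suitable for TTS training or prompting:

```python
# Sketch of transcription-based segmentation: group word-level timestamps
# into sentence-aligned segments, splitting early if a segment grows too
# long. The word-dict format (start/end in seconds, text) mirrors what
# Whisper-style transcribers produce; the real pipeline also cuts audio.

SENTENCE_END = (".", "!", "?")

def segment_by_sentence(words, max_len=15.0):
    """Cut a word-timestamped transcript into segments at sentence
    boundaries, closing a segment early once it reaches max_len seconds."""
    segments, current = [], []

    def flush():
        segments.append({
            "start": current[0]["start"],
            "end": current[-1]["end"],
            "text": " ".join(w["text"] for w in current),
        })
        current.clear()

    for w in words:
        current.append(w)
        too_long = w["end"] - current[0]["start"] >= max_len
        if w["text"].endswith(SENTENCE_END) or too_long:
            flush()
    if current:  # trailing words with no sentence terminator
        flush()
    return segments

# Toy transcript with word-level timestamps.
words = [
    {"start": 0.0, "end": 0.4, "text": "Good"},
    {"start": 0.4, "end": 0.9, "text": "evening."},
    {"start": 1.2, "end": 1.6, "text": "Thank"},
    {"start": 1.6, "end": 2.0, "text": "you."},
]
segs = segment_by_sentence(words)
```

Cutting at sentence boundaries rather than at fixed durations yields prosodically complete clips, which is presumably why this step improved synthesis quality over earlier approaches.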
Datasets
Primary data: YouTube videos of ten public figures (Anthony Blinken, Barack Obama, Donald Trump, JD Vance, Joe Biden, Kamala Harris, Mathew Miller, Tim Walz, Vivek Ramaswamy, and Elon Musk). Amazon Audible was used for initial experiments. Mentioned in related work: VoxCeleb1, ASVspoof, DFADD, CodecFake, SpoofCeleb, and In-The-Wild.
Model(s)
TTS models used for generation: StyleTTS2, XTTSv2, F5TTS, E2TTS, FishSpeech, SSRSpeech, MaskGCT, CosyVoice2, LLASA, and Zonos v0.1; Tacotron-Capacitron, GlowTTS, HiFi-GAN, and UnivNet are mentioned. Supporting tools: NISQA-TTS (quality evaluation), AssemblyAI (speaker diarization), and OpenAI Whisper Large Turbo (transcription).
Author countries
USA