TIMIT-TTS: a Text-to-Speech Dataset for Multimodal Synthetic Media Detection

Authors: Davide Salvi, Brian Hosler, Paolo Bestagini, Matthew C. Stamm, Stefano Tubaro

Published: 2022-09-16 15:27:35+00:00

AI Summary

This paper introduces TIMIT-TTS, a new audio-visual deepfake dataset generated using a novel pipeline. The pipeline synthesizes realistic speech tracks for existing video deepfake datasets using Text-to-Speech (TTS) and Dynamic Time Warping, creating multimodal deepfake data for research.

Abstract

With the rapid development of deep learning techniques, the generation and counterfeiting of multimedia material are becoming increasingly straightforward to perform. At the same time, sharing fake content on the web has become so simple that malicious users can create unpleasant situations with minimal effort. Forged media are also becoming more and more complex, with manipulated videos overtaking still images. The multimedia forensic community has addressed the possible threats that this situation implies by developing detectors that verify the authenticity of multimedia objects. However, the vast majority of these tools only analyze one modality at a time. This was not a problem as long as still images were the most widely edited media, but now that manipulated videos are becoming customary, performing monomodal analyses can be reductive. Nonetheless, there is a lack of multimodal detectors in the literature, mainly due to the scarcity of datasets containing forged multimodal data on which to train and test the designed algorithms. In this paper we focus on the generation of an audio-visual deepfake dataset. First, we present a general pipeline for synthesizing speech deepfake content from a given real or fake video, facilitating the creation of counterfeit multimodal material. The proposed method uses Text-to-Speech (TTS) and Dynamic Time Warping techniques to achieve realistic speech tracks. Then, we use the pipeline to generate and release TIMIT-TTS, a synthetic speech dataset containing the most cutting-edge methods in the TTS field. This can be used as a standalone audio dataset, or combined with other state-of-the-art sets to perform multimodal research. Finally, we present numerous experiments to benchmark the proposed dataset in both mono and multimodal conditions, showing the need for multimodal forensic detectors and more suitable data.


Key findings
Multimodal analysis significantly improves deepfake detection performance compared to monomodal approaches, particularly with post-processed data. The TIMIT-TTS dataset proves challenging for audio deepfake detection and attribution, highlighting the need for more robust and multimodal deepfake detection methods.
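As a hedged illustration of why multimodal analysis helps, the sketch below performs a simple score-level (late) fusion of an audio detector and a video detector. The weights, threshold, and score conventions are illustrative assumptions; the paper's exact fusion strategy is not described in this summary.

```python
# Minimal sketch of score-level (late) fusion of monomodal deepfake detectors.
# Weights and threshold are illustrative assumptions, not values from the paper.
from dataclasses import dataclass


@dataclass
class FusionResult:
    audio_score: float  # assumed: probability that the audio track is synthetic
    video_score: float  # assumed: probability that the video track is manipulated
    fused_score: float
    is_fake: bool


def fuse_scores(audio_score: float, video_score: float,
                w_audio: float = 0.5, w_video: float = 0.5,
                threshold: float = 0.5) -> FusionResult:
    """Weighted average of the two monomodal scores; the clip is flagged as
    fake when the fused score exceeds the decision threshold."""
    fused = w_audio * audio_score + w_video * video_score
    return FusionResult(audio_score, video_score, fused, fused > threshold)


# Example: strong audio evidence can compensate for a weak video score.
print(fuse_scores(audio_score=0.92, video_score=0.41))
```

Even this naive fusion lets evidence from one modality compensate for the other, which matches the reported advantage of multimodal over monomodal analysis.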
Approach
The authors propose a pipeline that uses Text-to-Speech (TTS) and Dynamic Time Warping (DTW) to generate synthetic speech tracks synchronized with existing video deepfake data, producing a multimodal dataset in which the audio track, the video track, or both may be forged.
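As a rough, hedged sketch of the alignment idea, the snippet below computes a DTW path between MFCC features of a reference speech track and a TTS-generated one, then derives a coarse stretch factor that could drive time-scale modification. The file names, feature settings, and warping strategy are placeholder assumptions and may differ from the authors' actual pipeline.

```python
# Hedged sketch: aligning a TTS track to the timing of a reference speech track
# via Dynamic Time Warping (DTW) over MFCC features. File paths and parameters
# are illustrative assumptions, not the authors' exact pipeline.
import librosa

SR = 16000  # assumed sampling rate

# Load the reference speech (from the original video) and the synthetic TTS audio.
ref, _ = librosa.load("reference_speech.wav", sr=SR)
tts, _ = librosa.load("tts_output.wav", sr=SR)

# Frame-level features used as the DTW comparison space.
ref_mfcc = librosa.feature.mfcc(y=ref, sr=SR, n_mfcc=13)
tts_mfcc = librosa.feature.mfcc(y=tts, sr=SR, n_mfcc=13)

# DTW returns an accumulated cost matrix and a warping path of
# (reference_frame, tts_frame) index pairs, listed from end to start.
cost, path = librosa.sequence.dtw(X=ref_mfcc, Y=tts_mfcc, metric="cosine")
path = path[::-1]

# A single global stretch factor (reference frames per TTS frame) gives a crude
# estimate of how much the TTS audio must be slowed down or sped up overall;
# a finer, piecewise warp could be derived from `path`.
global_stretch = ref_mfcc.shape[1] / tts_mfcc.shape[1]
aligned_tts = librosa.effects.time_stretch(tts, rate=1.0 / global_stretch)
print(f"DTW path length: {len(path)}, global stretch factor: {global_stretch:.2f}")
```

In practice, a piecewise warp along the DTW path would track local timing differences (pauses, phoneme durations) far better than a single global stretch, which is presumably why the pipeline relies on DTW rather than uniform time scaling.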
Datasets
VidTIMIT, DeepfakeTIMIT, TIMIT Corpus, LibriSpeech, LJSpeech, VCTK
Model(s)
RawNet2 (audio deepfake detection), EfficientNetB4 with attention layers (video deepfake detection), multiple TTS models (Tacotron, Tacotron2, GlowTTS, FastSpeech2, FastPitch, TalkNet, MixerTTS, MixerTTS-X, VITS, SpeedySpeech, gTTS, Silero), and the MelGAN and WaveRNN vocoders
Author countries
Italy, USA