The DeepSpeak Dataset

Authors: Sarah Barrington, Matyas Bohacek, Hany Farid

Published: 2024-08-09 22:29:43+00:00

AI Summary

The DeepSpeak dataset is a large-scale, multimodal dataset of authentic and deepfake audio-visual content designed to address the shortcomings of existing deepfake datasets. It includes high-quality deepfakes generated with state-of-the-art methods and uses an embedding-based identity-matching approach to make the identity swaps more realistic. The paper demonstrates that existing deepfake detectors fail to generalize to DeepSpeak without retraining, underscoring the need for datasets that reflect the latest generative tools.

Abstract

Deepfakes represent a growing concern across domains such as impostor hiring, fraud, and disinformation. Despite significant efforts to develop robust detection classifiers to distinguish the real from the fake, commonly used training datasets remain inadequate: relying on low-quality and outdated deepfake generators, consisting of content scraped from online repositories without participant consent, lacking in multimodal coverage, and rarely employing identity-matching protocols to ensure realistic fakes. To overcome these limitations, we present the DeepSpeak dataset, a diverse and multimodal dataset comprising over 100 hours of authentic and deepfake audiovisual content. We contribute: i) more than 50 hours of real, self-recorded data collected from 500 diverse and consenting participants using a custom-built data collection tool, ii) more than 50 hours of state-of-the-art audio and visual deepfakes generated using 14 video synthesis engines and three voice cloning engines, and iii) an embedding-based, identity-matching approach to ensure the creation of convincing, high-quality identity swaps that realistically simulate adversarial deepfake attacks. We also perform large-scale evaluations of state-of-the-art deepfake detectors and show that, without retraining, these detectors fail to generalize to the DeepSpeak dataset. These evaluations highlight the importance of a large and diverse dataset containing deepfakes from the latest generative-AI tools.


Key findings
State-of-the-art deepfake detectors, trained on other datasets, performed poorly on the DeepSpeak dataset without retraining. This highlights the need for larger, more diverse datasets that reflect the latest deepfake generation techniques. Retraining or fine-tuning on DeepSpeak improved detection performance, but often at the cost of accuracy on the datasets the detectors were originally trained on.
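
This generalization gap can be made concrete with a small, self-contained sketch: score two evaluation sets with a pretrained detector and compare ROC AUCs. This is an illustration only; the detector scores below are synthetic placeholders, not the paper's actual evaluation pipeline or numbers.

```python
# Minimal sketch of a cross-dataset generalization check for a deepfake
# detector. All scores here are hypothetical placeholders; the paper's
# evaluation protocol and metrics may differ.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(scores_real: np.ndarray, scores_fake: np.ndarray) -> float:
    """ROC AUC for separating real (label 0) from fake (label 1) videos."""
    y_true = np.concatenate([np.zeros_like(scores_real), np.ones_like(scores_fake)])
    y_score = np.concatenate([scores_real, scores_fake])
    return roc_auc_score(y_true, y_score)

rng = np.random.default_rng(0)

# In-distribution: the detector's scores separate real from fake well.
auc_original = evaluate(rng.normal(0.2, 0.1, 500), rng.normal(0.8, 0.1, 500))

# Out-of-distribution (e.g., an unseen dataset like DeepSpeak): the score
# distributions overlap, so AUC collapses toward chance (0.5) without retraining.
auc_unseen = evaluate(rng.normal(0.45, 0.15, 500), rng.normal(0.55, 0.15, 500))

print(f"AUC on original benchmark: {auc_original:.3f}")
print(f"AUC on unseen dataset:     {auc_unseen:.3f}")
```
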
Approach
The authors created the DeepSpeak dataset by collecting over 50 hours of real audio-visual data from 500 consenting participants with a custom-built recording tool, then generating a comparable volume of deepfakes using 14 video synthesis engines and three voice-cloning engines. An embedding-based identity-matching approach pairs source and target identities to produce more convincing identity swaps, as sketched below.
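
The identity-matching idea can be sketched as follows: represent each participant by an embedding and, for every source identity, select the most similar other identity in the pool as the swap target. The 512-dimensional random vectors below stand in for the output of a pretrained face-recognition encoder; the paper's actual embedding model and matching criteria are not specified here and may differ.

```python
# Minimal sketch of embedding-based identity matching. In practice the
# embeddings would come from a pretrained face-recognition model; here
# random vectors stand in for them.
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

def match_identities(embeddings: np.ndarray) -> np.ndarray:
    """For each identity, the index of the most similar other identity."""
    sims = cosine_similarity_matrix(embeddings, embeddings)
    np.fill_diagonal(sims, -np.inf)  # forbid matching an identity to itself
    return sims.argmax(axis=1)

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(500, 512))  # one 512-d embedding per participant

matches = match_identities(embeddings)
print(matches[:10])  # swap target chosen for the first 10 participants
```

Matching in embedding space, rather than pairing identities at random, is what makes the resulting swaps plausible: a face-swap between two visually compatible people leaves fewer obvious blending artifacts, which better simulates an adversarial attack.
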
Datasets
DeepSpeak dataset (created by the authors), TIMIT, Harvard Sentences, Speech Accent Archive, ASVspoof, Celeb-DF (v2), AVLips, VoxCeleb, MEAD, RAVDESS, AAHQ, VoxCeleb2, LRW, HDTF, VFHQ, CelebV-HQ, MultiTalk
Model(s)
FreqNet, GenConViT (ED and VAE variants), LipFD, TitaNet, Wav2Vec-XLSR, LAION-CLAP, AASIST, RawNet2, RawGAT-ST
Author countries
USA