The DeepSpeak Dataset
Authors: Sarah Barrington, Matyas Bohacek, Hany Farid
Published: 2024-08-09 22:29:43+00:00
AI Summary
The DeepSpeak dataset is a large-scale, multimodal dataset of authentic and deepfake audio-visual content designed to address limitations in existing datasets. It includes high-quality deepfakes generated using state-of-the-art methods and employs an embedding-based identity-matching approach to make identity swaps more realistic. The paper demonstrates that existing deepfake detectors fail to generalize to DeepSpeak without retraining, highlighting the need for large, diverse datasets built from current generative-AI tools.
Abstract
Deepfakes represent a growing concern across domains such as impostor hiring, fraud, and disinformation. Despite significant efforts to develop robust detection classifiers to distinguish the real from the fake, commonly used training datasets remain inadequate: relying on low-quality and outdated deepfake generators, consisting of content scraped from online repositories without participant consent, lacking in multimodal coverage, and rarely employing identity-matching protocols to ensure realistic fakes. To overcome these limitations, we present the DeepSpeak dataset, a diverse and multimodal dataset comprising over 100 hours of authentic and deepfake audiovisual content. We contribute: i) more than 50 hours of real, self-recorded data collected from 500 diverse and consenting participants using a custom-built data collection tool, ii) more than 50 hours of state-of-the-art audio and visual deepfakes generated using 14 video synthesis engines and three voice cloning engines, and iii) an embedding-based, identity-matching approach to ensure the creation of convincing, high-quality identity swaps that realistically simulate adversarial deepfake attacks. We also perform large-scale evaluations of state-of-the-art deepfake detectors and show that, without retraining, these detectors fail to generalize to the DeepSpeak dataset. These evaluations highlight the importance of a large and diverse dataset containing deepfakes from the latest generative-AI tools.
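The paper does not include implementation details for its embedding-based identity matching, but the general idea of pairing each source identity with its most visually similar target can be sketched as follows. The function name, the toy embeddings, and the use of cosine similarity over face embeddings are all illustrative assumptions, not the authors' released code.

```python
import numpy as np

def match_identities(source_embeddings, target_embeddings):
    """Pair each source identity with its most similar target identity.

    Illustrative sketch: assumes each row is a (hypothetical) face
    embedding and uses cosine similarity as the matching criterion.
    """
    src = np.asarray(source_embeddings, dtype=float)
    tgt = np.asarray(target_embeddings, dtype=float)
    # L2-normalize rows so plain dot products equal cosine similarities
    src /= np.linalg.norm(src, axis=1, keepdims=True)
    tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T             # pairwise cosine-similarity matrix
    matches = sim.argmax(axis=1)  # greedy best-match per source identity
    return matches, sim

# Toy 3-D embeddings for illustration only
sources = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
targets = [[0.0, 0.9, 0.1], [0.9, 0.1, 0.0]]
matches, sim = match_identities(sources, targets)
# matches[i] gives the index of the target most similar to source i
```

In practice, a real pipeline would compute embeddings with a face-recognition network and might use one-to-one assignment (e.g. the Hungarian algorithm) rather than greedy argmax, so that two sources are never swapped onto the same target.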