The DeepSpeak Dataset

Authors: Sarah Barrington, Matyas Bohacek, Hany Farid

Published: 2024-08-09 22:29:43+00:00

AI Summary

This paper introduces DeepSpeak, a diverse, multimodal dataset comprising over 100 hours of authentic and deepfake audiovisual content. It includes real, self-recorded data from 500 diverse, consenting participants, together with state-of-the-art audio and visual deepfakes generated with 14 video-synthesis engines and three voice-cloning engines, using an embedding-based identity-matching approach for realism. Large-scale evaluations show that existing state-of-the-art deepfake detectors fail to generalize to the DeepSpeak dataset without retraining.

Abstract

Deepfakes represent a growing concern across domains such as impostor hiring, fraud, and disinformation. Despite significant efforts to develop robust detection classifiers to distinguish the real from the fake, commonly used training datasets remain inadequate: relying on low-quality and outdated deepfake generators, consisting of content scraped from online repositories without participant consent, lacking in multimodal coverage, and rarely employing identity-matching protocols to ensure realistic fakes. To overcome these limitations, we present the DeepSpeak dataset, a diverse and multimodal dataset comprising over 100 hours of authentic and deepfake audiovisual content. We contribute: i) more than 50 hours of real, self-recorded data collected from 500 diverse and consenting participants using a custom-built data collection tool, ii) more than 50 hours of state-of-the-art audio and visual deepfakes generated using 14 video synthesis engines and three voice cloning engines, and iii) an embedding-based, identity-matching approach to ensure the creation of convincing, high-quality identity swaps that realistically simulate adversarial deepfake attacks. We also perform large-scale evaluations of state-of-the-art deepfake detectors and show that, without retraining, these detectors fail to generalize to the DeepSpeak dataset. These evaluations highlight the importance of a large and diverse dataset containing deepfakes from the latest generative-AI tools.


Key findings
Large-scale evaluations show that state-of-the-art deepfake detectors, without retraining, consistently fail to generalize to the DeepSpeak dataset for both audio and video deepfakes. This underscores the need for large, diverse, and up-to-date datasets that reflect the latest generative-AI tools when developing robust detection methods. These models can, however, achieve good performance when trained or fine-tuned on the DeepSpeak dataset.
Approach
The authors address the limitations of existing deepfake-detection datasets by creating DeepSpeak, a new diverse, multimodal dataset. They collect real, self-recorded data from 500 diverse, consenting participants and generate state-of-the-art audio and visual deepfakes with 14 video-synthesis engines and three voice-cloning engines, using an embedding-based identity-matching approach to produce realistic identity swaps (a minimal sketch follows). They then perform large-scale evaluations of state-of-the-art deepfake detectors on the new dataset.
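The identity-matching step can be illustrated with a short sketch: given a face embedding for each participant (from any off-the-shelf face-recognition model), each source identity is paired with its most similar remaining counterpart by cosine similarity, so that swaps occur between visually similar individuals. The function names, the greedy pairing strategy, and the embedding dimension below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between L2-normalized identity embeddings."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

def match_identities(embeddings: np.ndarray) -> list[tuple[int, int]]:
    """Greedily pair each identity with its most similar unused counterpart.

    `embeddings` is an (n_participants, d) array of face embeddings from an
    off-the-shelf face-recognition model (an assumption; the summary does not
    specify which embedding model is used).
    """
    sim = cosine_similarity_matrix(embeddings)
    np.fill_diagonal(sim, -np.inf)  # never match an identity with itself
    unused = set(range(len(embeddings)))
    pairs = []
    while len(unused) >= 2:
        # Pick the most similar remaining (source, target) pair.
        candidates = [(sim[i, j], i, j) for i in unused for j in unused if i != j]
        _, src, tgt = max(candidates)
        pairs.append((src, tgt))
        unused.discard(src)
        unused.discard(tgt)
    return pairs

if __name__ == "__main__":
    # Placeholder data: 6 participants with random 512-d embeddings.
    rng = np.random.default_rng(0)
    fake_embeddings = rng.normal(size=(6, 512))
    print(match_identities(fake_embeddings))
```

Pairing by embedding similarity keeps the swapped identity plausibly close to the original appearance, which mimics how an adversary would pick a target and yields more convincing fakes than random pairing.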
Datasets
DeepSpeak, ASVspoof, TIMIT-ElevenLabs, Celeb-DF v2, AVLips, FreqNet's custom GAN-generated dataset
Model(s)
TitaNet-L, Wav2Vec-XLSR, LAION-CLAP (with Logistic Regression and Random Forest classifiers), AASIST, RawNet2, RawGAT-ST
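As a rough illustration of how a detector built from LAION-CLAP embeddings with a lightweight classifier head might be evaluated, the sketch below fits scikit-learn logistic-regression and random-forest classifiers on precomputed audio embeddings labeled real vs. fake. The embedding extraction step is abstracted away, and the array shapes and random placeholder data are assumptions; this is not the authors' evaluation code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Assumed inputs: precomputed CLAP audio embeddings of shape (n_clips, 512)
# and binary labels (0 = real, 1 = deepfake). Random arrays stand in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=200, random_state=0)):
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]
    print(type(clf).__name__, "AUC:", round(roc_auc_score(y_test, scores), 3))
```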
Author countries
USA