Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024

Authors: Nuria Alina Chandra, Ryan Murtfeldt, Lin Qiu, Arnab Karmakar, Hannah Lee, Emmanuel Tanumihardja, Kevin Farhat, Ben Caffee, Sejin Paik, Changyeon Lee, Jongwook Choi, Aerin Kim, Oren Etzioni

Published: 2025-03-04 18:33:22+00:00

AI Summary

This paper introduces Deepfake-Eval-2024, a new benchmark dataset of in-the-wild deepfakes collected in 2024 from social media and from users of a deepfake detection platform. Evaluated on this dataset, state-of-the-art deepfake detection models show a significant drop in accuracy relative to previous academic benchmarks, highlighting the need for benchmarks representative of real-world deepfakes.

Abstract

In the age of increasingly realistic generative AI, robust deepfake detection is essential for mitigating fraud and disinformation. While many deepfake detectors report high accuracy on academic datasets, we show that these academic benchmarks are out of date and not representative of real-world deepfakes. We introduce Deepfake-Eval-2024, a new deepfake detection benchmark consisting of in-the-wild deepfakes collected from social media and deepfake detection platform users in 2024. Deepfake-Eval-2024 consists of 45 hours of videos, 56.5 hours of audio, and 1,975 images, encompassing the latest manipulation technologies. The benchmark contains diverse media content from 88 different websites in 52 different languages. We find that the performance of open-source state-of-the-art deepfake detection models drops precipitously when evaluated on Deepfake-Eval-2024, with AUC decreasing by 50% for video, 48% for audio, and 45% for image models compared to previous benchmarks. We also evaluate commercial deepfake detection models and models finetuned on Deepfake-Eval-2024, and find that they have superior performance to off-the-shelf open-source models, but do not yet reach the accuracy of deepfake forensic analysts. The dataset is available at https://github.com/nuriachandra/Deepfake-Eval-2024.


Key findings

Open-source deepfake detection models showed a significant drop in AUC (50% for video, 48% for audio, 45% for image) on Deepfake-Eval-2024 compared to previous benchmarks. Commercial models performed better but still fell short of the accuracy of human deepfake forensic analysts. Finetuning on Deepfake-Eval-2024 improved open-source model performance, underscoring the importance of representative training data. A worked reading of the relative AUC drops appears below.
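
To make the headline numbers concrete, here is a small illustrative calculation in Python, reading the reported 50% figure as a relative decrease. The starting AUC of 0.95 is a hypothetical academic-benchmark score, not a number from the paper:

    # Hypothetical starting point: a strong score on an academic benchmark.
    academic_auc = 0.95
    # A 50% relative drop, as reported for open-source video models.
    in_the_wild_auc = academic_auc * (1 - 0.50)
    print(in_the_wild_auc)  # 0.475 -- below the 0.5 chance level of a binary task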
Approach

The authors built Deepfake-Eval-2024 from in-the-wild deepfakes flagged by users on social media and on a deepfake detection platform in 2024. They then evaluated a range of open-source and commercial deepfake detection models on the benchmark, analyzing their performance and identifying the challenges of real-world deepfake detection. A sketch of this style of benchmark evaluation follows.
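
The sketch below is a minimal, generic version of the evaluation flow described above, not the authors' code. The manifest format, its column names, and the detector interface (predict_fake_probability) are all assumptions for illustration:

    import csv

    from sklearn.metrics import roc_auc_score

    def evaluate(detector, manifest_path: str) -> float:
        """Score each file in a manifest CSV and return AUC (label 1 = fake, 0 = real)."""
        labels, scores = [], []
        with open(manifest_path, newline="") as f:
            for row in csv.DictReader(f):  # assumed columns: path, label
                labels.append(int(row["label"]))
                scores.append(detector.predict_fake_probability(row["path"]))
        return roc_auc_score(labels, scores)

AUC is a natural metric here because it is threshold-free: it measures how well a detector ranks fakes above reals regardless of the decision cutoff.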
Datasets

Deepfake-Eval-2024 (video, audio, and images); various previously published benchmarks for comparison (FaceForensics++, ForgeryNet, ASVspoof2019, etc.)

Model(s)

Open-source detectors: GenConViT, FTCN, and Styleflow (video); AASIST, RawNet2, and P3 (audio); UFD, DistilDIRE, and NPR (image). Commercial deepfake detection models were also evaluated (names anonymized).

Author countries

USA, South Korea