OpenFake: An Open Dataset and Platform Toward Large-Scale Deepfake Detection

Authors: Victor Livernoche, Akshatha Arodi, Andreea Musulan, Zachary Yang, Adam Salvail, Gaétan Marceau Caron, Jean-François Godbout, Reihaneh Rabbany

Published: 2025-09-11 14:34:22+00:00

AI Summary

This paper introduces OPENFAKE, a large-scale, politically focused dataset for deepfake detection, containing 3 million real images and 963k high-quality synthetic images generated by a variety of models. It also presents OPENFAKE ARENA, a crowdsourced adversarial platform to continuously update the dataset and keep it relevant.

Abstract

Deepfakes, synthetic media created using advanced AI techniques, have intensified the spread of misinformation, particularly in politically sensitive contexts. Existing deepfake detection datasets are often limited, relying on outdated generation methods, low realism, or single-face imagery, restricting their effectiveness for general synthetic image detection. By analyzing social media posts, we identify multiple modalities through which deepfakes propagate misinformation. Furthermore, our human perception study demonstrates that recently developed proprietary models produce synthetic images increasingly indistinguishable from real ones, complicating accurate identification by the general public. Consequently, we present a comprehensive, politically focused dataset specifically crafted for benchmarking detection against modern generative models. This dataset contains three million real images paired with descriptive captions, which are used for generating 963k corresponding high-quality synthetic images from a mix of proprietary and open-source models. Recognizing the continual evolution of generative techniques, we introduce an innovative crowdsourced adversarial platform, where participants are incentivized to generate and submit challenging synthetic images. This ongoing community-driven initiative ensures that deepfake detection methods remain robust and adaptive, proactively safeguarding public discourse from sophisticated misinformation threats.


Key findings
Human participants struggled to identify deepfakes generated by modern proprietary models like Imagen 3 and GPT Image 1. Detectors trained on OPENFAKE outperformed those trained on older datasets, particularly when using compression-robust augmentations. The results highlight the need for continuously updated datasets to keep pace with advancements in deepfake generation.
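The compression-robust augmentation mentioned above can be sketched as a random JPEG re-encoding step applied during training, mimicking the artifacts images acquire when shared on social media. This is a minimal illustration using Pillow, not the paper's exact pipeline; the function name and quality range are assumptions.

```python
import io
import random
from PIL import Image

def jpeg_compress(img: Image.Image, quality_range=(30, 95)) -> Image.Image:
    """Re-encode an image as JPEG at a random quality level.
    Hypothetical augmentation: simulates social-media compression
    artifacts so the detector does not overfit to pristine pixels."""
    quality = random.randint(*quality_range)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Usage on a dummy image; the spatial size is unchanged,
# only compression artifacts are introduced.
img = Image.new("RGB", (64, 64), color=(120, 80, 200))
aug = jpeg_compress(img)
print(aug.size)  # (64, 64)
```

Applying such a transform with probability less than 1 (so the model also sees uncompressed images) is a common design choice for robustness-oriented augmentations.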
Approach
The authors created a dataset by pairing real images from LAION-400M (filtered for political relevance) with synthetic images generated using various state-of-the-art models. They also developed a crowdsourced platform, OPENFAKE ARENA, to continuously generate and add challenging synthetic images to the dataset.
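The political-relevance filtering step described above could be approximated with a simple keyword match over LAION captions. The keyword list and function below are purely illustrative assumptions; the paper's actual filtering criteria are not reproduced here.

```python
# Hypothetical keyword filter for political relevance of image captions.
POLITICAL_KEYWORDS = [
    "election", "president", "senate", "protest", "parliament", "minister",
]

def is_politically_relevant(caption: str) -> bool:
    """Return True if the caption contains any political keyword
    (substring match on the lowercased text)."""
    text = caption.lower()
    return any(kw in text for kw in POLITICAL_KEYWORDS)

captions = [
    "A cat sleeping on a sofa",
    "Crowd gathers for the president's election rally",
]
relevant = [c for c in captions if is_politically_relevant(c)]
print(relevant)  # ["Crowd gathers for the president's election rally"]
```

In practice such a filter would likely be combined with a learned classifier, since raw keyword matching over 400M captions yields many false positives.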
Datasets
LAION-400M, various deepfake datasets mentioned in the related work (e.g., FaceForensics++, Celeb-DF, DFDC, ForgeryNet, OpenForensics, FFIW, Fake2M, DiffusionForensics, GenImage, TWIGMA, DiffusionDeepfake, DF40, DiffusionFace, DiFF, Semi-Truths)
Model(s)
SwinV2, CLIP-D-10k+, Corvi2023, InternVL, ConvNeXt, EfficientNet-B4; for generation: Stable Diffusion (versions 1.5, 2.1, XL, 3.5), Flux (versions 1.0-dev, 1.1-Pro, Schnell), Midjourney (versions 6, 7), DALL-E 3, Imagen (versions 3, 4), GPT Image 1, Ideogram 3.0, Grok-2, HiDream-I1, Recraft v3, Chroma, and 10 community variants of Stable Diffusion and Flux.
Author countries
Canada