Through the Lens: Benchmarking Deepfake Detectors Against Moiré-Induced Distortions

Authors: Razaib Tariq, Minji Heo, Simon S. Woo, Shahroz Tariq

Published: 2025-10-27 11:23:04+00:00

Comment: 48 Pages, 29 Figures, 15 Tables

AI Summary

This study systematically evaluates state-of-the-art deepfake detectors against Moiré-induced distortions, a common artifact in real-world smartphone recordings of digital screens. The authors introduce the DeepMoiréFake (DMF) dataset, comprising 12,832 videos collected under diverse conditions, and find that Moiré artifacts significantly degrade detector performance. Surprisingly, demoiréing methods, intended to mitigate these artifacts, often exacerbate the problem.

Abstract

Deepfake detection remains a pressing challenge, particularly in real-world settings where smartphone-captured media from digital screens often introduces Moiré artifacts that can distort detection outcomes. This study systematically evaluates state-of-the-art (SOTA) deepfake detectors on Moiré-affected videos, an issue that has received little attention. We collected a dataset of 12,832 videos, spanning 35.64 hours, from the Celeb-DF, DFD, DFDC, UADFV, and FF++ datasets, capturing footage under diverse real-world conditions, including varying screens, smartphones, lighting setups, and camera angles. To further examine the influence of Moiré patterns on deepfake detection, we conducted additional experiments using our DeepMoiréFake (DMF) dataset and two synthetic Moiré generation techniques. Across 15 top-performing detectors, our results show that Moiré artifacts degrade performance by as much as 25.4%, while synthetically generated Moiré patterns lead to a 21.4% drop in accuracy. Surprisingly, demoiréing methods, intended as a mitigation approach, instead worsened the problem, reducing accuracy by up to 17.2%. These findings underscore the urgent need for detection models that can robustly handle Moiré distortions alongside other real-world challenges, such as compression, sharpening, and blurring. By introducing the DMF dataset, we aim to drive future research toward closing the gap between controlled experiments and practical deepfake detection.


Key findings
Moiré artifacts degrade deepfake detector performance by up to 25.4%, with synthetically generated Moiré patterns causing a 21.4% accuracy drop. Counterintuitively, demoiréing methods designed to remove these patterns further reduced detection accuracy by up to 17.2%. These findings highlight a critical vulnerability in current deepfake detection models against real-world distortions and emphasize the need for robust detectors capable of handling such challenges.
Approach
The authors created a new dataset (DeepMoiréFake, DMF) by recapturing videos from existing deepfake datasets (Celeb-DF, DFD, DFDC, UADFV, FF++) with smartphones filming various screens under different lighting conditions, thereby introducing authentic Moiré patterns. They then benchmarked 15 state-of-the-art deepfake detectors on this data, covering both authentic and synthetically generated Moiré patterns, and additionally evaluated whether demoiréing techniques restore detection performance.
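The benchmarking protocol described above boils down to comparing each detector's accuracy on clean versus Moiré-affected versions of the same videos and reporting the degradation. The following is a minimal illustrative sketch of that comparison; the function names, toy labels, and predictions are hypothetical and not taken from the paper's code.

```python
# Hypothetical sketch: measure a detector's accuracy drop when the same
# videos are re-evaluated after Moiré artifacts are introduced.

def accuracy(preds, labels):
    """Fraction of correct binary predictions (1 = fake, 0 = real)."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

def degradation(clean_preds, moire_preds, labels):
    """Accuracy drop, in percentage points, from clean to Moiré input."""
    return 100.0 * (accuracy(clean_preds, labels)
                    - accuracy(moire_preds, labels))

# Toy example: a detector that misclassifies two extra videos under Moiré.
labels      = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
clean_preds = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]   # 10/10 correct on clean input
moire_preds = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]   # 8/10 correct under Moiré

drop = degradation(clean_preds, moire_preds, labels)
print(f"Accuracy drop under Moiré: {drop:.1f} points")  # 20.0 points
```

In the paper's actual evaluation, the predictions come from each of the 15 detectors run on recaptured (or synthetically distorted) videos, and this per-detector degradation is what reaches up to 25.4%.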
Datasets
Celeb-DF, DFD, DFDC, UADFV, FF++, DeepMoiréFake (DMF)
Model(s)
SelfBlended, Rossler, ForgeryNet, Capsule-Forensics (Capsule), MAT, CADDM, CCViT, ADD, AltFreezing, FTCN, LRNet (BlazeFace and RetinaFace variants), LipForensics. Demoiréing methods included DMCNN, MBCNN, ESDNet, DDA, VD-Moiré, FPANet, and NAFNet (for denoising/deblurring).
Author countries
South Korea, Australia