Practical Manipulation Model for Robust Deepfake Detection
Authors: Benedikt Hopf, Radu Timofte
Published: 2025-06-05 15:06:16+00:00
AI Summary
This paper introduces a Practical Manipulation Model (PMM) for robust deepfake detection, addressing existing methods' vulnerability to non-ideal conditions. PMM increases the diversity of pseudo-fake training data by incorporating Poisson blending, more diverse masks, generator artifacts, and strong image degradations, yielding significantly improved robustness and benchmark performance.
Abstract
Modern deepfake detection models have achieved strong performance even on the challenging cross-dataset task. However, detection performance under non-ideal conditions remains very unstable, limiting success on some benchmark datasets and making it easy to circumvent detection. Inspired by the move to a more realistic degradation model in the area of image super-resolution, we have developed a Practical Manipulation Model (PMM) that covers a larger set of possible forgeries. We extend the space of pseudo-fakes by using Poisson blending, more diverse masks, generator artifacts, and distractors. Additionally, we improve the detectors' generality and robustness by adding strong degradations to the training images. We demonstrate that these changes not only significantly enhance the model's robustness to common image degradations but also improve performance on standard benchmark datasets. Specifically, we show clear increases of 3.51% and 6.21% AUC on the DFDC and DFDCP datasets, respectively, over the state-of-the-art LAA backbone. Furthermore, we highlight the lack of robustness in previous detectors and our improvements in this regard. Code can be found at https://github.com/BenediktHopf/PMM
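The abstract describes generating pseudo-fakes by blending a manipulated region into a real image and then applying strong degradations. A minimal NumPy sketch of that idea is shown below; it uses simple soft-mask alpha blending and additive noise as stand-ins, whereas the paper's PMM uses Poisson blending, diverse masks, generator artifacts, and a richer degradation set. The function name, toy images, and parameters are all hypothetical, not from the released code.

```python
import numpy as np

def make_pseudo_fake(target, source, mask, noise_std=5.0, seed=0):
    """Toy pseudo-fake generator: blend a source region into a target
    image with a soft mask, then degrade the result with noise.
    Illustrative stand-in only -- PMM itself uses Poisson blending and
    stronger, more varied degradations."""
    rng = np.random.default_rng(seed)
    # Soft alpha blend: mask is in [0, 1], broadcast over RGB channels.
    blended = mask[..., None] * source + (1.0 - mask[..., None]) * target
    # Simple degradation: additive Gaussian noise.
    degraded = blended + rng.normal(0.0, noise_std, blended.shape)
    return np.clip(degraded, 0, 255).astype(np.uint8)

# Hypothetical 8x8 RGB images and a soft radial mask centered on the image.
h = w = 8
target = np.full((h, w, 3), 100.0)   # "real" background image
source = np.full((h, w, 3), 200.0)   # "manipulated" face region
yy, xx = np.mgrid[:h, :w]
mask = np.clip(1.0 - np.hypot(yy - h / 2, xx - w / 2) / 4.0, 0.0, 1.0)

fake = make_pseudo_fake(target, source, mask)
print(fake.shape, fake.dtype)
```

A detector would then be trained to separate such degraded pseudo-fakes from equally degraded real images, so that it cannot rely on pristine blending boundaries alone.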