Enhancing Abnormality Identification: Robust Out-of-Distribution Strategies for Deepfake Detection
Authors: Luca Maiano, Fabrizio Casadei, Irene Amerini
Published: 2025-06-03 13:24:33+00:00
AI Summary
This paper proposes two novel Out-Of-Distribution (OOD) detection approaches for deepfake detection: one based on image reconstruction and the other incorporating an attention mechanism. Experiments show both approaches are effective at identifying deepfakes and out-of-distribution samples, ranking among the top-performing configurations on benchmark datasets when compared with existing state-of-the-art techniques.
Abstract
Detecting deepfakes has become a critical challenge in Computer Vision and Artificial Intelligence. Despite significant progress in detection techniques, generalizing them to open-set scenarios remains a persistent difficulty. Neural networks are typically trained under the closed-world assumption, but with new generative models constantly emerging, encountering data generated by models outside the training distribution is inevitable. To address these challenges, we propose two novel Out-Of-Distribution (OOD) detection approaches. The first is trained to reconstruct the input image, while the second incorporates an attention mechanism for detecting OOD samples. Our experiments validate the effectiveness of the proposed approaches compared to existing state-of-the-art techniques. Both methods achieve promising results in deepfake detection and rank among the top-performing configurations on the benchmark, demonstrating their potential for robust, adaptable solutions in dynamic, real-world applications.
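The abstract describes the reconstruction-based approach only at a high level. As a rough illustration of the general idea, not the authors' architecture, the sketch below shows how per-sample reconstruction error from an autoencoder fit on in-distribution data can be used as an OOD score: images from unseen generative models tend to reconstruct poorly, so a high error flags a likely out-of-distribution sample. All names (ConvAutoencoder, ood_score) and the threshold value are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's implementation): reconstruction-error
# OOD scoring with a small convolutional autoencoder in PyTorch.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3x64x64 input -> compact latent feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
        )
        # Decoder mirrors the encoder and reconstructs the input image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


@torch.no_grad()
def ood_score(model: ConvAutoencoder, images: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error; higher = more likely OOD."""
    recon = model(images)
    return ((recon - images) ** 2).flatten(1).mean(dim=1)


if __name__ == "__main__":
    model = ConvAutoencoder().eval()
    batch = torch.rand(4, 3, 64, 64)  # stand-in for normalized face crops
    scores = ood_score(model, batch)
    # The decision threshold would be calibrated on held-out in-distribution
    # data, e.g. a high percentile of validation-set scores (illustrative value).
    threshold = 0.05
    print(scores, scores > threshold)
```

In practice the autoencoder would be trained on the closed-set training distribution (real images and known manipulations), and the threshold tuned on a validation split, so that samples from unseen generative models produce anomalously high scores.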