Rethinking Individual Fairness in Deepfake Detection
Authors: Aryana Hou, Li Lin, Justin Li, Shu Hu
Published: 2025-07-18 19:04:47+00:00
AI Summary
This research paper identifies the failure of the traditional individual fairness principle in deepfake detection, which arises from the high semantic similarity between real images and their fake counterparts. To address this, the authors propose a novel framework that integrates with existing deepfake detectors, improving individual fairness and generalization without compromising detection performance.
Abstract
Generative AI models have substantially improved the realism of synthetic media, yet their misuse through sophisticated deepfakes poses significant risks. Despite recent advances in deepfake detection, fairness remains inadequately addressed, enabling deepfake makers to exploit biases against specific populations. While previous studies have emphasized group-level fairness, individual fairness (i.e., ensuring similar predictions for similar individuals) remains largely unexplored. In this work, we identify for the first time that the original principle of individual fairness fundamentally fails in the context of deepfake detection, revealing a critical gap previously unexplored in the literature. To close this gap, we propose the first generalizable framework that can be integrated into existing deepfake detectors to enhance individual fairness and generalization. Extensive experiments conducted on leading deepfake datasets demonstrate that our approach significantly improves individual fairness while maintaining robust detection performance, outperforming state-of-the-art methods. The code is available at https://github.com/Purdue-M2/Individual-Fairness-Deepfake-Detection.
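A minimal sketch (not from the paper) of why the classical individual-fairness principle breaks down here. Individual fairness is commonly formalized as a Lipschitz condition, |f(x1) − f(x2)| ≤ L · d(x1, x2): similar inputs must receive similar predictions. A deepfake, however, is deliberately crafted to be nearly identical to a real image, so the bound forces the detector to score them alike even though they require opposite labels. The function name, distances, and scores below are illustrative assumptions:

```python
def violates_individual_fairness(score_a, score_b, distance, L=1.0):
    """Hypothetical check: does the prediction gap between two inputs
    exceed the Lipschitz fairness bound L * d(x1, x2)?"""
    return abs(score_a - score_b) > L * distance

# A real image and its deepfake: tiny input distance (assumed value),
# yet a correct detector must output near-opposite scores.
distance = 0.05                       # small semantic distance between the pair
score_real, score_fake = 0.02, 0.97   # detector outputs: real vs. fake

print(violates_individual_fairness(score_real, score_fake, distance))  # True
```

Under the classical principle this correct detector is judged "unfair", which is the conflict the paper identifies and its framework is designed to resolve.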