Rethinking Individual Fairness in Deepfake Detection

Authors: Aryana Hou, Li Lin, Justin Li, Shu Hu

Published: 2025-07-18 19:04:47+00:00

AI Summary

This research paper identifies the failure of the traditional individual fairness principle in deepfake detection due to the high semantic similarity between real and fake images. To address this, the authors propose a novel framework that integrates into existing deepfake detectors, improving individual fairness and generalization without compromising detection performance.

Abstract

Generative AI models have substantially improved the realism of synthetic media, yet their misuse through sophisticated DeepFakes poses significant risks. Despite recent advances in deepfake detection, fairness remains inadequately addressed, enabling deepfake makers to exploit biases against specific populations. While previous studies have emphasized group-level fairness, individual fairness (i.e., ensuring similar predictions for similar individuals) remains largely unexplored. In this work, we identify for the first time that the original principle of individual fairness fundamentally fails in the context of deepfake detection, revealing a critical gap previously unexplored in the literature. To mitigate this, we propose the first generalizable framework that can be integrated into existing deepfake detectors to enhance individual fairness and generalization. Extensive experiments conducted on leading deepfake datasets demonstrate that our approach significantly improves individual fairness while maintaining robust detection performance, outperforming state-of-the-art methods. The code is available at https://github.com/Purdue-M2/Individual-Fairness-Deepfake-Detection.


Key findings

The proposed method significantly improves individual fairness while maintaining robust detection performance across multiple datasets and model architectures. It outperforms state-of-the-art methods in both intra-domain and cross-domain evaluations, demonstrating its generalizability and effectiveness.
Approach

The proposed framework uses anchor-based learning to focus on manipulation-specific features, and a semantic-agnostic approach (patch shuffling, denoising, frequency transformation) to expose forgery artifacts while mitigating semantic bias. Sharpness-aware minimization is used to improve generalization.
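As an illustration of the semantic-agnostic idea, the sketch below shows a generic patch-shuffling transform: the image is cut into non-overlapping patches whose order is randomly permuted, destroying global facial semantics while leaving local forgery artifacts intact. This is a minimal assumption-laden sketch, not the authors' implementation; the patch size, NumPy representation, and RNG handling are all illustrative choices.

```python
import numpy as np

def patch_shuffle(img, patch=16, rng=None):
    """Split an HxWxC image into non-overlapping `patch` x `patch` tiles
    and randomly permute their positions. Global semantic layout is
    destroyed; per-patch pixel statistics (and thus local manipulation
    artifacts) are preserved."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, c = img.shape
    gh, gw = h // patch, w // patch  # grid of full tiles

    # Crop to a whole number of tiles, then reshape into (n_tiles, p, p, c).
    tiles = (img[:gh * patch, :gw * patch]
             .reshape(gh, patch, gw, patch, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(gh * gw, patch, patch, c))

    # Permute tile order, then invert the reshaping to rebuild the image.
    tiles = tiles[rng.permutation(gh * gw)]
    return (tiles.reshape(gh, gw, patch, patch, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(gh * patch, gw * patch, c))
```

Denoising and frequency transforms would slot into the same place in an augmentation pipeline: each produces a view of the input in which semantic identity cues are suppressed but manipulation traces remain, so the detector is pushed toward artifact-specific features.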
Datasets

FaceForensics++ (FF++), Deepfake Detection (DFD), Deepfake Detection Challenge (DFDC), Celeb-DF, AI-Face
Model(s)

Xception, ResNet-50, EfficientNet-B3 (several state-of-the-art deepfake detectors are also used for comparative analysis)
Author countries

United States