Individualized Deepfake Detection Exploiting Traces Due to Double Neural-Network Operations

Authors: Mushfiqur Rahman, Runze Liu, Chau-Wai Wong, Huaiyu Dai

Published: 2023-12-13 10:21:00+00:00

AI Summary

This research proposes a deepfake detection method that leverages the near-idempotency property of neural networks together with identity conditioning. By passing an authentic image through a deepfake-simulating network twice, the method exploits the resulting subtle changes to improve detection accuracy, raising the AUC from 0.92 to 0.94 and reducing its standard deviation by 17%.

Abstract

In today's digital landscape, journalists urgently require tools to verify the authenticity of facial images and videos depicting specific public figures before incorporating them into news stories. Existing deepfake detectors are not optimized for this task, in which an image is associated with a specific, identifiable individual. This study focuses on deepfake detection for facial images of individual public figures. We propose to condition the detector on the identity of the depicted individual, given the advantages revealed by our theory-driven simulations. While most detectors in the literature rely on perceptible or imperceptible artifacts present in deepfake facial images, we demonstrate that detection performance can be improved by exploiting the near-idempotency property of neural networks. In our approach, the training process involves double neural-network operations in which we pass an authentic image through a deepfake-simulating network twice. Experimental results show that the proposed method improves the area under the curve (AUC) from 0.92 to 0.94 and reduces its standard deviation by 17%. To support evaluation of detection performance for individual public figures, we curated and publicly released a dataset of ~32k images featuring 45 public figures, as existing deepfake datasets do not serve this purpose.
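
The double neural-network operation described in the abstract can be illustrated with a minimal sketch, assuming a toy convolutional autoencoder stands in for the deepfake-simulating network G; the architecture, image size, and mean-squared-error score are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: ReconstructionNet stands in for the deepfake-simulating
# network G. The paper's reconstruction operator is autoencoder-based, but this
# specific architecture and the MSE score are assumptions for illustration.
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Toy convolutional autoencoder acting as the deepfake simulator G."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def pass_change(g: nn.Module, x: torch.Tensor) -> float:
    """Mean squared change introduced by one pass through G."""
    return torch.mean((g(x) - x) ** 2).item()

g = ReconstructionNet().eval()
x = torch.rand(1, 3, 128, 128)              # placeholder authentic face crop in [0, 1]
with torch.no_grad():
    x_fake = g(x)                           # first pass: simulated deepfake of x

# Near-idempotency intuition: a trained G changes an authentic image noticeably
# on the first pass but changes an already-generated image only slightly on the
# second pass. (With this untrained toy G the printed numbers are arbitrary.)
print("authentic -> G(x):   ", pass_change(g, x))
print("deepfake  -> G(G(x)):", pass_change(g, x_fake))
```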


Key findings
The proposed method improved AUC from 0.92 to 0.94 and reduced its standard deviation by 17% compared to a baseline. The identity-conditioning approach significantly enhanced detection performance, especially for individual public figures. The near-idempotency property of the deepfake generation process was experimentally validated.
Approach
The approach uses a reconstruction operator (simulating a deepfake generator) and passes an image through it twice. The resulting subtle changes, analyzed using an identity-aware feature extractor and a Siamese network, indicate whether the image is a deepfake (minimal change) or authentic (significant change). Identity conditioning improves the detector's performance for specific individuals.
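
A hedged sketch of this comparison step follows, assuming a ResNet-18 backbone as the identity-aware feature extractor and an L2 distance for the Siamese-style comparison; the backbone choice, embedding size, threshold, and placeholder reconstruction operator are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative only: the backbone, embedding size, threshold, and placeholder
# reconstruction operator are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class IdentityAwareExtractor(nn.Module):
    """ResNet backbone re-headed to produce an identity-aware embedding.
    In the paper, this extractor is fine-tuned for identity awareness."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights would be loaded in practice
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.backbone(x), dim=1)

@torch.no_grad()
def siamese_decision(extractor: nn.Module, reconstructor: nn.Module,
                     image: torch.Tensor, threshold: float = 0.1) -> torch.Tensor:
    """Compare an image with its reconstruction in identity-aware feature space.
    Small feature change -> likely deepfake; large change -> likely authentic."""
    feat_orig = extractor(image)
    feat_recon = extractor(reconstructor(image))
    change = torch.norm(feat_orig - feat_recon, dim=1)    # per-image L2 distance
    return change < threshold                             # True -> flagged as deepfake

# Placeholder reconstruction operator; the paper uses an autoencoder-based one.
reconstructor = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1), nn.Sigmoid()).eval()
extractor = IdentityAwareExtractor().eval()
faces = torch.rand(2, 3, 224, 224)                        # placeholder face crops
print(siamese_decision(extractor, reconstructor, faces))
```

In a trained system, the Siamese branches would share the extractor's weights and the threshold would be set on a validation split; with the random weights above, the printed decisions are arbitrary.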
Datasets
A curated dataset of ~32k images featuring 45 public figures, sourced from Celeb-DF (training) and CACD (testing), was created and publicly released.
Model(s)
Siamese neural network, ResNet (pretrained for feature extraction, fine-tuned for identity-aware feature extraction), EfficientNet (baseline), Xception (baseline), autoencoder (for reconstruction operator).
Author countries
USA