FakeTracer: Catching Face-swap DeepFakes via Implanting Traces in Training

Authors: Pu Sun, Honggang Qi, Yuezun Li, Siwei Lyu

Published: 2023-07-27 02:36:13+00:00

AI Summary

FakeTracer is a proactive defense method against face-swap deepfakes that implants imperceptible traces into the training data. Two types of traces, sustainable and erasable, shape the deepfake model's learning so that the faces it generates carry detectable markers that expose them as forgeries.

Abstract

Face-swap DeepFake is an emerging AI-based face forgery technique that can replace the original face in a video with a generated face of the target identity while retaining consistent facial attributes such as expression and orientation. Because faces carry highly private information, misuse of this technique can raise severe social concerns, and defending against DeepFakes has recently drawn tremendous attention. In this paper, we describe a new proactive defense method called FakeTracer that exposes face-swap DeepFakes by implanting traces during training. Compared to general face-synthesis DeepFakes, the face-swap DeepFake is more complex: it involves an identity change, passes through an encoding-decoding process, and is trained without supervision, all of which increase the difficulty of implanting traces into the training phase. To effectively defend against face-swap DeepFakes, we design two types of traces, a sustainable trace (STrace) and an erasable trace (ETrace), to be added to the training faces. During training, these manipulated faces affect the learning of the face-swap DeepFake model, causing it to generate faces that contain only the sustainable trace. By checking for these two traces, our method can effectively expose DeepFakes. Extensive experiments corroborate the efficacy of our method in defending against face-swap DeepFakes.
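The abstract does not specify how the traces are constructed; as a rough, non-authoritative illustration, the sketch below implants STrace and ETrace into a training face as low-amplitude additive bit patterns. The helper names (`bits_to_pattern`, `implant_traces`), the random-basis construction, and the amplitudes are assumptions made here for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the traces are ASSUMED to be low-amplitude additive
# patterns derived from defender-chosen bit sequences (not the paper's code).
import numpy as np

def bits_to_pattern(bits, shape, amplitude, seed):
    """Map a bit sequence to a +/-amplitude spatial pattern via a fixed random basis."""
    rng = np.random.default_rng(seed)
    basis = rng.standard_normal((len(bits), *shape))
    signs = 2.0 * np.asarray(bits, dtype=np.float32) - 1.0   # {0,1} -> {-1,+1}
    pattern = np.tensordot(signs, basis, axes=1)
    return amplitude * pattern / (np.abs(pattern).max() + 1e-8)

def implant_traces(face, strace_bits, etrace_bits, s_amp=2 / 255, e_amp=2 / 255):
    """Add both traces to one training face (float array in [0, 1])."""
    strace = bits_to_pattern(strace_bits, face.shape, s_amp, seed=0)
    etrace = bits_to_pattern(etrace_bits, face.shape, e_amp, seed=1)
    return np.clip(face + strace + etrace, 0.0, 1.0)
```

In this toy formulation, only faces of the protected target identity would be processed with `implant_traces` before an attacker collects them to train a face-swap model.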


Key findings
The method achieves high bit accuracy (over 90%) in recovering implanted traces from generated deepfakes. It demonstrates competitive detection accuracy compared to existing methods, while maintaining good image quality. The approach remains effective even when only a fraction of training images are modified.
Approach
FakeTracer inserts two types of traces, a sustainable trace (STrace) and an erasable trace (ETrace), into the training images of the target identity. The sustainable trace survives the face-swap model's encoding-decoding and remains in the generated faces, whereas the erasable trace does not. A face that carries the sustainable trace but lacks the erasable one can therefore be identified as a deepfake, while a pristine face of the target identity carries both traces and an unrelated real face carries neither.
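A minimal sketch of that detection rule follows, assuming hypothetical decoders `decode_strace` and `decode_etrace` that recover bit sequences from a face (this interface is not given in the summary) and a defender-chosen similarity threshold.

```python
# Sketch of the presence/absence decision rule, not the authors' implementation.
import numpy as np

def bit_accuracy(recovered, reference):
    """Fraction of recovered bits that match the defender's reference pattern."""
    return float(np.mean(np.asarray(recovered) == np.asarray(reference)))

def is_faceswap_deepfake(face, strace_ref, etrace_ref,
                         decode_strace, decode_etrace, threshold=0.75):
    """Flag a face as a face-swap deepfake when the sustainable trace survives
    but the erasable trace was removed by the encoding-decoding process."""
    s_acc = bit_accuracy(decode_strace(face), strace_ref)
    e_acc = bit_accuracy(decode_etrace(face), etrace_ref)
    return s_acc >= threshold and e_acc < threshold
```

The 0.75 threshold here is arbitrary; the over-90% bit accuracy reported in the key findings suggests a comfortable margin for a rule of this form.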
Datasets
Celeb-DF, DFDC, FaceForensics++
Model(s)
A custom auto-encoder architecture is used to simulate the face-swap deepfake model's encoding-decoding process, and the deepfake synthesis model from Celeb-DF is used in the experiments.
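For concreteness, a minimal auto-encoder of the kind that could play this simulation role is sketched below in PyTorch; the layer sizes, 64x64 input resolution, and latent dimension are assumptions, not the paper's actual architecture.

```python
# Toy stand-in for a face-swap encoder-decoder (assumed architecture, not the paper's).
import torch
import torch.nn as nn

class ToyFaceSwapAE(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(                       # 3x64x64 -> latent vector
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(                       # latent vector -> 3x64x64
            nn.Linear(latent_dim, 256 * 8 * 8),
            nn.Unflatten(1, (256, 8, 8)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))
```

In the paper's framing, training such a model on traced faces leads it to reproduce the sustainable trace in its outputs, while the erasable trace does not survive the encoding-decoding.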
Author countries
China, USA