Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection
Authors: Delong Zhu, Yuezun Li, Baoyuan Wu, Jiaran Zhou, Zhibo Wang, Siwei Lyu
Published: 2024-12-02 04:17:48+00:00
AI Summary
This paper proposes FacePoison, a proactive DeepFake defense framework that sabotages face detection, a critical pre-processing step for most DeepFake methods. By adding imperceptible adversarial perturbations that cause face detectors to malfunction, FacePoison distorts the extracted faces and thereby disrupts DeepFake model training or synthesis. The authors also introduce VideoFacePoison, an extension that propagates these perturbations across video frames using optical flow, reducing computational overhead while maintaining attack effectiveness.
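The core mechanism can be illustrated with a minimal FGSM-style sketch. This is not the paper's actual attack: it uses a hypothetical linear "detector score" as a stand-in for a real face detector, so the gradient is available in closed form. The idea shown is the same, though: step the image against the gradient sign to suppress the detection score while keeping the perturbation imperceptibly small.

```python
import numpy as np

def detector_score(image, w):
    # Toy surrogate for a face detector's confidence: a linear score.
    # (A real attack would backpropagate through the detector network.)
    return float(np.sum(w * image))

def facepoison_fgsm(image, w, eps=0.03):
    # FGSM-style perturbation that lowers the detection score.
    # For the linear surrogate, d(score)/d(image) is simply `w`,
    # so we step against its sign and clip pixels to a valid range.
    grad = w
    adv = image - eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(8, 8))   # stand-in face crop
w = rng.normal(size=(8, 8))                # surrogate detector weights

clean_score = detector_score(img, w)
adv_img = facepoison_fgsm(img, w)
poisoned_score = detector_score(adv_img, w)
# The score drops while the L-infinity perturbation stays within eps.
```

The `eps` bound is what keeps the perturbation "imperceptible": every pixel moves by at most 0.03 on a [0, 1] scale.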
Abstract
This paper investigates the feasibility of a proactive DeepFake defense framework, FacePoison, to prevent individuals from becoming victims of DeepFake videos by sabotaging face detection. The motivation stems from the reliance of most DeepFake methods on face detectors to automatically extract victim faces from videos for training or synthesis (testing). Once the face detectors malfunction, the extracted faces will be distorted or incorrect, subsequently disrupting the training or synthesis of the DeepFake model. To achieve this, we adapt various adversarial attacks with a dedicated design for this purpose and thoroughly analyze their feasibility. Building on FacePoison, we introduce VideoFacePoison, a strategy that propagates the perturbations across video frames rather than applying the attack independently to each frame. This strategy substantially reduces the computational overhead while retaining favorable attack performance. Our method is validated on five face detectors, and extensive experiments against eleven different DeepFake models demonstrate the effectiveness of disrupting face detectors to hinder DeepFake generation.
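The VideoFacePoison idea of reusing a key frame's perturbation on later frames can be sketched as follows. This toy assumes a known integer translation between frames instead of estimated dense optical flow (the paper's actual setting), and `propagate_perturbation` is a hypothetical helper; the point is only that warping a precomputed perturbation is far cheaper than re-running the attack per frame.

```python
import numpy as np

def propagate_perturbation(delta, flow):
    # Warp a perturbation to the next frame along a (toy) constant flow.
    # VideoFacePoison estimates dense optical flow between frames;
    # here we assume a known integer translation (dx, dy) for illustration.
    dx, dy = flow
    return np.roll(np.roll(delta, dy, axis=0), dx, axis=1)

# The key frame receives a full adversarial attack; subsequent frames
# reuse the warped copy, avoiding a fresh optimization per frame.
delta_key = np.zeros((6, 6))
delta_key[2, 2] = 0.05                 # perturbation computed on the key frame
delta_next = propagate_perturbation(delta_key, flow=(1, 1))
# The perturbation energy follows the motion: it now sits at (3, 3).
```

In practice the attack would be refreshed on new key frames when the flow-warped perturbation's effect decays, trading a small accuracy loss for a large speedup.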