FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through Identity Obfuscation

Authors: Li Wang, Zheng Li, Xuhong Zhang, Shouling Ji, Shanqing Guo

Published: 2025-02-15 13:45:19+00:00

AI Summary

FaceSwapGuard (FSG) is a novel black-box defense mechanism against deepfake face-swapping. It introduces imperceptible perturbations to facial images, disrupting identity encoders and causing face-swapping techniques to generate images with identities significantly different from the original.

Abstract

DeepFakes pose a significant threat to our society. One representative DeepFake application is face-swapping, which replaces the identity in a facial image with that of a victim. Although existing methods partially mitigate these risks by degrading the quality of swapped images, they often fail to disrupt the identity transformation effectively. To fill this gap, we propose FaceSwapGuard (FSG), a novel black-box defense mechanism against deepfake face-swapping threats. Specifically, FSG introduces imperceptible perturbations to a user's facial image, disrupting the features extracted by identity encoders. When shared online, these perturbed images mislead face-swapping techniques, causing them to generate facial images with identities significantly different from the original user. Extensive experiments demonstrate the effectiveness of FSG against multiple face-swapping techniques, reducing the face match rate from 90% (without defense) to below 10%. Both qualitative and quantitative studies further confirm its ability to confuse human perception, highlighting its practical utility. Additionally, we investigate key factors that may influence FSG and evaluate its robustness against various adaptive adversaries.


Key findings
FSG significantly reduced the face match rate from over 90% (without defense) to below 10% on multiple APIs. The approach proved robust against adaptive adversaries and effective across various face-swapping models, including diffusion-based models. Qualitative and quantitative results confirmed its ability to confuse both machine and human perception.
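The face match rate reported above can be illustrated with a small sketch. The paper does not spell out its exact verification protocol, so this assumes the common definition: the fraction of image pairs whose identity-embedding cosine similarity exceeds a verification threshold (the `threshold` value here is illustrative, not taken from the paper).

```python
import numpy as np

def face_match_rate(emb_a, emb_b, threshold=0.5):
    """Fraction of embedding pairs judged to be the same identity.

    emb_a, emb_b: arrays of shape (n_pairs, dim) holding identity
    embeddings (e.g. from FaceNet/ArcFace-style encoders). A pair counts
    as a match when the cosine similarity of its two embeddings exceeds
    the verification threshold. This is a generic sketch of the metric,
    not the paper's exact evaluation code.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = np.sum(a * b, axis=1)  # row-wise cosine similarity
    return float(np.mean(sims > threshold))
```

Under this definition, an effective defense drives the match rate between swapped outputs and the victim's reference images from near 1.0 down toward 0.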
Approach
FSG adds imperceptible perturbations to a user's facial image to disrupt the features extracted by identity encoders. The perturbations are computed against a surrogate identity encoder, so they transfer to unseen face-swapping pipelines in a black-box, model-agnostic manner, maximizing identity deviation while preserving visual similarity.
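The core idea can be sketched as projected gradient descent against a surrogate encoder. Everything below is a toy stand-in: the "encoder" is a random linear map with L2 normalization, and `eps` is deliberately large so the effect is visible at this scale (a real defense uses a much smaller, imperceptible budget and a trained face-embedding network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate identity encoder: a random linear map followed by
# L2 normalization, standing in for an ArcFace/FaceNet-style network.
W = rng.standard_normal((16, 64))

def encode(x):
    z = W @ x
    return z / np.linalg.norm(z)

def fsg_perturb(x, eps=0.3, alpha=0.05, steps=50):
    """PGD-style sketch of identity obfuscation: push the surrogate
    embedding of x + delta away from the clean embedding encode(x),
    subject to an L-infinity budget eps (toy scale, not imperceptible)."""
    z0 = encode(x)
    # Random start: at delta = 0 the cosine similarity is exactly 1 (a
    # maximum), so its gradient vanishes there.
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        u = W @ (x + delta)
        n = np.linalg.norm(u)
        s = (z0 @ u) / n                    # cosine similarity to clean identity
        grad_u = z0 / n - s * u / n**2      # d s / d u
        grad_x = W.T @ grad_u               # chain rule through the linear map
        # Signed gradient descent on s, projected back into the eps-ball.
        delta = np.clip(delta - alpha * np.sign(grad_x), -eps, eps)
    return x + delta

x = rng.standard_normal(64)                 # stand-in for a face image
x_adv = fsg_perturb(x)
sim = encode(x) @ encode(x_adv)             # similarity drops below 1
```

In the real method the surrogate would be a differentiable face-recognition model and the budget small enough that `x_adv` is visually indistinguishable from `x`; the optimization structure, however, is the same: minimize identity similarity under a perceptibility constraint.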
Datasets
CelebA-HQ
Model(s)
FaceNet, ArcFace, CosFace, FaceShifter, SimSwap, FSGAN, and a diffusion-based model (Diff-AE)
Author countries
China