My Face Is Mine, Not Yours: Facial Protection Against Diffusion Model Face Swapping

Authors: Hon Ming Yam, Zhongliang Guo, Chun Pong Lau

Published: 2025-05-21 10:07:46+00:00

AI Summary

This paper proposes a proactive defense against diffusion-based face-swapping deepfakes using adversarial attacks. It introduces a dual-loss adversarial framework that combines a face identity loss with an inference-step averaging loss to generate robust perturbations efficiently, addressing the architecture dependence and global-perturbation limitations of existing methods.

Abstract

The proliferation of diffusion-based deepfake technologies poses significant risks for unauthorized and unethical facial image manipulation. While traditional countermeasures have primarily focused on passive detection methods, this paper introduces a novel proactive defense strategy through adversarial attacks that preemptively protect facial images from being exploited by diffusion-based deepfake systems. Existing adversarial protection methods predominantly target conventional generative architectures (GANs, AEs, VAEs) and fail to address the unique challenges presented by diffusion models, which have become the predominant framework for high-quality facial deepfakes. Current diffusion-specific adversarial approaches are limited by their reliance on specific model architectures and weights, rendering them ineffective against the diverse landscape of diffusion-based deepfake implementations. Additionally, they typically employ global perturbation strategies that inadequately address the region-specific nature of facial manipulation in deepfakes.


Key findings

The proposed method robustly protects against a wide range of diffusion-based deepfake models, including Face Adapter and REFace, while maintaining reasonable visual fidelity. It outperforms existing methods at disrupting face swapping without sacrificing image quality, demonstrating its effectiveness as a proactive deepfake countermeasure.
Approach

The authors propose a dual-loss adversarial framework operating in the latent space of Latent Diffusion Models. A face identity loss targets the conditional mechanisms that face swapping relies on, while an inference-step averaging loss keeps gradient computation tractable across the iterative sampling process; together they yield robust adversarial perturbations.
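The dual-loss idea can be sketched as a PGD-style ascent loop. Everything below is a toy surrogate under stated assumptions: `id_embed` stands in for an ArcFace-style embedding, `denoise` for a U-Net denoising prediction at inference step `t`, and a finite-difference gradient replaces autograd through the real models; the hyperparameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models): a linear "face
# embedding" in place of ArcFace, and an exponential decay in place of a
# U-Net denoising prediction at inference step t.
W = rng.standard_normal((4, 4))

def id_embed(z):
    return W @ z

def denoise(z, t):
    return z * np.exp(-0.1 * t)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def objective(z_adv, z_src, steps, lam=1.0):
    # Dual loss: (1) push the protected latent's face embedding away from the
    # source identity; (2) average the denoiser disruption over a handful of
    # sampled inference steps instead of the full sampling chain.
    id_term = -cos(id_embed(z_adv), id_embed(z_src))
    avg_term = float(np.mean([np.linalg.norm(denoise(z_adv, t) - denoise(z_src, t))
                              for t in steps]))
    return id_term + lam * avg_term

def protect(z_src, steps, eps=0.09, alpha=0.02, iters=30):
    # PGD-style sign ascent under an L-infinity budget `eps`; a central
    # finite-difference gradient stands in for autograd through real models.
    delta = 0.01 * np.sign(rng.standard_normal(z_src.size))
    h = 1e-4
    for _ in range(iters):
        grad = np.zeros_like(z_src)
        for i in range(z_src.size):
            e = np.zeros_like(z_src)
            e[i] = h
            grad[i] = (objective(z_src + delta + e, z_src, steps)
                       - objective(z_src + delta - e, z_src, steps)) / (2 * h)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return z_src + delta

z = rng.standard_normal(4)
z_protected = protect(z, steps=[1, 10, 25])
```

In the real setting the perturbation would live in the LDM latent (or pixel) space and the gradient would come from backpropagation; sampling a few inference steps and averaging, as in `objective`, is what avoids differentiating through the entire denoising trajectory.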
Datasets

CelebA-HQ

Model(s)

Stable Diffusion v1-5; ArcFace (Glint360K) for face-ID extraction

Author countries

Hong Kong, United Kingdom