Disrupting Diffusion-based Inpainters with Semantic Digression

Authors: Geonho Son, Juhun Lee, Simon S. Woo

Published: 2024-07-14 17:21:19+00:00

AI Summary

This paper introduces DDD, a Digression guided Diffusion Disruption framework for combating diffusion-based inpainting deepfakes. DDD identifies the most vulnerable diffusion timestep and maximizes the distance between the inpainting instance's hidden states and a semantic-aware centroid, resulting in stronger disruption than existing methods while requiring less GPU memory and time.

Abstract

The fabrication of visual misinformation on the web and social media has increased exponentially with the advent of foundational text-to-image diffusion models. Namely, Stable Diffusion inpainters allow the synthesis of maliciously inpainted images of personal and private figures and copyrighted content, also known as deepfakes. To combat such generations, a disruption framework, Photoguard, has been proposed, which adds adversarial noise to the context image to disrupt its inpainting synthesis. While that framework suggested a diffusion-friendly approach, the disruption is not sufficiently strong, and it requires a significant amount of GPU memory and time to immunize the context image. In our work, we re-examine both the minimal and favorable conditions for a successful inpainting disruption, proposing DDD, a Digression guided Diffusion Disruption framework. First, we identify the most adversarially vulnerable diffusion timestep range with respect to the hidden space. Within this scope of the noised manifold, we pose the problem as a semantic digression optimization. We maximize the distance between the inpainting instance's hidden states and a semantic-aware hidden state centroid, calibrated both by Monte Carlo sampling of hidden states and a discretely projected optimization in the token space. Effectively, our approach achieves stronger disruption and a higher success rate than Photoguard while lowering the GPU memory requirement and speeding up the optimization by up to three times.
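The first step named in the abstract, locating the most adversarially vulnerable diffusion timestep range with respect to the hidden space, can be pictured with a small probe. This is a minimal sketch, not the authors' procedure: `hidden_fn(x_t, t)` is an assumed wrapper that runs the inpainting UNet and returns the intermediate hidden states (e.g., captured with a forward hook), `add_noise(x0, noise, t)` is assumed to follow the usual DDPM forward process, and the finite-difference sensitivity score is only an illustrative proxy for "vulnerability".

```python
import torch


@torch.no_grad()
def scan_timestep_vulnerability(hidden_fn, add_noise, latents, timesteps,
                                eps=1e-3, n_probe=4):
    """Score each timestep by how strongly small input perturbations move the
    hidden states (a finite-difference sensitivity proxy); higher scores
    suggest a more adversarially vulnerable timestep."""
    scores = {}
    for t in timesteps:
        noise = torch.randn_like(latents)
        x_t = add_noise(latents, noise, t)        # noised context latents at step t
        h_ref = hidden_fn(x_t, t)                 # reference hidden states
        sens = 0.0
        for _ in range(n_probe):
            delta = eps * torch.randn_like(x_t)   # tiny random probe
            sens += (hidden_fn(x_t + delta, t) - h_ref).norm() / delta.norm()
        scores[int(t)] = float(sens / n_probe)
    return scores  # attack within the timestep range with the highest scores
```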


Key findings
DDD achieves stronger disruption and a higher success rate than Photoguard, the state-of-the-art method. It significantly reduces GPU memory requirements and speeds up optimization by up to three times. Human evaluation confirms DDD's superior performance in disrupting inpainting-based deepfakes.
Approach
DDD optimizes adversarial perturbations in the context image by targeting the most adversarially vulnerable timestep range of the diffusion process. It frames the attack as a semantic digression optimization, maximizing the distance between the inpainting instance's hidden states and a semantic-aware centroid computed through Monte Carlo sampling of hidden states and a discretely projected optimization in the token space, as sketched below.
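The core loop can be illustrated as a projected-gradient attack on the context image that pushes the hidden states away from a precomputed centroid. This is a minimal sketch under assumptions: `hidden_fn(x_t, t)` and `add_noise(x0, noise, t)` are the same stand-ins as in the earlier sketch, `encode` is a differentiable image-to-latent encoder (e.g., the VAE encoder), and `centroid` is taken as given; the paper's centroid calibration (Monte Carlo sampling of hidden states plus a discretely projected token-space optimization) is not reproduced here.

```python
import torch
import torch.nn.functional as F


def digression_attack(image, mask, centroid, encode, hidden_fn, add_noise,
                      t_vuln, steps=100, eps=8 / 255, alpha=1 / 255):
    """PGD on the context image: maximize the distance between hidden states at
    a vulnerable timestep `t_vuln` and the semantic-aware centroid, under an
    L_inf perturbation budget `eps`."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        latents = encode(x_adv * (1 - mask))                  # masked context only
        noise = torch.randn_like(latents)
        x_t = add_noise(latents, noise, t_vuln)               # jump to vulnerable t
        h = hidden_fn(x_t, t_vuln)
        loss = -F.mse_loss(h, centroid)                       # digression: push away
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # descend -distance
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1).detach()
    return x_adv                                              # immunized context image
```

The returned image replaces the original context image; when an inpainter is later run on it, the perturbation steers the hidden states away from the semantic centroid and degrades the synthesized content.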
Datasets
A dataset of 381 image-prompt pairs with varying inpainting strengths (0.8, 0.9, 1.0) was used for evaluation.
Model(s)
Stable Diffusion inpainters (Runway v1.5 and Stability AI v2.0) are used for inpainting, and their hidden states are used in DDD's loss function.
Author countries
South Korea