Nearly Zero-Cost Protection Against Mimicry by Personalized Diffusion Models

Authors: Namhyuk Ahn, KiYoon Yoo, Wonhyuk Ahn, Daesik Kim, Seung-Hun Nam

Published: 2024-12-16 03:46:45+00:00

AI Summary

This paper introduces FastProtect, an image protection framework that defends against mimicry by personalized diffusion models. It combines perturbation pre-training with a mixture-of-perturbations approach and adaptive inference schemes, achieving protection efficacy comparable to existing methods while significantly improving invisibility and drastically reducing inference time.

Abstract

Recent advancements in diffusion models have revolutionized image generation but pose risks of misuse, such as replicating artworks or generating deepfakes. Existing image protection methods, though effective, struggle to balance protection efficacy, invisibility, and latency, limiting their practical use. We introduce perturbation pre-training to reduce latency and propose a mixture-of-perturbations approach that dynamically adapts to input images to minimize performance degradation. Our novel training strategy computes the protection loss across multiple VAE feature spaces, while adaptive targeted protection at inference enhances robustness and invisibility. Experiments show comparable protection performance with improved invisibility and drastically reduced inference time. The code and demo are available at https://webtoon.github.io/impasto


Key findings
FastProtect runs 200x-3500x faster than existing methods while maintaining comparable protection efficacy. It also improves invisibility, making it a more practical solution for real-world use, and remains robust across various domains and countermeasures.
Approach
FastProtect uses perturbation pre-training to reduce latency, employing a mixture-of-perturbations that adapts to input images. A multi-layer protection loss is used during training, and adaptive targeted protection and adaptive protection strength are applied during inference to enhance robustness and invisibility.
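The inference-time flow described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the nearest-prototype routing, the texture-based strength heuristic, and the perturbation normalization are all hypothetical stand-ins for the paper's learned mixture-of-perturbations assignment and adaptive protection strength, not FastProtect's actual implementation.

```python
import numpy as np

def protect(image, perturbation_bank, prototypes, base_strength=8 / 255):
    """Apply a pre-trained perturbation to an image at near-zero cost.

    image: float array in [0, 1], shape (C, H, W).
    perturbation_bank: list of pre-trained perturbations (same shape as image).
    prototypes: list of reference images, one per perturbation (assumption:
    routing is done by nearest prototype in pixel space).
    """
    # Route the input to one pre-trained perturbation (hypothetical rule:
    # pick the prototype with the smallest mean squared distance).
    dists = [np.mean((image - p) ** 2) for p in prototypes]
    k = int(np.argmin(dists))
    delta = perturbation_bank[k]

    # Adaptive protection strength (hypothetical proxy: scale with image
    # texture so flat regions receive a less visible perturbation).
    strength = base_strength * (0.5 + np.clip(image.std(), 0.0, 0.5))

    # Normalize the perturbation to unit max amplitude, add it, and clamp
    # back to the valid pixel range.
    delta = delta / (np.abs(delta).max() + 1e-8)
    return np.clip(image + strength * delta, 0.0, 1.0)
```

Because the perturbations are pre-trained offline and inference reduces to a lookup plus an addition, no per-image optimization loop is needed, which is where the large speedup over optimization-based protection methods comes from.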
Datasets
ImageNet, FFHQ, WikiArt, NAVER Webtoon artworks
Model(s)
Stable Diffusion v1.5, LoRA, VAE (used within Stable Diffusion), LPIPS (AlexNet)
Author countries
South Korea