PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators

Authors: Nils Lukas, Florian Kerschbaum

Published: 2023-04-14 19:44:37+00:00

AI Summary

This paper introduces Pivotal Tuning Watermarking (PTW), a method for watermarking pre-trained image generators that is three orders of magnitude faster than watermarking from scratch and requires no training data. However, the research reveals that this watermarking is not robust against adaptive white-box attacks.

Abstract

Deepfakes refer to content synthesized using deep generators, which, when misused, have the potential to erode trust in digital media. Synthesizing high-quality deepfakes requires access to large and complex generators only a few entities can train and provide. The threat is malicious users that exploit access to the provided model and generate harmful deepfakes without risking detection. Watermarking makes deepfakes detectable by embedding an identifiable code into the generator that is later extractable from its generated images. We propose Pivotal Tuning Watermarking (PTW), a method for watermarking pre-trained generators (i) three orders of magnitude faster than watermarking from scratch and (ii) without the need for any training data. We improve existing watermarking methods and scale to generators $4\times$ larger than related work. PTW can embed longer codes than existing methods while better preserving the generator's image quality. We propose rigorous, game-based definitions for robustness and undetectability, and our study reveals that watermarking is not robust against an adaptive white-box attacker who controls the generator's parameters. We propose an adaptive attack that can successfully remove any watermark with access to only 200 non-watermarked images. Our work challenges the trustworthiness of watermarking for deepfake detection when the parameters of a generator are available. The source code to reproduce our experiments is available at https://github.com/nilslukas/gan-watermark.


Key findings
PTW significantly improves the speed and efficiency of watermarking pre-trained generators. However, the study demonstrates that the watermark is vulnerable to white-box attacks, particularly a novel Reverse Pivotal Tuning attack, which can effectively remove watermarks using as few as 200 non-watermarked (clean) images. Black-box attacks are less effective.
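To make the removal idea concrete, below is a minimal, hypothetical sketch of a Reverse Pivotal Tuning-style attack, not the authors' implementation. It assumes the attacker controls a watermarked `generator` and holds a small tensor batch `clean_images` of non-watermarked images (e.g. ~200) in the generator's output range; all function names and hyperparameters are illustrative.

```python
# Hypothetical sketch of a Reverse Pivotal Tuning-style removal attack
# (not the authors' code). `generator` is the watermarked model under the
# attacker's control; `clean_images` is a small batch of non-watermarked
# images. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def rpt_attack(generator, clean_images, z_dim=512, invert_steps=500,
               tune_steps=1000, device="cuda"):
    clean_images = clean_images.to(device)
    n = clean_images.shape[0]

    # Step 1: invert the clean images into the generator's latent space
    # while keeping the generator frozen.
    generator.requires_grad_(False)
    latents = torch.randn(n, z_dim, device=device, requires_grad=True)
    opt_z = torch.optim.Adam([latents], lr=0.01)
    for _ in range(invert_steps):
        opt_z.zero_grad()
        recon = generator(latents)
        F.mse_loss(recon, clean_images).backward()
        opt_z.step()

    # Step 2: fine-tune the generator so these pivots map to the clean,
    # non-watermarked images, overwriting the embedded watermark.
    # (A perceptual loss such as LPIPS could augment the L2 term.)
    latents = latents.detach()
    generator.requires_grad_(True)
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(tune_steps):
        opt_g.zero_grad()
        recon = generator(latents)
        F.mse_loss(recon, clean_images).backward()
        opt_g.step()

    return generator
```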
Approach
PTW embeds watermarks into pre-trained generative models by fine-tuning a copy of the generator while preserving its original latent space mapping. This is achieved using a loss function that balances image quality preservation with watermark embedding, significantly speeding up the process compared to training from scratch.
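As a rough illustration of this idea, here is a minimal sketch of a single PTW-style fine-tuning step, not the authors' code. It assumes a trainable copy `generator`, a frozen original `pivot`, a watermark `decoder` that predicts message bits from images, and a binary `message` to embed; names and loss weights are assumptions.

```python
# Hypothetical sketch of a PTW-style fine-tuning step (not the authors' code).
# The copied generator is anchored to the frozen original's outputs while a
# decoder loss embeds the watermark bits.
import torch
import torch.nn.functional as F

def ptw_step(generator, pivot, decoder, message, optimizer,
             batch_size=8, z_dim=512, lambda_wm=1.0, device="cuda"):
    optimizer.zero_grad()

    # Sample latents once and feed them to both generators so the trainable
    # copy preserves the original latent-to-image mapping.
    z = torch.randn(batch_size, z_dim, device=device)
    with torch.no_grad():
        x_pivot = pivot(z)          # images from the frozen original generator
    x_wm = generator(z)             # images from the trainable copy

    # (i) quality term: stay close to the pivot's output
    # (a perceptual loss such as LPIPS could replace or augment the L2 term)
    loss_quality = F.mse_loss(x_wm, x_pivot)

    # (ii) watermark term: the decoder should recover the embedded bits
    logits = decoder(x_wm)          # shape: (batch_size, message_length)
    loss_wm = F.binary_cross_entropy_with_logits(
        logits, message.expand_as(logits).float())

    loss = loss_quality + lambda_wm * loss_wm
    loss.backward()
    optimizer.step()
    return loss_quality.item(), loss_wm.item()
```

Because only the generator copy is tuned (no discriminator or training data is involved), this kind of step is what allows watermark embedding to run far faster than training a watermarked generator from scratch.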
Datasets
FFHQ (256x256 and 1024x1024 resolutions), AFHQv2
Model(s)
StyleGAN2, StyleGAN3, StyleGAN-XL
Author countries
Canada