Leveraging Optimization for Adaptive Attacks on Image Watermarks

Authors: Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, Florian Kerschbaum

Published: 2023-09-29 03:36:42+00:00

AI Summary

This paper introduces adaptive, learnable attacks on image watermarks, framing attack design as an optimization problem. By constructing differentiable surrogate watermarking keys, the authors show that such attacks break all five surveyed watermarking methods with negligible image quality degradation.

Abstract

Untrustworthy users can misuse image generators to synthesize high-quality deepfakes and engage in unethical activities. Watermarking deters misuse by marking generated content with a hidden message, enabling its detection using a secret watermarking key. A core security property of watermarking is robustness, which states that an attacker can only evade detection by substantially degrading image quality. Assessing robustness requires designing an adaptive attack for the specific watermarking algorithm. When evaluating watermarking algorithms and their (adaptive) attacks, it is challenging to determine whether an adaptive attack is optimal, i.e., the best possible attack. We solve this problem by defining an objective function and then approach adaptive attacks as an optimization problem. The core idea of our adaptive attacks is to replicate secret watermarking keys locally by creating surrogate keys that are differentiable and can be used to optimize the attack's parameters. We demonstrate for Stable Diffusion models that such an attacker can break all five surveyed watermarking methods at no visible degradation in image quality. Optimizing our attacks is efficient and requires less than 1 GPU hour to reduce the detection accuracy to 6.3% or less. Our findings emphasize the need for more rigorous robustness testing against adaptive, learnable attackers.
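The abstract's "objective function" view can be sketched in assumed notation (the symbols below are illustrative, not the paper's exact formulation): the attacker tunes attack parameters to evade detection under a surrogate key while keeping the attacked image close to the original,

```latex
\min_{\theta} \; \mathbb{E}_{x \sim \mathcal{D}}
  \Big[\, \ell_{\mathrm{detect}}\big(\mathcal{A}_\theta(x), \tau'\big)
  \;+\; \lambda \, d\big(\mathcal{A}_\theta(x), x\big) \Big]
```

where \(\mathcal{A}_\theta\) is the attack with parameters \(\theta\), \(\tau'\) is the locally replicated (surrogate) watermarking key, \(\ell_{\mathrm{detect}}\) is a differentiable detection loss, \(d\) is an image-quality distance, and \(\lambda\) trades off evasion against quality degradation.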


Key findings

Adaptive attacks evaded detection for all five watermarking methods tested (TRW, WDM, DWT, DWT-SVD, RivaGAN) with little to no visible degradation in image quality. Adversarial Compression proved particularly effective across all methods.
Approach

The attack is cast as an optimization problem: an objective function rewards watermark evasion while penalizing image quality degradation. The authors make this objective optimizable by constructing differentiable surrogate watermarking keys, which drive two learnable attacks: Adversarial Noising and Adversarial Compression.
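A minimal sketch of the Adversarial Noising idea, assuming a PyTorch setting: the `SurrogateDecoder` below is a hypothetical stand-in for the paper's differentiable surrogate key (any differentiable watermark detector works), and a PGD-style loop perturbs the image within a small L-infinity ball so the surrogate no longer recovers the watermark bits. Names, architecture, and hyperparameters are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SurrogateDecoder(nn.Module):
    """Hypothetical differentiable surrogate for a secret watermarking key:
    maps an image to per-bit logits of the hidden message."""
    def __init__(self, msg_bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, msg_bits),
        )

    def forward(self, x):
        return self.net(x)  # one logit per watermark bit

def adversarial_noising(image, decoder, target_bits, eps=2 / 255, steps=50, lr=1e-2):
    """Optimize a bounded perturbation so the surrogate decoder's output
    disagrees with the decoded watermark bits (evasion), while the L-inf
    budget `eps` caps the image quality degradation."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = decoder(torch.clamp(image + delta, 0, 1))
        # Minimizing the negated BCE pushes logits away from the target bits.
        loss = -nn.functional.binary_cross_entropy_with_logits(logits, target_bits)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # stay inside the quality budget
    return torch.clamp(image + delta, 0, 1).detach()
```

Usage: decode the watermark bits from the watermarked image with the surrogate, then pass them as `target_bits`; the returned image differs from the input by at most `eps` per pixel, mirroring the paper's claim of evasion at negligible quality loss.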
Datasets

LAION-5B, LAION-2B, LAION-HD, MS-COCO-2017

Model(s)

Stable Diffusion (versions 1.1 and 2.0), ResNet-50 (for key generation)

Author countries

Canada