SLIC: Secure Learned Image Codec through Compressed Domain Watermarking to Defend Image Manipulation

Authors: Chen-Hsiu Huang, Ja-Ling Wu

Published: 2024-10-19 11:42:36+00:00

AI Summary

This paper introduces SLIC, a secure learned image codec that embeds watermarks in the compressed domain to defend against image manipulation. If a watermarked image is tampered with and re-compressed, it degrades significantly in quality, revealing the manipulation.

Abstract

Digital image manipulation and advancements in Generative AI, such as Deepfake, have raised significant concerns regarding the authenticity of images shared on social media. Traditional image forensic techniques, while helpful, are often passive and insufficient against sophisticated tampering methods. This paper introduces the Secure Learned Image Codec (SLIC), a novel active approach to ensuring image authenticity through watermark embedding in the compressed domain. SLIC leverages neural network-based compression to embed watermarks as adversarial perturbations in the latent space, creating images that degrade in quality upon re-compression if tampered with. This degradation acts as a defense mechanism against unauthorized modifications. Our method involves fine-tuning a neural encoder/decoder to balance watermark invisibility with robustness, ensuring minimal quality loss for non-watermarked images. Experimental results demonstrate SLIC's effectiveness in generating visible artifacts in tampered images, thereby preventing their redistribution. This work represents a significant step toward developing secure image codecs that can be widely adopted to safeguard digital image integrity.


Key findings

SLIC effectively generates visible artifacts in tampered and re-compressed images. The method is robust against various image editing operations, although some filtering operations (such as JPEG recompression) reduce its effectiveness. Watermarked images maintain good quality with minimal overhead.

Approach

SLIC fine-tunes a neural encoder/decoder to embed watermarks as adversarial perturbations in the latent space during compression. Re-compressing a tampered image causes visible degradation, which acts as a defense mechanism.
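The fine-tuning described above can be thought of as balancing two pulls: the watermarked output should stay visually close to the input (invisibility), while a tampered-then-re-compressed image should drift away from it (visible degradation). The sketch below illustrates that trade-off as a simple two-term MSE objective; the function name, the weights `lam_inv`/`lam_adv`, and the MSE formulation are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def slic_finetune_loss(x, x_w, x_tampered_recomp, lam_inv=1.0, lam_adv=0.1):
    """Illustrative two-term objective (assumed, not the paper's exact loss).

    - invisibility: the watermarked output x_w should stay close to the input x
    - degradation:  a tampered-then-re-compressed image should move AWAY from x,
      so its distance enters the loss with a negative sign (we want it large)
    """
    invisibility = np.mean((x - x_w) ** 2)               # minimize: hide watermark
    degradation = np.mean((x - x_tampered_recomp) ** 2)  # maximize: punish tampering
    return lam_inv * invisibility - lam_adv * degradation
```

With this shape of objective, gradient descent drives the codec to reproduce clean watermarked images faithfully while steering re-compression of edited images toward visible artifacts.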
Datasets

COCO (training); Kodak, DIV2K, and CelebA (testing)

Model(s)

Balle2018, Minnen2018, and Cheng2020 neural image codecs from CompressAI.

Author countries

Taiwan