Robust Identity Perceptual Watermark Against Deepfake Face Swapping

Authors: Tianyi Wang, Mengxiao Huang, Harry Cheng, Bin Ma, Yinglong Wang

Published: 2023-11-02 16:04:32+00:00

AI Summary

This paper introduces a robust identity perceptual watermarking framework for proactive deepfake face swapping detection and source tracing. It assigns identity semantics to the watermarks, secures them with chaotic encryption, and embeds and recovers them with an encoder-decoder framework, outperforming existing methods in both detection accuracy and generalization.

Abstract

Notwithstanding the convenience and entertainment it offers society, Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models. Because high-quality synthetic images contain only imperceptible artifacts, passive detection models against face swapping in recent years usually suffer performance degradation due to limited generalizability. Therefore, several studies have attempted to proactively protect original images against malicious manipulation by inserting invisible signals in advance. However, existing proactive defense approaches demonstrate unsatisfactory results with respect to visual quality, detection accuracy, and source tracing ability. In this study, to fill this research gap, we propose the first robust identity perceptual watermarking framework that proactively performs detection and source tracing against Deepfake face swapping at the same time. We assign identity semantics to the watermarks according to the image contents and devise an unpredictable and nonreversible chaotic encryption system to ensure watermark confidentiality. The watermarks are encoded and recovered by jointly training an encoder-decoder framework along with adversarial image manipulations. Falsification and source tracing are accomplished by checking the consistency between the content-matched identity perceptual watermark and the robust watermark recovered from the image. Extensive experiments demonstrate state-of-the-art detection performance on Deepfake face swapping under both cross-dataset and cross-manipulation settings.
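The abstract names a chaotic encryption step that keeps the identity-perceptual watermark confidential before embedding. The paper's summary does not specify the chaotic system, so the sketch below only illustrates the general idea with a logistic-map keystream XORed onto the watermark bits; the function names, the choice of map, and its parameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of chaotic watermark encryption (assumed logistic map,
# NOT the paper's actual system).
import numpy as np

def logistic_keystream(length: int, x0: float, r: float = 3.99) -> np.ndarray:
    """Binary keystream from iterating the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x = x0
    bits = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def encrypt_watermark(watermark_bits: np.ndarray, x0: float, r: float = 3.99) -> np.ndarray:
    """XOR the identity-perceptual watermark bits with the chaotic keystream."""
    return watermark_bits ^ logistic_keystream(len(watermark_bits), x0, r)

# Example: a 128-bit identity watermark encrypted with a secret initial condition.
wm = np.random.randint(0, 2, size=128, dtype=np.uint8)
cipher = encrypt_watermark(wm, x0=0.7319)
```

With the secret initial condition, applying the same XOR again recovers the watermark; without it, the keystream is hard to predict because the logistic map is highly sensitive to its initial condition, which is the property chaotic encryption schemes rely on.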


Key findings
The proposed method achieves state-of-the-art watermark recovery accuracy (above 96%) and deepfake detection performance (AUC scores above 97%) under cross-dataset and cross-manipulation settings. It significantly outperforms existing passive deepfake detection methods and robust watermarking techniques.
Approach
The authors propose a watermarking framework that embeds identity-based watermarks into images using an encoder-decoder network trained with adversarial examples. Detection and source tracing are achieved by comparing the recovered watermark with the expected watermark based on the image content.
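To make the consistency check concrete, the following is a minimal sketch of how detection and source tracing could be scored from a recovered watermark. The function names and the 0.96 threshold are illustrative assumptions (chosen to echo the reported recovery accuracy above 96%), not a decision rule taken from the paper.

```python
# Hedged sketch of the watermark consistency check for detection and tracing.
import numpy as np

def bit_accuracy(recovered: np.ndarray, expected: np.ndarray) -> float:
    """Fraction of matching bits between the recovered and expected watermarks."""
    return float(np.mean(recovered == expected))

def verify_image(recovered_wm: np.ndarray, identity_wm: np.ndarray, tau: float = 0.96):
    """Compare the watermark recovered from the image with the identity-perceptual
    watermark expected from the visible face. A mismatch flags a face-swapped image,
    while the recovered watermark itself still points back to the source identity."""
    acc = bit_accuracy(recovered_wm, identity_wm)
    return acc >= tau, acc
```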
Datasets
CelebA-HQ, LFW
Model(s)
Encoder-decoder network with convolutional neural networks (CNNs), squeeze-and-excitation networks (SENets), and diffusion blocks; a discriminator; and a chaotic encryption system.
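For orientation, below is a minimal PyTorch sketch of the named building blocks: a squeeze-and-excitation block inside a convolutional watermark encoder, plus a small decoder. Channel counts, the watermark length, and the overall layout are assumptions for illustration; the authors' actual architecture (including the diffusion blocks and the discriminator) is not reproduced here.

```python
# Minimal sketch of SE-augmented watermark encoder/decoder blocks (illustrative only).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise reweighting of convolutional features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze: global average pooling
        return x * w.unsqueeze(-1).unsqueeze(-1)   # excite: rescale each channel

class WatermarkEncoder(nn.Module):
    """Embeds a binary watermark into an RGB image and outputs a watermarked image."""
    def __init__(self, wm_len: int = 128, channels: int = 64):
        super().__init__()
        self.wm_proj = nn.Linear(wm_len, 32 * 32)  # project watermark to a spatial map
        self.body = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(inplace=True),
            SEBlock(channels),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, image, wm):
        b, _, h, w = image.shape
        wm_map = self.wm_proj(wm).view(b, 1, 32, 32)
        wm_map = nn.functional.interpolate(wm_map, size=(h, w), mode="nearest")
        residual = self.body(torch.cat([image, wm_map], dim=1))
        return image + residual                    # watermarked image

class WatermarkDecoder(nn.Module):
    """Recovers the watermark bits (as logits) from a possibly manipulated image."""
    def __init__(self, wm_len: int = 128, channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            SEBlock(channels),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(channels, wm_len)

    def forward(self, image):
        return self.head(self.features(image).flatten(1))

# Example: embed a random 128-bit watermark into a batch of 256x256 images.
enc, dec = WatermarkEncoder(), WatermarkDecoder()
imgs = torch.rand(2, 3, 256, 256)
wm = torch.randint(0, 2, (2, 128)).float()
stego = enc(imgs, wm)
logits = dec(stego)   # train with BCEWithLogitsLoss against wm
```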
Author countries
Singapore, China