Facial Features Matter: a Dynamic Watermark based Proactive Deepfake Detection Approach

Authors: Shulin Lan, Kanlin Liu, Yazhou Zhao, Chen Yang, Yingchao Wang, Xingshan Yao, Liehuang Zhu

Published: 2024-11-22 08:49:08+00:00

AI Summary

This paper proposes FaceProtect, a proactive deepfake detection method using dynamic watermarks based on facial features. It introduces a GAN-based one-way dynamic watermark generation mechanism and a watermark verification strategy to detect deepfakes by analyzing changes in facial characteristics during manipulation.

Abstract

Current passive deepfake face-swapping detection methods encounter significant bottlenecks in model generalization capability. Meanwhile, proactive detection methods often use fixed watermarks, which lack a close relationship with the content they protect and are vulnerable to security risks. Dynamic watermarks based on facial features offer a promising solution, as these features provide unique identifiers. Therefore, this paper proposes a Facial Feature-based Proactive deepfake detection method (FaceProtect), which utilizes changes in facial characteristics during deepfake manipulation as a novel detection mechanism. We introduce a GAN-based One-way Dynamic Watermark Generating Mechanism (GODWGM) that uses 128-dimensional facial feature vectors as inputs. This method creates irreversible mappings from facial features to watermarks, enhancing protection against various reverse inference attacks. Additionally, we propose a Watermark-based Verification Strategy (WVS) that combines steganography with GODWGM, allowing simultaneous transmission of the benchmark watermark representing facial features within the image. Experimental results demonstrate that our proposed method maintains exceptional detection performance and exhibits high practicality on images altered by various deepfake techniques.
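
The one-way property described above can be illustrated with a minimal sketch. The paper's GODWGM uses a WGAN-GP generator; here, as a stand-in only, a cryptographic hash maps a 128-dimensional feature vector to a binary watermark, which shows the same irreversibility (the watermark cannot be inverted back to the facial features). The function name, watermark size, and quantization step are illustrative assumptions, not the paper's design.

```python
import hashlib

import numpy as np


def one_way_watermark(features: np.ndarray, size: int = 256) -> np.ndarray:
    """Map a 128-dim facial feature vector to a binary watermark.

    Toy stand-in for the paper's GAN-based generator (GODWGM): a
    cryptographic hash provides the same one-way property, so the
    watermark reveals nothing about the underlying features.
    """
    assert features.shape == (128,)
    # Coarse quantization (illustrative) so near-identical feature
    # vectors map to the same digest despite tiny numeric noise.
    quantized = np.round(features, 2).tobytes()
    digest = hashlib.sha256(quantized).digest()  # 32 bytes = 256 bits
    # Expand the digest into a 0/1 watermark bit array.
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:size]


feats = np.random.default_rng(0).normal(size=128)
wm = one_way_watermark(feats)
print(wm.shape, wm.dtype)  # (256,) uint8
```

A real deployment would replace the hash with the trained generator, which additionally yields image-like watermarks suitable for steganographic embedding.
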


Key findings

FaceProtect demonstrates high detection accuracy and generalizability across various deepfake techniques. The use of dynamic watermarks based on facial features significantly improves performance compared to methods using fixed watermarks or sequences. The proposed method outperforms several state-of-the-art passive deepfake detection methods.

Approach

FaceProtect embeds dynamic watermarks generated from 128-dimensional facial feature vectors into images using a GAN and steganography. Deepfakes are detected by comparing the recovered watermark with a watermark regenerated from the facial features of the potentially altered image.
Datasets

CelebA (for training GODWGM and WVS), MNIST (for training WGAN-GP in GODWGM), and datasets generated using InfoSwap, SimSwap, StyleGAN2, and AttGAN (for testing).

Model(s)

WGAN-GP (for dynamic watermark generation), U-Net and SENet (for watermark embedding and extraction), Convolutional Neural Network (for watermark recovery).

Author countries

China