All in One: Unifying Deepfake Detection, Tampering Localization, and Source Tracing with a Robust Landmark-Identity Watermark

Authors: Junjiang Wu, Liejun Wang, Zhiqing Guo

Published: 2026-02-26 21:57:15+00:00

Comment: Accepted by CVPR 2026

AI Summary

This paper introduces LIDMark, a unified proactive forensics framework designed for deepfake detection, tampering localization, and source tracing. It embeds a 152-dimensional landmark-identity watermark into images and employs a novel Factorized-Head Decoder (FHD) to robustly extract it. The framework utilizes an "intrinsic-extrinsic" consistency check for detection and localization based on recovered facial landmarks, while a classification head decodes a source identifier for tracing.

Abstract

With the rapid advancement of deepfake technology, malicious face manipulations pose a significant threat to personal privacy and social security. However, existing proactive forensics methods typically treat deepfake detection, tampering localization, and source tracing as independent tasks, lacking a unified framework to address them jointly. To bridge this gap, we propose a unified proactive forensics framework that jointly addresses these three core tasks. Our core framework adopts an innovative 152-dimensional landmark-identity watermark termed LIDMark, which structurally interweaves facial landmarks with a unique source identifier. To robustly extract the LIDMark, we design a novel Factorized-Head Decoder (FHD). Its architecture factorizes the shared backbone features into two specialized heads (i.e., regression and classification), robustly reconstructing the embedded landmarks and identifier, respectively, even when subjected to severe distortion or tampering. This design realizes an all-in-one trifunctional forensic solution: the regression head underlies an intrinsic-extrinsic consistency check for detection and localization, while the classification head robustly decodes the source identifier for tracing. Extensive experiments show that the proposed LIDMark framework provides a unified, robust, and imperceptible solution for the detection, localization, and tracing of deepfake content. The code is available at https://github.com/vpsg-research/LIDMark.
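The abstract describes the Factorized-Head Decoder as a shared backbone factorized into a regression head (for landmarks) and a classification head (for the source identifier). A minimal sketch of that two-head layout is below; the backbone layers, widths, and the 136/16 split of the 152-D watermark are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class FactorizedHeadDecoder(nn.Module):
    """Sketch of the FHD idea: one shared backbone, two specialized heads.
    All layer sizes and the landmark/identifier split are assumptions."""

    def __init__(self, landmark_dims=136, id_bits=16):
        super().__init__()
        # Shared feature extractor (hypothetical small CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head: reconstructs embedded landmark coordinates.
        self.regression_head = nn.Linear(128, landmark_dims)
        # Classification head: decodes the source-identifier bits.
        self.classification_head = nn.Linear(128, id_bits)

    def forward(self, x):
        feats = self.backbone(x)
        landmarks = self.regression_head(feats)
        id_probs = torch.sigmoid(self.classification_head(feats))
        return landmarks, id_probs
```

Factorizing into two heads lets each output keep its own loss and robustness profile: regression can tolerate small coordinate drift, while the classification head is trained to survive heavy distortion.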


Key findings
The LIDMark framework achieves superior visual imperceptibility while embedding a higher-capacity watermark than baseline methods. It delivers robust deepfake detection (AUC 0.9388) and accurate tampering localization via the intrinsic-extrinsic consistency check. Furthermore, it attains the lowest average Bit Error Rate (BER) for source tracing across common distortions and deepfake manipulations, indicating strong generalization.
Approach
The proposed framework embeds a composite 152-D watermark (LIDMark) consisting of tamper-sensitive facial landmarks and a robust source identifier into an image. A Factorized-Head Decoder (FHD), with specialized regression and classification heads, is then used to robustly extract these two components. Deepfake detection and tampering localization are achieved by an "intrinsic-extrinsic" consistency check comparing the recovered intrinsic landmarks with re-detected extrinsic landmarks, while source tracing uses the decoded identifier.
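The intrinsic-extrinsic consistency check described above can be sketched as a per-landmark comparison: landmarks recovered from the watermark (intrinsic) are matched against landmarks re-detected from the received image (extrinsic), and large deviations flag tampered regions. The 76-point layout and the threshold value are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def consistency_check(recovered, redetected, tau=0.05):
    """Intrinsic-extrinsic consistency check (illustrative sketch).

    recovered, redetected: (76, 2) arrays of normalized (x, y)
    landmark coordinates (76 points x 2 = 152-D is an assumed split).
    tau: hypothetical per-landmark distance threshold.
    """
    # Per-landmark drift between watermark-recovered and re-detected points.
    drift = np.linalg.norm(recovered - redetected, axis=1)
    # Localization: landmarks that moved beyond the threshold.
    tampered_mask = drift > tau
    # Detection: any inconsistency implies manipulation.
    is_fake = bool(tampered_mask.any())
    return is_fake, tampered_mask
```

In this sketch, an untouched image yields near-zero drift everywhere, while a face swap or expression edit displaces a cluster of landmarks, which both triggers detection and localizes the edited region.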
Datasets
CelebA-HQ, LFW
Model(s)
UNKNOWN
Author countries
China