X-Edit: Detecting and Localizing Edits in Images Altered by Text-Guided Diffusion Models

Authors: Valentina Bazyleva, Nicolo Bonettini, Gaurav Bharaj

Published: 2025-05-16 23:29:38+00:00

AI Summary

X-Edit is a novel method for localizing diffusion-based edits in images. It inverts the image using a pretrained diffusion model, then feeds the inverted features to a segmentation network that predicts the edited regions. The network is finetuned with a combined segmentation and relevance loss, which improves the localization of subtle edits.

Abstract

Text-guided diffusion models have significantly advanced image editing, enabling highly realistic and local modifications based on textual prompts. While these developments expand creative possibilities, their malicious use poses substantial challenges for detection of such subtle deepfake edits. To this end, we introduce Explain Edit (X-Edit), a novel method for localizing diffusion-based edits in images. To localize the edits for an image, we invert the image using a pretrained diffusion model, then use these inverted features as input to a segmentation network that explicitly predicts the edited masked regions via channel and spatial attention. Further, we finetune the model using a combined segmentation and relevance loss. The segmentation loss ensures accurate mask prediction by balancing pixel-wise errors and perceptual similarity, while the relevance loss guides the model to focus on low-frequency regions and mitigate high-frequency artifacts, enhancing the localization of subtle edits. To the best of our knowledge, we are the first to address and model the problem of localizing diffusion-based modified regions in images. We additionally contribute a new dataset of paired original and edited images addressing the current lack of resources for this task. Experimental results demonstrate that X-Edit accurately localizes edits in images altered by text-guided diffusion models, outperforming baselines in PSNR and SSIM metrics. This highlights X-Edit's potential as a robust forensic tool for detecting and pinpointing manipulations introduced by advanced image editing techniques.


Key findings
X-Edit accurately localizes edits in images altered by text-guided diffusion models, outperforming baselines in PSNR and SSIM metrics. The finetuned X-Edit model shows superior performance in localizing edits, especially in complex regions, while maintaining low false positive rates on unedited images.
Approach
X-Edit inverts an image using a pretrained diffusion model, then feeds the inverted features and other image information to a U-Net with CBAM blocks for segmentation. A combined segmentation and relevance loss function is used during finetuning to enhance localization accuracy, particularly for subtle edits.
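The combined objective can be illustrated with a minimal numpy sketch. The exact terms are not specified in this summary, so the weights, the structural proxy standing in for the perceptual-similarity term, and the box-blur low-pass filter used by the relevance term are all illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def segmentation_loss(pred, target, alpha=0.5):
    """Pixel-wise L1 error blended with a crude structural term.
    alpha and the structural proxy (difference of global means, a
    stand-in for a perceptual/SSIM-style term) are assumptions."""
    pixel = np.mean(np.abs(pred - target))
    struct = abs(pred.mean() - target.mean())
    return alpha * pixel + (1.0 - alpha) * struct

def relevance_loss(pred, k=3):
    """Penalize high-frequency content in the predicted mask by
    comparing it to a box-blurred (low-pass) copy of itself."""
    pad = k // 2
    padded = np.pad(pred, pad, mode="edge")
    low = np.zeros_like(pred)
    h, w = pred.shape
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    # Residual energy above the low-pass copy = high-frequency content.
    return np.mean((pred - low) ** 2)

def total_loss(pred, target, lam=0.1):
    """Combined objective: segmentation accuracy plus a relevance
    penalty steering the mask toward low-frequency regions.
    lam is a hypothetical weighting, not from the paper."""
    return segmentation_loss(pred, target) + lam * relevance_loss(pred)
```

A perfectly predicted, spatially smooth mask drives both terms to zero; a noisy mask keeps paying the relevance penalty even when its average error is low, which is the intended pressure toward low-frequency predictions.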
Datasets
A new dataset of 167,026 paired original and edited images, created with InstructPix2Pix from images in the LAION-Aesthetics V2 6.5+ dataset; the Flickr30k dataset is used for out-of-distribution testing.
Model(s)
U-Net with Convolutional Block Attention Modules (CBAM), compared against SAM, SegFormer, and ViT-B.
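The channel-then-spatial attention of a CBAM block can be sketched in numpy. This is a simplified illustration: the shared MLP in the channel branch and the 7x7 convolution in the spatial branch of the original CBAM are omitted here (assumptions for brevity), leaving only the dual avg/max pooling and sigmoid gating structure:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    """Gate each channel using avg- and max-pooled spatial statistics.
    (CBAM's shared MLP is replaced by an identity here.)"""
    # x: (C, H, W)
    avg = x.mean(axis=(1, 2))            # (C,)
    mx = x.max(axis=(1, 2))              # (C,)
    gate = sigmoid(avg + mx)             # per-channel weight in (0, 1)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Gate each spatial position using channel-pooled statistics.
    (CBAM's 7x7 conv over the pooled maps is omitted here.)"""
    avg = x.mean(axis=0)                 # (H, W)
    mx = x.max(axis=0)                   # (H, W)
    gate = sigmoid(avg + mx)             # per-pixel weight in (0, 1)
    return x * gate[None, :, :]

def cbam_block(x):
    """CBAM ordering: channel attention first, then spatial."""
    return spatial_attention(channel_attention(x))
```

In the segmentation U-Net, such blocks re-weight feature maps so that channels and spatial positions carrying edit evidence are emphasized before mask prediction.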
Author countries
UNKNOWN