Can Generative Models Actually Forge Realistic Identity Documents?

Authors: Alexander Vinogradov

Published: 2025-12-25 00:56:50+00:00

Comment: 11 pages, 16 figures

AI Summary

This paper investigates whether contemporary open-source diffusion-based generative models can produce identity document forgeries realistic enough to bypass human or automated verification. It finds that while these models can simulate surface-level document aesthetics, they consistently fail to reproduce structural and forensic authenticity. The study concludes that the risk of generative identity document deepfakes achieving forensic-level authenticity may be overestimated in their current out-of-the-box form.

Abstract

Generative image models have recently shown significant progress in image realism, leading to public concerns about their potential misuse for document forgery. This paper explores whether contemporary open-source and publicly accessible diffusion-based generative models can produce identity document forgeries that could realistically bypass human or automated verification systems. We evaluate text-to-image and image-to-image generation pipelines using multiple publicly available generative model families, including Stable Diffusion, Qwen, Flux, Nano-Banana, and others. The findings indicate that while current generative models can simulate surface-level document aesthetics, they fail to reproduce structural and forensic authenticity. Consequently, the risk of generative identity document deepfakes achieving forensic-level authenticity may be overestimated, underscoring the value of collaboration between machine learning practitioners and document-forensics experts in realistic risk assessment.


Key findings
Generative models can reproduce the overall visual structure and aesthetics of identity documents but systematically fail to replicate fine-grained material properties, manufacturing characteristics, and security features. Generated documents consistently exhibit distinct digital artifacts, unstable typography, and oversmoothed textures, indicating they cannot achieve forensic-level authenticity. The risk of out-of-the-box generative models producing fully authentic identity document forgeries is therefore currently overestimated, especially against verification systems that incorporate material- and texture-level analysis.
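The texture-level analysis mentioned above can be illustrated with a classic, very simple heuristic (not taken from the paper): the variance of a Laplacian filter response, which is low for oversmoothed, low-texture regions and high for regions with genuine fine-grained detail. The function name and the synthetic test patches below are hypothetical, for illustration only.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response over a grayscale patch.

    Low values suggest an oversmoothed (low-texture) region, one of the
    artifacts reported for diffusion-generated documents.
    """
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=np.float64)
    # Valid-mode 2D convolution via sliding windows.
    windows = np.lib.stride_tricks.sliding_window_view(
        gray.astype(np.float64), (3, 3))
    response = np.einsum('ijkl,kl->ij', windows, kernel)
    return float(response.var())

# Compare a noisy, high-frequency patch with a perfectly flat one.
rng = np.random.default_rng(0)
textured = rng.uniform(0, 255, size=(64, 64))
smooth = np.full((64, 64), 128.0)  # flat region, no texture at all
print(laplacian_variance(textured) > laplacian_variance(smooth))  # True
```

A real verification system would combine many such forensic cues (typography stability, print-pattern analysis, security features); this sketch only shows why oversmoothed textures are mechanically detectable.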
Approach
The authors evaluate text-to-image and image-to-image generation pipelines using multiple publicly available diffusion-based generative models (e.g., Stable Diffusion, Qwen, Flux). They generate identity document forgeries through various scenarios, including full document generation, blending templates into backgrounds, and manipulating portraits and text. The generated outputs are then assessed from a manual document-forensics perspective for realism and authenticity.
Datasets
Privately owned identity documents and physical plastic cards belonging to the author, used for illustrative and controlled examples.
Model(s)
UNKNOWN
Author countries
Germany