The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth

Authors: Emilio Ferrara

Published: 2026-01-01 10:58:51+00:00

AI Summary

This paper argues that Generative AI (GenAI) creates "synthetic realities" across text, images, audio, and video, leading to a systemic erosion of trust rather than just isolated deepfakes. It formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), expands a taxonomy of GenAI harms, and details qualitative shifts introduced by the technology. The paper concludes by proposing a multi-layered mitigation stack and a research agenda to address the "Generative AI Paradox," where societies may rationally discount digital evidence altogether.

Abstract

Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as deepfakes or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities: coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023-2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether.


Key findings
The most consequential risk of GenAI is the progressive erosion of shared epistemic ground and institutional verification practices, leading to "synthetic realities" that are difficult to audit. This culminates in the "Generative AI Paradox," where the ubiquity and indistinguishability of synthetic media may cause societies to rationally discount digital evidence altogether. Mitigation requires a layered approach, including provenance infrastructure, platform governance, institutional workflow redesign, and public resilience, supported by research into epistemic security metrics.
Approach
The paper addresses the problem by formalizing "synthetic reality" as a layered stack and expanding a taxonomy of GenAI harms across personal, economic, informational, and socio-technical risks. It articulates the qualitative shifts introduced by GenAI and synthesizes recent risk realizations into a case bank. A multi-layered mitigation stack is proposed, alongside a research agenda focused on measuring epistemic security.
Datasets
UNKNOWN
Model(s)
UNKNOWN
Author countries
USA