Characterizing AI-Generated Misinformation on Social Media
Authors: Chiara Drolsbach, Nicolas Pröllochs
Published: 2025-05-15 13:18:04+00:00
AI Summary
This study performs a large-scale empirical analysis of AI-generated misinformation on the social media platform X, examining 91,452 misleading posts flagged through X's Community Notes. The research identifies how AI-generated misinformation differs from conventional misinformation in terms of content attributes, source accounts, virality, and believability.
Abstract
AI-generated misinformation (e.g., deepfakes) poses a growing threat to information integrity on social media. However, prior research has largely focused on its potential societal consequences rather than its real-world prevalence. In this study, we conduct a large-scale empirical analysis of AI-generated misinformation on the social media platform X. Specifically, we analyze a dataset comprising N=91,452 misleading posts, both AI-generated and non-AI-generated, that have been identified and flagged through X's Community Notes platform. Our analysis yields four main findings: (i) AI-generated misinformation is more often centered on entertaining content and tends to exhibit a more positive sentiment than conventional forms of misinformation, (ii) it is more likely to originate from smaller user accounts, (iii) despite this, it is significantly more likely to go viral, and (iv) it is slightly less believable and harmful compared to conventional misinformation. Altogether, our findings highlight the unique characteristics of AI-generated misinformation on social media. We discuss important implications for platforms and future research.