Culling Misinformation from Gen AI: Toward Ethical Curation and Refinement

Authors: Prerana Khatiwada, Grace Donaher, Jasymyn Navarro, Lokesh Bhatta

Published: 2025-07-17 21:19:47+00:00

AI Summary

This paper analyzes the ethical concerns surrounding the use of generative AI, particularly ChatGPT and deepfakes, focusing on the spread of misinformation and exacerbation of social inequities. It proposes guidelines and policy considerations for mitigating these risks while fostering innovation, emphasizing collaboration among users, developers, and government entities.

Abstract

While Artificial Intelligence (AI) is not a new field, recent developments, especially the release of generative tools like ChatGPT, have brought it to the forefront for industry professionals and academics alike. Much of the current discussion centers on AI's ability to reshape everyday processes through automation; these tools can also help users expand their ideas by suggesting possibilities they may not have considered on their own and can provide easier access to information. However, not all of the changes this technology has brought, or will bring, are positive, which is why it is critical that users recognize and understand the risks before these tools are allowed to cause harm. This work takes a position on better understanding the equity concerns and the spread of misinformation that result from new AI, specifically ChatGPT and deepfakes, and encourages collaboration among law enforcement, developers, and users to reduce harm. Drawing on a range of academic sources, it warns against these issues, analyzing their causes and impacts in fields including healthcare, education, science, academia, retail, and finance. Finally, we propose a set of forward-looking guidelines and policy considerations to address these issues while still enabling innovation, a responsibility that falls on users, developers, and government entities.


Key findings

The paper highlights the potential for generative AI to be misused to spread misinformation and deepen social inequities. It finds that multi-faceted solutions involving technological, legislative, and educational approaches are needed, and that collaboration among users, developers, and government entities is crucial for responsible AI development and deployment.
Approach

The paper offers a framework for the ethical curation and refinement of generative AI, proposing solutions such as civic model registries, semantic provenance tracking, counter-generative systems, and AI literacy modules. It also advocates for stricter legislation and policy coherence to address misuse and hold bad actors accountable.
Datasets

UNKNOWN

Model(s)

UNKNOWN

Author countries

USA