Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference

Author: Emilio Ferrara

Published: 2024-06-04 00:26:12+00:00

AI Summary

This paper examines the malicious uses of Generative AI (GenAI) in online election interference, focusing on deepfakes, botnets, targeted misinformation campaigns, and synthetic identities. It highlights the urgent need for mitigation strategies and international cooperation to protect democratic integrity.

Abstract

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) pose significant risks, particularly in the realm of online election interference. This paper explores the nefarious applications of GenAI, highlighting their potential to disrupt democratic processes through deepfakes, botnets, targeted misinformation campaigns, and synthetic identities. By examining recent case studies and public incidents, we illustrate how malicious actors exploit these technologies in attempts to influence voter behavior, spread disinformation, and undermine public trust in electoral systems. The paper also discusses the societal implications of these threats, emphasizing the urgent need for robust mitigation strategies and international cooperation to safeguard democratic integrity.


Key findings
GenAI poses significant threats to democratic processes through several vectors, including deepfakes, botnets, and targeted misinformation. The paper emphasizes the need for a multi-faceted mitigation approach involving regulation, technological solutions, public awareness, and international cooperation. Because GenAI offers both benefits and risks, its development and deployment require a balanced approach.
Approach
The author conducted a systematic literature review across academic databases (Google Scholar, Scopus, IEEE Xplore) and examined case studies to identify nefarious applications of GenAI in online election interference. The review analyzes the threats posed by different GenAI modalities (text, audio, video) and proposes mitigation strategies.
Datasets
UNKNOWN. The paper mentions several case studies and real-world examples, but does not specify the use of a particular dataset for model training or evaluation.
Model(s)
UNKNOWN. The paper discusses Generative Adversarial Networks (GANs) for deepfake creation and Large Language Models (LLMs) for text generation, but does not specify any particular model architectures, including for deepfake detection.
Author countries
USA