The World of Generative AI: Deepfakes and Large Language Models

Authors: Alakananda Mitra, Saraju P. Mohanty, Elias Kougianos

Published: 2024-02-06 20:18:32+00:00

AI Summary

This paper explores the relationship between deepfakes and large language models (LLMs) within the context of generative AI. It highlights the increasing threat deepfakes pose to society due to their ability to spread misinformation, and examines how LLMs, particularly chatbots like ChatGPT, can be used to enhance the creation of more realistic and convincing deepfakes.

Abstract

We live in the era of Generative Artificial Intelligence (GenAI). Deepfakes and Large Language Models (LLMs) are two examples of GenAI. Deepfakes, in particular, pose an alarming threat to society, as they are capable of spreading misinformation and distorting the truth. LLMs are powerful language models that generate general-purpose language. However, due to their generative nature, they can also pose a risk to people when used with ill intent. The ethical use of these technologies is a major concern. This short article examines the interrelationship between them.


Key findings
The combination of deepfake technology and LLMs significantly increases the realism and ease of creating deepfakes, enhancing their potential for malicious use and exacerbating the spread of misinformation. Existing efforts to combat deepfakes are insufficient, highlighting the need for stronger regulations and further research in deepfake detection and prevention.
Approach
The paper analyzes the interplay between deepfake technology (using GANs and autoencoders) and LLMs (like ChatGPT) in creating realistic synthetic media. It discusses how LLMs can generate more convincing audio for deepfake videos, making them harder to detect and increasing the risk of misinformation.
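The classic autoencoder-based face-swap pipeline mentioned above can be illustrated with a minimal sketch: a single shared encoder is trained on two identities, each with its own decoder, and the swap comes from decoding identity A's latent code with identity B's decoder. The weights, dimensions, and function names below are illustrative assumptions (untrained toy linear maps), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "autoencoder" weights (untrained, for illustration only).
DIM, LATENT = 64, 8
W_enc = rng.normal(size=(DIM, LATENT))    # shared encoder, used for both identities
W_dec_a = rng.normal(size=(LATENT, DIM))  # decoder specialized to identity A
W_dec_b = rng.normal(size=(LATENT, DIM))  # decoder specialized to identity B

def encode(x):
    # Shared encoder maps a face frame to a low-dimensional latent code.
    return x @ W_enc

def decode(z, W_dec):
    # Identity-specific decoder reconstructs a face from the latent code.
    return z @ W_dec

# Face-swap trick: encode a frame of identity A with the shared encoder,
# then reconstruct it with identity B's decoder.
face_a = rng.normal(size=(1, DIM))
latent = encode(face_a)
swapped = decode(latent, W_dec_b)

print(latent.shape, swapped.shape)  # (1, 8) (1, 64)
```

In a real system, the encoder/decoders are deep convolutional networks trained to reconstruct each identity, and GANs add an adversarial loss that pushes the swapped output toward photorealism; this sketch only shows the shared-encoder / per-identity-decoder structure.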
Datasets
UNKNOWN
Model(s)
Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), GPT-3, GPT-3.5, GPT-4, PaLM-2, LaMDA, Prometheus model, LLaMA-2, Ferret
Author countries
USA, India, Greece