How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception

Authors: Sahibzada Adil Shahzad, Ammarah Hashmi, Yan-Tsung Peng, Yu Tsao, Hsin-Min Wang

Published: 2024-11-14 08:07:02+00:00

AI Summary

This research explores the use of ChatGPT, a large language model, for audiovisual deepfake detection. The study compares ChatGPT's performance against state-of-the-art multimodal deepfake detection models and human perception, highlighting the importance of prompt engineering and showcasing both the strengths and limitations of LLMs in this context.

Abstract

Multimodal deepfakes involving audiovisual manipulations are a growing threat because they are difficult to detect with the naked eye or using unimodal deep learning-based forgery detection methods. Audiovisual forensic models, while more capable than unimodal models, require large training datasets and are computationally expensive for training and inference. Furthermore, these models lack interpretability and often do not generalize well to unseen manipulations. In this study, we examine the detection capabilities of a large language model (LLM) (i.e., ChatGPT) to identify and account for any possible visual and auditory artifacts and manipulations in audiovisual deepfake content. Extensive experiments are conducted on videos from a benchmark multimodal deepfake dataset to evaluate the detection performance of ChatGPT and compare it with the detection capabilities of state-of-the-art multimodal forensic models and humans. Experimental results demonstrate the importance of domain knowledge and prompt engineering for video forgery detection tasks using LLMs. Unlike approaches based on end-to-end learning, ChatGPT can account for spatial and spatiotemporal artifacts and inconsistencies that may exist within or across modalities. Additionally, we discuss the limitations of ChatGPT for multimedia forensic tasks.


Key findings

With well-crafted prompts, ChatGPT reached up to 65% detection accuracy, comparable to human performance but well below state-of-the-art multimodal AI models, which achieved accuracies above 87%. The study emphasizes the importance of prompt engineering for effective deepfake detection using LLMs.
Approach

The authors evaluated ChatGPT's ability to detect deepfakes by presenting it with videos from a benchmark dataset together with a range of prompts. Detection performance was assessed for each prompt type and compared against human evaluations and existing AI models. The analysis considered visual cues, audio cues, and their interaction across modalities. A minimal sketch of such a prompt-based querying setup is given below.
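The paper does not publish its exact prompts or querying pipeline, so the following is only a minimal sketch of how a prompt-based check of this kind could be run against the OpenAI API. The model name (gpt-4o), the prompt wording, the frame count, the example filename, and the helper functions sample_frames and classify_video are illustrative assumptions rather than the authors' setup; the study also probed audio and audiovisual consistency cues, which this visual-only sketch omits.

```python
# Illustrative sketch (not the paper's protocol): sample frames from a video,
# attach them to a forgery-detection prompt, and ask a multimodal OpenAI model
# for a real/fake verdict. Model, prompt, and sampling rate are assumptions.
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai

PROMPT = (
    "You are a multimedia forensics expert. Inspect these video frames for "
    "visual artifacts such as blending boundaries, unnatural lip movement, "
    "or lighting inconsistencies. Answer 'real' or 'fake' and explain why."
)


def sample_frames(video_path: str, num_frames: int = 8) -> list[str]:
    """Uniformly sample frames and return them as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / num_frames))
        ok, frame = cap.read()
        if not ok:
            continue
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            frames.append(base64.b64encode(buf).decode("utf-8"))
    cap.release()
    return frames


def classify_video(video_path: str, model: str = "gpt-4o") -> str:
    """Send the prompt plus sampled frames and return the model's verdict."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    content = [{"type": "text", "text": PROMPT}]
    for b64 in sample_frames(video_path):
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical filename; any clip from a multimodal deepfake benchmark
    # such as FakeAVCeleb could be substituted here.
    print(classify_video("sample_clip.mp4"))
```

Uniform frame sampling keeps the request small; in practice the number of frames, the prompt phrasing, and whether audio information is also supplied strongly influence the verdicts returned, which is consistent with the paper's emphasis on prompt engineering.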
Datasets
FakeAVCeleb dataset
Model(s)
ChatGPT (OpenAI's GPT-4)
Author countries
Taiwan