Deepfake Labels Restore Reality, Especially for Those Who Dislike the Speaker

Authors: Nathan L. Tenhundfeld, Ryan Weber, William I. MacKenzie, Hannah M. Barr, Candice Lanius

Published: 2024-03-29 17:11:13+00:00

AI Summary

This study investigated whether labeling videos of US President Biden as actual or deepfake helped participants later distinguish real statements from fake ones. Participants accurately recalled 93.8% of deepfake videos and 84.2% of actual videos, suggesting that labeling videos can aid in combating misinformation.

Abstract

Deepfake videos create dangerous possibilities for public misinformation. In this experiment (N=204), we investigated whether labeling videos as containing actual or deepfake statements from US President Biden helps participants later differentiate between true and fake information. People accurately recalled 93.8% of deepfake videos and 84.2% of actual videos, suggesting that labeling videos can help combat misinformation. Individuals who identify as Republican and had lower favorability ratings of Biden performed better in distinguishing between actual and deepfake videos, a result explained by the elaboration likelihood model (ELM), which predicts that people who distrust a message source will more critically evaluate the message.


Key findings
Participants showed high accuracy in recalling whether statements came from deepfake or actual videos (93.8% and 84.2%, respectively). Republicans and individuals with lower favorability ratings of Biden performed better at distinguishing the two, possibly because of increased message scrutiny, as predicted by the elaboration likelihood model. Participants also showed a bias toward labeling videos as deepfakes even when the videos were authentic (see the sketch below).
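The asymmetry between the two accuracy figures (93.8% for deepfakes vs. 84.2% for actual videos) is what signals this response bias. The summary does not report a signal detection analysis; the Python sketch below is an illustrative reconstruction, not the authors' method. It treats "deepfake" as the signal category, so the hit rate is 0.938 and the false-alarm rate is 1 - 0.842 = 0.158; a negative criterion c corresponds to a lean toward answering "deepfake".

```python
from scipy.stats import norm

# Accuracy figures reported in the summary above
acc_deepfake = 0.938  # deepfake videos correctly recalled as deepfake
acc_actual = 0.842    # actual videos correctly recalled as actual

# Treat "deepfake" as the signal category
hit_rate = acc_deepfake          # responded "deepfake" to a deepfake video
false_alarm = 1.0 - acc_actual   # responded "deepfake" to an actual video

z_h, z_f = norm.ppf(hit_rate), norm.ppf(false_alarm)

d_prime = z_h - z_f              # sensitivity: how well the two classes are told apart
criterion = -0.5 * (z_h + z_f)   # response bias: negative => leans toward "deepfake"

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")  # d' ~ 2.54, c ~ -0.27
```

The negative criterion (about -0.27) is consistent with the reported tendency to call videos deepfakes even when they were authentic.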
Approach
The researchers created deepfake videos of President Biden using existing tools (see Model(s) below) and paired them with real videos of Biden discussing similar topics. Participants viewed the videos with labels indicating whether each was actual or a deepfake, then later recalled whether statements had come from real or deepfake videos. Recall accuracy was analyzed in relation to political affiliation and attitudes toward Biden; a hypothetical sketch of one such analysis follows.
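The summary does not specify the statistical model the authors used; as a hedged illustration, one common way to relate per-trial recall accuracy to party affiliation and favorability is a logistic regression, sketched below with statsmodels. The data frame and its column names (correct, republican, favorability) are hypothetical stand-ins for the study's variables, not values from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per participant-video trial.
# Column names and values are illustrative, not taken from the paper.
df = pd.DataFrame({
    "correct":      [1, 1, 0, 1, 0, 1, 1, 1],  # recalled the label correctly?
    "republican":   [1, 1, 0, 0, 1, 0, 1, 0],  # party affiliation (1 = Republican)
    "favorability": [2, 1, 5, 4, 2, 5, 1, 3],  # Biden favorability rating
})

# Does partisanship or favorability predict recall accuracy?
model = smf.logit("correct ~ republican + favorability", data=df).fit()
print(model.summary())
```

Under the paper's finding, such a model would show a positive coefficient on republican and a negative coefficient on favorability, since those groups distinguished actual from deepfake videos more accurately.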
Datasets
No public dataset is named; the authors generated their own dataset of deepfake and real videos of President Biden.
Model(s)
First Order Motion Model for Image Animation (FOMM-IA) and Celebrity Voice Changer Parody (CVCP) application.
Author countries
USA