Accuracy nudges are not effective against non-harmful deepfakes

Authors: Juan Jose Rojas-Constain

Published: 2024-10-18 05:26:30+00:00

AI Summary

This study investigates the effectiveness of accuracy nudges in reducing the sharing of deepfake videos. A survey experiment (n=525) showed that while accuracy nudges reduced the intention to share fake news headlines, they were not effective against a non-harmful AI-generated video.

Abstract

I conducted a preregistered survey experiment (n=525) to assess the effectiveness of accuracy nudges against deepfakes (osf.io/69x17). The results, based on a sample of Colombian participants, replicated previous findings showing that prompting participants to assess the accuracy of a headline at the beginning of the survey significantly decreased their intention to share fake news. However, this effect was not significant when applied to a non-harmful AI-generated video.


Key findings
Accuracy nudges effectively reduced the intention to share fake news headlines, replicating previous findings. However, this effect was not observed for a non-harmful deepfake video, suggesting that the effectiveness of accuracy nudges may depend on the type of misinformation and its perceived harmfulness.
Approach
A preregistered survey experiment was conducted with Colombian participants. Participants were exposed to either an accuracy nudge (a prompt to assess the truthfulness of a headline) or a control condition before viewing several media items, including a deepfake video. Their intention to share each item was then measured.
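The core analysis in such a between-subjects design is a comparison of mean sharing intention between the nudge and control groups. The sketch below is illustrative only: the function name, the 7-point ratings, and the group sizes are hypothetical and not taken from the study, which should be consulted (osf.io/69x17) for the preregistered analysis plan.

```python
import math
import statistics as st

def welch_t(treatment, control):
    """Welch's t statistic for the difference in mean sharing intention
    between the accuracy-nudge (treatment) and control groups.
    Uses unequal-variance standard errors, so group sizes may differ."""
    m1, m2 = st.mean(treatment), st.mean(control)
    v1, v2 = st.variance(treatment), st.variance(control)  # sample variance
    n1, n2 = len(treatment), len(control)
    se = math.sqrt(v1 / n1 + v2 / n2)
    return (m1 - m2) / se

# Hypothetical 1-7 sharing-intention ratings (NOT the study's data)
nudge   = [2, 3, 2, 4, 3, 2, 3, 2]
control = [4, 5, 4, 3, 5, 4, 4, 5]

t = welch_t(nudge, control)
print(round(t, 2))  # negative t: lower sharing intention under the nudge
```

A significant negative t for headlines but not for the deepfake video would correspond to the pattern of results the abstract reports.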
Datasets
UNKNOWN. Data collected from a survey experiment (n=525) with Colombian participants.
Model(s)
UNKNOWN. No deepfake detection models were used in this study. The focus was on evaluating the effectiveness of accuracy nudges, not on detection algorithms.
Author countries
Colombia