Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks

Authors: Juniper Lovato, Laurent Hébert-Dufresne, Jonathan St-Onge, Randall Harp, Gabriela Salazar Lopez, Sean P. Rogers, Ijaz Ul Haq, Jeremiah Onaolapo

Published: 2022-10-18 17:49:53+00:00

AI Summary

This research investigates how human biases affect the detection of deepfakes, a form of diverse misinformation. Using an observational survey (N = 2,016) in which participants view videos without knowing that some are deepfakes, the study analyzes how demographic matching between participants and deepfake personas influences detection accuracy. A mathematical model then extrapolates these findings to estimate population-level impacts.

Abstract

Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call diverse misinformation the complex relationships between human biases and demographics represented in misinformation. To investigate how users' biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: 1) their classification as misinformation is more objective; 2) we can control the demographics of the personas presented; 3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N=2,016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide herd correction where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.


Key findings
Unprimed participants identified deepfakes with only 51% accuracy, barely better than chance. Accuracy varied significantly with demographic matching: participants were better at classifying videos whose personas matched their own demographics (a homophily bias). A mathematical model suggests that diverse social networks may offer "herd correction," reducing susceptibility to misinformation.
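
As a back-of-the-envelope illustration of why diverse contacts could help (a sketch with hypothetical numbers, not figures from the paper): if each contact independently flags a deepfake with probability q_match when their demographics match the persona's and q_mismatch otherwise, the chance that at least one contact in a group flags it is 1 - prod(1 - q_i).

```python
# Toy "herd correction" calculation (hypothetical numbers, not from the paper).

def herd_detection_prob(qs):
    """Probability that at least one contact flags the video: 1 - prod(1 - q_i)."""
    p_all_miss = 1.0
    for q in qs:
        p_all_miss *= 1.0 - q
    return 1.0 - p_all_miss

# Assumed per-contact flag probabilities; the spread is exaggerated for visibility.
Q_MATCH, Q_MISMATCH = 0.6, 0.2

homogeneous = [Q_MISMATCH] * 5           # no contact matches the persona
diverse = [Q_MATCH] + [Q_MISMATCH] * 4   # one contact matches the persona

print(f"homogeneous group: {herd_detection_prob(homogeneous):.3f}")  # ~0.672
print(f"diverse group:     {herd_detection_prob(diverse):.3f}")      # ~0.836
```

Under these toy values, a single matching contact noticeably raises the group's odds of catching the fake, which mirrors the herd-correction intuition.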
Approach
The study uses an observational survey in which participants view videos (some of them deepfakes) and answer questions about the attributes of the people depicted, without being told that any videos are fake. Accuracy in identifying deepfakes is then analyzed against demographic factors (of both participant and video persona), prior knowledge of deepfakes, and social media usage. Finally, a mathematical model simulates the spread and correction of misinformation within a social network.
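
The summary does not give the model's equations, so the following is a minimal agent-based sketch of the kind of dynamics described. The detection probabilities, the correction rule, and the network model (a networkx Watts-Strogatz graph) are all illustrative assumptions rather than the paper's actual specification.

```python
"""Minimal agent-based sketch of diverse misinformation plus crowd correction.
An illustrative reconstruction, NOT the paper's model; all parameters are assumed.
"""
import random

import networkx as nx

Q_MATCH, Q_MISMATCH = 0.6, 0.1  # hypothetical detection probabilities
DEMOGRAPHICS = ["A", "B", "C"]

def run_trial(graph, demo_of, persona_demo, rng):
    """One deepfake exposure: who is duped, and who is rescued by friends?"""
    # Step 1: each node independently classifies the video, with higher
    # accuracy when its demographic matches the deepfake persona's.
    detects = {
        node: rng.random() < (Q_MATCH if demo_of[node] == persona_demo else Q_MISMATCH)
        for node in graph
    }
    # Step 2: crowd correction -- a duped node is rescued if at least one
    # neighbor detected the deepfake (a simple "herd correction" rule).
    corrected = {
        node
        for node in graph
        if not detects[node] and any(detects[nbr] for nbr in graph[node])
    }
    duped = sum(1 for node in graph if not detects[node] and node not in corrected)
    return duped / graph.number_of_nodes()

def experiment(mixed, n=1000, k=6, trials=200, seed=0):
    rng = random.Random(seed)
    graph = nx.watts_strogatz_graph(n, k, 0.1, seed=seed)
    if mixed:
        demo_of = {v: rng.choice(DEMOGRAPHICS) for v in graph}  # diverse contacts
    else:
        # Homophilous blocks: contiguous nodes share a demographic, so most
        # neighbors on the ring lattice match each other.
        demo_of = {v: DEMOGRAPHICS[v * len(DEMOGRAPHICS) // n] for v in graph}
    rate = sum(
        run_trial(graph, demo_of, rng.choice(DEMOGRAPHICS), rng) for _ in range(trials)
    ) / trials
    return rate

print("duped fraction, homophilous network:", round(experiment(mixed=False), 3))
print("duped fraction, diverse network:    ", round(experiment(mixed=True), 3))
```

In runs of this sketch, the homophilous network typically leaves substantially more nodes duped than the demographically mixed one, consistent with the paper's herd-correction finding.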
Datasets
Facebook Deepfake Detection Challenge (DFDC) Preview Dataset
Model(s)
UNKNOWN (no specific deepfake detection model is used in the main contribution; the focus is on human biases)
Author countries
U.S.A.