The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media

Authors: Claire Leibowicz, Sean McGregor, Aviv Ovadya

Published: 2021-02-11 16:44:09+00:00

AI Summary

This paper explores the "detection dilemma" in synthetic media, where improved detection methods are quickly circumvented by adversaries. It examines this dilemma through a multistakeholder lens, assessing detection contexts and adversary capabilities to inform detection process decisions and policies.

Abstract

Synthetic media detection technologies label media as either synthetic or non-synthetic and are increasingly used by journalists, web platforms, and the general public to identify misinformation and other forms of problematic content. As both well-resourced organizations and the non-technical general public generate more sophisticated synthetic media, the capacity for purveyors of problematic content to adapt induces a "detection dilemma": as detection practices become more accessible, they become more easily circumvented. This paper describes how a multistakeholder cohort from academia, technology platforms, media entities, and civil society organizations active in synthetic media detection and its socio-technical implications evaluates the detection dilemma. Specifically, we offer an assessment of detection contexts and adversary capacities sourced from the broader, global AI and media integrity community concerned with mitigating the spread of harmful synthetic media. A collection of personas illustrates the intersection between unsophisticated and highly-resourced sponsors of misinformation in the context of their technical capacities. This work concludes that there is no single best approach to navigating the detection dilemma, but derives a set of implications from multistakeholder input to better inform detection process decisions and policies in practice.


Key findings
There is no single best approach to the detection dilemma; the authors suggest a "goldilocks exposure" of detection models, balancing the benefits of sharing models for improvement against the risk of adversary exploitation. The study also emphasizes looking beyond detection tools to broader strategies, such as app store policies and user training, to ensure responsible and accessible deployment.
Approach
The research uses a multistakeholder approach, involving workshops and consultations with experts from academia, technology platforms, media entities, and civil society. It analyzes various detection contexts and adversary capabilities, categorizing actors by technical competency and anti-detection competence, and develops personas to illustrate these dynamics.
Datasets
UNKNOWN
Model(s)
Neural networks (specifically, those used in the DeepFake Detection Challenge)
Author countries
USA