Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries
Authors: Rebecca Umbach, Nicola Henry, Gemma Beard, Colleen Berryessa
Published: 2024-01-26
AI Summary
This research paper investigates public attitudes and behaviors concerning deepfake pornography, a form of non-consensual synthetic intimate imagery (NSII). A survey of over 16,000 respondents across 10 countries reveals low awareness but strong condemnation of NSII, with low self-reported victimization and perpetration rates.
Abstract
Deepfake technologies have become ubiquitous, democratizing the ability to manipulate photos and videos. One popular use of deepfake technology is the creation of sexually explicit content, which can then be posted and shared widely on the internet. Drawing on a survey of over 16,000 respondents in 10 different countries, this article examines attitudes and behaviors related to deepfake pornography as a specific form of non-consensual synthetic intimate imagery (NSII). Our study found that deepfake pornography behaviors were considered harmful by respondents, despite nascent societal awareness. Regarding the prevalence of deepfake pornography victimization and perpetration, 2.2% of all respondents indicated personal victimization, and 1.8% of all respondents indicated perpetration behaviors. Respondents from countries with specific legislation still reported perpetration and victimization experiences, suggesting that NSII laws are inadequate to deter perpetration. Approaches to prevent and reduce harms may include digital literacy education, as well as enforced platform policies, practices, and tools that better detect, prevent, and respond to NSII content.