Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes
Authors: Li Qiwei, Shihui Zhang, Andrew Timothy Kasper, Joshua Ashkinaze, Asia A. Eaton, Sarita Schoenebeck, Eric Gilbert
Published: 2024-09-18 17:01:48+00:00
AI Summary
This audit study investigated the effectiveness of reporting non-consensual intimate media (NCIM) on X (formerly Twitter) using two methods: copyright infringement (DMCA) and non-consensual nudity policy reports. The study found a 100% removal rate for DMCA reports within 25 hours, contrasting sharply with a 0% removal rate for non-consensual nudity reports after three weeks.
Abstract
Non-consensual intimate media (NCIM) inflicts significant harm. Currently, victim-survivors can report NCIM through two mechanisms: as a non-consensual nudity violation or as copyright infringement. We conducted an audit study comparing the takedown speed of NCIM reported to X (formerly Twitter) under each mechanism. We uploaded 50 AI-generated nude images and reported half under X's non-consensual nudity reporting mechanism and half under its copyright infringement mechanism. The copyright condition resulted in removal of all images within 25 hours (100% removal rate), while the non-consensual nudity condition resulted in no removals after more than three weeks (0% removal rate). We stress the need for targeted legislation to regulate NCIM removal online. We also discuss ethical considerations for auditing NCIM on social platforms.