Video Deepfake Abuse: How Company Choices Predictably Shape Misuse Patterns
Authors: Max Kamachee, Stephen Casper, Michelle L. Ding, Rui-Jie Yew, Anka Reuel, Stella Biderman, Dylan Hadfield-Menell
Published: 2025-11-26 18:59:43+00:00
AI Summary
This paper analyzes how choices made by companies releasing open-weight video generation models predictably shape patterns of misuse involving AI-generated non-consensual intimate imagery (AIG-NCII). It observes that a small number of these models have become the dominant tools for producing videorealistic AIG-NCII, mirroring earlier trends with image generators. The authors argue that robust risk management and safeguards by developers and distributors are crucial for mitigating downstream harm.
Abstract
In 2022, AI image generators crossed a key threshold, enabling much more efficient and dynamic production of photorealistic deepfake images than before. This opened opportunities for creative and positive uses of these models, but it also enabled unprecedented, low-effort creation of AI-generated non-consensual intimate imagery (AIG-NCII), including AI-generated child sexual abuse material (AIG-CSAM). Empirically, these harms were principally enabled by a small number of models that were trained on web data containing pornographic content, released with open weights, and insufficiently safeguarded. In this paper, we observe the same patterns emerging with video generation models in 2025. Specifically, we analyze how a small number of open-weight AI video generation models have become the dominant tools for producing videorealistic AIG-NCII. We then analyze the literature on model safeguards and conclude that (1) developers who openly release the weights of capable video generation models without appropriate data curation and/or post-training safeguards foreseeably contribute to mitigatable downstream harm, and (2) model distribution platforms that do not proactively moderate individual misuse or models designed for AIG-NCII foreseeably amplify this harm. While there are no perfect defenses against AIG-NCII and AIG-CSAM from open-weight AI models, we argue that risk management by model developers and distributors, informed by emerging safeguard techniques, will substantially affect the future ease of creating AIG-NCII and AIG-CSAM with generative AI video tools.