Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm

Authors: Laura Wagner, Eva Cetinic

Published: 2025-05-07 17:43:52+00:00

AI Summary

This study examines the disproportionate rise of NSFW content and deepfakes, which overwhelmingly target women, on CivitAI, the most active open-source text-to-image model-sharing platform. The analysis draws on more than 40 million user-generated images and over 230,000 models to document systemic misogyny and proposes interventions to mitigate downstream harm.

Abstract

Open-source text-to-image (TTI) pipelines have become dominant in the landscape of AI-generated visual content, driven by technological advances that enable users to personalize models through adapters tailored to specific tasks. While personalization methods such as LoRA offer unprecedented creative opportunities, they also facilitate harmful practices, including the generation of non-consensual deepfakes and the amplification of misogynistic or hypersexualized content. This study presents an exploratory sociotechnical analysis of CivitAI, the most active platform for sharing and developing open-source TTI models. Drawing on a dataset of more than 40 million user-generated images and over 230,000 models, we find a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals. We also observe a strong influence of internet subcultures on the tools and practices shaping model personalizations and resulting visual media. In response to these findings, we contextualize the emergence of exploitative visual media through feminist and constructivist perspectives on technology, emphasizing how design choices and community dynamics shape platform outcomes. Building on this analysis, we propose interventions aimed at mitigating downstream harm, including improved content moderation, rethinking tool design, and establishing clearer platform policies to promote accountability and consent.


Key findings

The NSFW ratio on CivitAI increased from 41% to 80% between January 2023 and December 2024. Female subjects were disproportionately represented in NSFW images and were often hypersexualized. A significant number of models were designed to mimic real individuals, many of whom were women in public professions.
Approach

The researchers conducted a sociotechnical analysis of CivitAI, examining image metadata, model metadata, and training data to quantify NSFW content, demographic patterns in generated images, and thematic biases in models and adapters. They used existing classifiers and large language models to aid their analysis.
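To make the aggregation concrete, below is a minimal Python sketch of the kind of monthly NSFW-ratio computation described above. The field names ("createdAt", "nsfw") are modeled on CivitAI's public API but are assumptions here; this illustrates the general technique, not the authors' actual pipeline.

```python
# Hypothetical sketch: monthly NSFW ratio over image metadata.
# Field names are assumptions modeled on CivitAI's public API,
# not the paper's actual code.
from collections import defaultdict
from datetime import datetime

def monthly_nsfw_ratio(images):
    """images: iterable of dicts with 'createdAt' (ISO 8601) and 'nsfw' (bool)."""
    counts = defaultdict(lambda: [0, 0])  # month -> [nsfw_count, total_count]
    for img in images:
        month = datetime.fromisoformat(img["createdAt"]).strftime("%Y-%m")
        counts[month][0] += int(bool(img["nsfw"]))
        counts[month][1] += 1
    return {m: n / t for m, (n, t) in sorted(counts.items())}

# Toy usage: one of two January 2023 images is flagged NSFW -> ratio 0.5
sample = [
    {"createdAt": "2023-01-05T12:00:00+00:00", "nsfw": True},
    {"createdAt": "2023-01-20T08:30:00+00:00", "nsfw": False},
]
print(monthly_nsfw_ratio(sample))  # {'2023-01': 0.5}
```

Tracking this ratio per month is what yields trend figures like the 41% to 80% rise reported in the key findings.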
Datasets

CivitAI dataset of over 40 million user-generated images and over 230,000 models.
Model(s)

MiVOLO for age and gender estimation, CLIP for text embedding, and unspecified large language models for annotation.
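As an illustration of how the CLIP component might be used, the sketch below embeds model descriptions with CLIP's text encoder and compares them by cosine similarity, one plausible building block for the thematic analysis. The checkpoint name and the similarity-based framing are assumptions, not details from the paper.

```python
# Hedged sketch: CLIP text embeddings for model/adapter descriptions.
# Checkpoint choice ("openai/clip-vit-base-patch32") is an assumption.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

descriptions = [
    "LoRA adapter for photorealistic portraits of a specific person",
    "Anime-style character concept model",
]
inputs = tokenizer(descriptions, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model.get_text_features(**inputs)  # shape: (batch, 512)
embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)  # unit vectors
similarity = embeddings @ embeddings.T  # pairwise cosine similarity
print(similarity)
```

Unit-normalizing the embeddings makes the dot product a cosine similarity, so related descriptions score close to 1 and unrelated ones near 0, which is a common basis for clustering text at scale.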
Author countries

Switzerland