From Incidents to Insights: Patterns of Responsibility following AI Harms

Authors: Isabel Richards, Claire Benn, Miri Zilka

Published: 2025-05-07 09:59:36+00:00

AI Summary

This paper analyzes the AI Incident Database (AIID) to understand patterns of responsibility following AI harms, shifting focus from preventing technical failures to studying societal responses to incidents. The researchers find that identifiable responsible parties don't guarantee accountability, and the likelihood and nature of responses depend heavily on context, including who was harmed and the level of public outcry.

Abstract

The AI Incident Database was inspired by aviation safety databases, which enable collective learning from failures to prevent future incidents. The database documents hundreds of AI failures, collected from news and media reports. However, criticism highlights that the AIID's reliance on media reporting limits its utility for learning about implementation failures. In this paper, we accept that the AIID falls short of its original mission, but argue that by looking beyond technically-focused learning, the dataset can provide new, highly valuable insights: specifically, opportunities to learn about patterns between developers, deployers, victims, wider society, and law-makers that emerge after AI failures. Through a three-tier mixed-methods analysis of 962 incidents and 4,743 related reports from the AIID, we examine patterns across incidents, focusing on cases with public responses tagged in the database. We identify 'typical' incidents found in the AIID, from Tesla crashes to deepfake scams. Focusing on the interplay between these parties, we uncover patterns in accountability and social expectations of responsibility. We find that the presence of identifiable responsible parties does not necessarily lead to increased accountability. The likelihood of a response and what it amounts to depend heavily on context, including who built the technology, who was harmed, and to what extent. Controversy-rich incidents provide valuable data about societal reactions, including insights into social expectations. Equally informative are cases where controversy is notably absent. This work shows that the AIID's value lies not just in preventing technical failures, but in documenting patterns of harm, institutional response, and social learning around AI incidents. These patterns offer crucial insights for understanding how society adapts to and governs emerging AI technologies.


Key findings
The presence of identifiable responsible parties does not ensure accountability. Responses to incidents vary greatly depending on context (e.g., who was harmed, level of public outcry). Incidents with unknown developers/deployers, often involving deepfakes, frequently stimulate greater societal demand for response and legislative action.
Approach
The study uses a three-tiered mixed-methods approach. First, all incidents are analyzed quantitatively based on the actors involved. Second, a subset of incidents with tagged public responses is analyzed qualitatively. Finally, the two analyses are combined to identify patterns and the contextual factors that influence responses.
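To make the first, quantitative tier concrete, here is a minimal Python sketch of how incidents might be tallied by whether an identifiable responsible party is named. This is illustrative, not the authors' pipeline: the file name and column names (e.g., "Alleged developer of AI system") assume a hypothetical CSV export of the AIID and may differ from the real snapshot.

```python
# Illustrative sketch (not the authors' code): tally AIID incidents by
# whether an identifiable developer/deployer is named. Assumes a local
# CSV export with the column names shown; both are hypothetical.
import pandas as pd

incidents = pd.read_csv("aiid_incidents.csv")  # assumed local export of the AIID

def is_unknown(entity) -> bool:
    # Treat missing fields or explicit "unknown" labels as unidentifiable.
    return pd.isna(entity) or "unknown" in str(entity).lower()

incidents["developer_unknown"] = incidents["Alleged developer of AI system"].map(is_unknown)
incidents["deployer_unknown"] = incidents["Alleged deployer of AI system"].map(is_unknown)

# Cross-tabulate to see how many incidents lack any identifiable responsible party.
print(pd.crosstab(incidents["developer_unknown"], incidents["deployer_unknown"],
                  rownames=["developer unknown"], colnames=["deployer unknown"]))
```

An actor-level tally of this kind would feed the third tier, where quantitative counts are read alongside the qualitative analysis of tagged public responses.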
Datasets
AI Incident Database (AIID)
Model(s)
UNKNOWN
Author countries
UK