Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks

Authors: Hao-Ping Lee, Yu-Ju Yang, Thomas Serban von Davier, Jodi Forlizzi, Sauvik Das

Published: 2023-10-11 20:40:38+00:00

AI Summary

This research paper presents a taxonomy of AI privacy risks derived from an analysis of 321 documented AI privacy incidents. The taxonomy categorizes how the capabilities and data requirements of AI create new privacy risks or exacerbate existing ones, revealing that incorporating AI meaningfully altered privacy risks in the large majority of incidents analyzed.

Abstract

Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.


Key findings
In approximately 93% of the analyzed incidents, AI technologies meaningfully altered privacy risks: they either created new risks (e.g., deepfakes, physiognomy) or exacerbated existing ones (e.g., increased surveillance). Current privacy-preserving AI/ML methods (e.g., federated learning, differential privacy) address only a subset of these risks, highlighting the need for AI-specific privacy guidance.
Approach
The authors analyzed 321 real-world AI privacy incidents from a crowdsourced database, categorizing each incident according to whether AI created a new privacy risk, exacerbated a known risk (using Solove's 2006 privacy taxonomy as a baseline), or did not meaningfully change the risk. The analysis also surfaced a new category of risk, 'Phrenology/Physiognomy': inferring sensitive personal attributes from physical characteristics.
Datasets
AI, Algorithmic, and Automation Incident and Controversy (AIAAIC) repository; a subset of incidents from the AI Incident Database (AIID)
Model(s)
UNKNOWN
Author countries
United States, United Kingdom