pith. machine review for the scientific record.

arxiv: 2408.12622 · v3 · submitted 2024-08-14 · 💻 cs.AI · cs.CR · cs.ET · cs.LG · cs.SY · eess.SY

Recognition: unknown

The AI risk repository: A meta-review, database, and taxonomy of risks from artificial intelligence

keywords: risks, risk, systems, comprehensive, artificial, common, creates, describe
Original abstract

Artificial intelligence (AI) is reshaping society, from video generation to medical diagnosis, coding agents to autonomous vehicles. Yet researchers, policymakers, and technology companies lack shared terminology for discussing AI risks. Consider "privacy": one framework uses this term to describe a model's ability to leak sensitive training data, while another uses it to mean freedom from government surveillance. Conversely, researchers have introduced "Goodhart's law," "specification gaming," "reward hacking," and "mesa-optimization" to describe the same phenomenon of AI systems optimizing for measured proxies rather than intended goals. This terminological diversity creates friction: comparing findings across studies requires mapping between frameworks, and comprehensive risk coverage requires consulting multiple taxonomies that use different organizing principles. This paper addresses this challenge by creating a comprehensive catalog of AI risks. We systematically analyzed every major AI risk framework published to date (74 frameworks containing 1,725 distinct risks) and organized them into a unified system. Our two classification systems reveal important patterns: contrary to common assumptions, human decisions cause nearly as many AI risks (38%) as the AI systems themselves (42%). The work provides practical tools for anyone working on AI safety, from developers conducting risk assessments to policymakers writing regulations to auditors evaluating AI systems. By establishing a common reference point, this repository creates the foundation for more coordinated and comprehensive approaches to managing AI's risks while realizing its benefits.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Culturally Aware GenAI Risks for Youth: Perspectives from Youth, Parents, and Teachers in a Non-Western Context

    cs.HC 2026-04 unverdicted novelty 7.0

    Mixed-methods research in Saudi Arabia reveals that GenAI use by youth creates culturally specific privacy and safety risks tied to family honor and shared accounts, requiring context-sensitive design.

  2. To Build or Not to Build? Factors that Lead to Non-Development or Abandonment of AI Systems

    cs.CY 2026-04 unverdicted novelty 6.0

    A scoping review and empirical analysis produce a six-category taxonomy of factors driving AI non-development and abandonment, showing that practical issues like resource limits and organizational dynamics often outwe...

  3. What People See (and Miss) About Generative AI Risks: Perceptions of Failures, Risks, and Who Should Address Them

    cs.HC 2026-04 unverdicted novelty 4.0

    A validated survey instrument grounded in real GenAI incidents reveals public perceptions of failure modes, risks, and stakeholder responsibilities, showing potential for guiding AI literacy efforts.

  4. Brainrot: Deskilling and Addiction are Overlooked AI Risks

    cs.CY 2026-05 unverdicted novelty 3.0

    AI safety literature overlooks cognitive deskilling and addiction risks from generative AI despite public concern about them.