pith. machine review for the scientific record.

arxiv: 1805.01109 · v2 · submitted 2018-05-03 · 💻 cs.AI

Recognition: unknown

AGI Safety Literature Review

Authors on Pith: no claims yet
classification 💻 cs.AI
keywords: safety, review, will, accessible, along, artificial, been, benefits
0 comments
read the original abstract

The development of Artificial General Intelligence (AGI) promises to be a major event. Along with its many potential benefits, it also raises serious safety concerns (Bostrom, 2014). The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety. A significant number of safety problems for AGI have been identified. We list these, and survey recent research on solving them. We also cover works on how best to think of AGI from the limited knowledge we have today, predictions for when AGI will first be created, and what will happen after its creation. Finally, we review the current public policy on AGI.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Sustaining AI safety: Control-theoretic external impossibility, intrinsic necessity, and structural requirements

cs.AI · 2026-05 · no verdict · novelty 6.0

    External control strategies are structurally impossible for sustaining AI safety beyond bounded capability thresholds; any remaining viable strategies must be intrinsic with stable safety-compatible objectives.