2024 ManagingExtremeAIRisksAmidRapid

From GM-RKB

Subject Headings: AI Risk, AI Safety, Frontier AI System.

Notes

Cited By

Quotes

Abstract

Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI (1), there is a lack of consensus about how to manage them. Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development (R&D) with proactive, adaptive governance mechanisms for a more commensurate preparation.

References

Stuart J. Russell, Pieter Abbeel, Geoffrey E. Hinton, Trevor Darrell, Shai Shalev-Shwartz, Yoshua Bengio, Daniel Kahneman, Jeff Clune, Frank Hutter, Yuval Noah Harari, Dawn Song, Jan Brauner, Andrew Yao, Ya-Qin Zhang, Lan Xue, Gillian Hadfield, Tegan Maharaj, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Atılım Güneş Baydin, and Sören Mindermann. (2024). "Managing Extreme AI Risks Amid Rapid Progress." Science. doi:10.1126/science.adn0117.