Artificial Intelligence (AI) Risk
An Artificial Intelligence (AI) Risk is an information technology risk arising from the development and deployment of AI systems.
- Context:
- It can range from being a Present AI Risk to being a Future AI Risk (such as from frontier AI systems).
- It can be addressed with an AI Risk Mitigation Strategy (which likely includes AI governance and AI safety measures).
- It can range from being a Wellbeing AI Risk (such as social media addiction from recommender systems) to being a Catastrophic AI Risk (such as existential AGI risk).
- …
- Example(s):
- Catastrophic AI Risks:
- AI Loss of Control risks, where AI systems act unpredictably.
- Existential Risk from AGI, where advanced AGI poses a threat to humanity's survival.
- ...
- Societal AI Risks:
- Bias and Discrimination from AI, where AI systems perpetuate or amplify societal biases.
- Job Displacement from AI, where AI automation leads to significant job losses.
- AI-based Misinformation Risk, where AI generates and spreads false information.
- AI Weaponization risks, where AI is used in military applications.
- ...
- Technological AI Risks:
- Security Vulnerabilities in AI Systems, where AI systems are susceptible to hacking.
- Ethical Misalignment in AI Applications, where AI actions conflict with human values.
- Environmental Impact of AI Technologies, where AI technologies contribute to environmental degradation.
- ...
- Counter-Example(s):
- See: Global Catastrophic Risk, Superintelligence, AI Takeover, Ethical AI, Technological Singularity.
References
2024
- (Bengio, Hinton et al., 2024) ⇒ Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. (2024). “Managing Extreme AI Risks Amid Rapid Progress.” In: Science. doi:10.1126/science.adn0117
2023
- (Beaudoin, 2023) ⇒ Philippe Beaudoin. (2023). Facebook post, 2023-08-12.
https://facebook.com/story.php?story_fbid=pfbid04erNEX7MWC5PgMpiUrYRQccWbHQ7FTkkvntyxMpkJ29t3puCD13BVtwjoSnPWoD1l&id=679472999
- NOTE: It explores the urgency of focusing on AI safety and regulation. The author argues that Artificial Intelligence (AI) is on the verge of becoming a highly impactful technology with both promises and dangers. Acknowledging that current, primitive AI systems are already causing harm, the author stresses the need to anticipate and mitigate the risks of more advanced AI. The post outlines consensus points, such as existing harms that need regulation, the necessity of investing in fundamental AI research, and the recognition of unknown risks linked to advanced AI. It also acknowledges ongoing debates about the nature of future risks, communication strategies, the balance between focusing on present versus future harms, and whether future AI should be kept more "closed" or "open." The author concludes by calling for prioritized attention to AI safety through strong regulation and substantial research investment, aiming to guide the development of AI towards a safer and happier future.
2022
- (Piorkowski et al., 2022) ⇒ D. Piorkowski, M. Hind, and J. Richards. (2022). “Quantitative AI Risk Assessments: Opportunities and Challenges.” In: arXiv preprint arXiv:2209.06317.
- NOTE: It reports on the current state of quantitative AI risk assessment and emphasizes the need for suitable metrics.
2019
- (Perry & Uuk, 2019) ⇒ B. Perry and R. Uuk. (2019). “AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.” In: Big Data and Cognitive Computing.
- NOTE: It discusses considerations for integrating AI risk policy into the broader framework of governance, focusing on strategies for reducing AI risk.
2015
- (Scherer, 2015) ⇒ Matthew U. Scherer. (2015). “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” In: Harvard Journal of Law & Technology, 29.