Existential Risk from Artificial General Intelligence (AGI)
An Existential Risk from Artificial General Intelligence (AGI) is a hypothesized risk that the development and deployment of artificial general intelligence could result in human extinction or another irreversible global catastrophe.
- Context:
- It can be associated with a Likelihood of Existential Risk from AGI.
- It can be connected with the scenario in which a Superintelligence becomes uncontrollable or misaligned with human values.
- It can (often) involve concerns about the AI Control Problem, where controlling a superintelligent machine or aligning it with human-compatible values may be challenging.
- It can be accelerated by an Intelligence Explosion, in which an AI recursively improves itself at an exponentially increasing rate.
- It can be a subject of debate among Scientists, Technologists, and Policy Makers, with varying opinions on its likelihood and severity.
- It can lead to calls for proactive AI Safety Research, Ethical AI Development, and Global AI Regulation.
- …
- Example(s):
- the superintelligence risk scenarios proposed in (Bostrom, 2014).
- …
- Counter-Example(s):
- An AGI Utopia Hypothesis.
- A Pandemic Risk (the existential risk posed by a pandemic).
- A Nuclear War Risk (the existential risk posed by nuclear war).
- …
- See: Artificial General Intelligence, Human Extinction, Global Catastrophic Risk, Superintelligence, AI Takeover.
References
2024
- (Yampolskiy, 2024) ⇒ Roman Yampolskiy. (2024). “AI: Unexplainable, Unpredictable, Uncontrollable.” CRC Press. ISBN:9781032576268
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence Retrieved:2023-12-15.
- Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe. One argument goes as follows: human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
The plausibility of existential catastrophe due to AI is widely debated, and hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres[1] called for an increased focus on global AI regulation.
Two sources of concern stem from the problems of AI control and alignment: controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals. It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints.[2][3][4] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.
A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI that is more intelligent than its creators might be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers and society at large to control.[2][3] Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such systems do not involve altering their fundamental architecture.
2020
- (Christian, 2020) ⇒ Brian Christian. (2020). “The Alignment Problem: Machine Learning and Human Values.” W. W. Norton & Company.
- NOTE: Delving into the challenges of aligning machine learning systems with human values, this book argues for the critical importance of alignment in safe and beneficial AI development.
2019
- (Russell, 2019) ⇒ Stuart Russell. (2019). “Human Compatible: Artificial Intelligence and the Problem of Control.” Penguin. ASIN:B07N5J5FTS
- NOTE: It emphasizes understanding human cognition and human values to create safe and beneficial AI, advocating for Human-In-The-Loop AI Development.
- (Netflix, 2019) ⇒ “The Age of AI.” (2019). Netflix Documentary Series.
- NOTE: This documentary series explores the potential and risks of AI, featuring interviews with leading AI experts.
2018
- (Sotala, 2018) ⇒ K. Sotala. (2018). “Disjunctive Scenarios of Catastrophic AI Risk.” In: Artificial Intelligence Safety and Security.
- NOTE: It explores different kinds of AI risk scenarios, particularly focusing on disjunctive scenarios of catastrophic AI risk.
2014
- (Bostrom, 2014) ⇒ Nick Bostrom. (2014). “Superintelligence: Paths, Dangers, Strategies.” Oxford University Press. ISBN:978-0199678112
- NOTE: This seminal work explores the potential of AGI surpassing human intelligence and the existential risks it could pose.
2008
- (Yudkowsky, 2008) ⇒ Eliezer Yudkowsky. (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In: Global Catastrophic Risks, 1.