Artificial Super-Intelligence (ASI) Risk Measure
An Artificial Super-Intelligence (ASI) Risk Measure is a risk assessment framework that evaluates artificial super-intelligence (ASI) existential threats and artificial super-intelligence (ASI) catastrophic scenarios.
- AKA: ASI Existential Risk Assessment, Superintelligence Risk Metric.
- Context:
- It can quantify ASI Emergence Probability through artificial super-intelligence (ASI) capability forecasting models.
- It can assess ASI Control Problem Severity through artificial super-intelligence (ASI) alignment difficulty metrics.
- It can evaluate ASI Impact Magnitude through artificial super-intelligence (ASI) catastrophic outcome analyses.
- It can incorporate ASI Timeline Estimates through artificial super-intelligence (ASI) development trajectory models.
- It can integrate ASI Capability Assessments through artificial super-intelligence (ASI) intelligence explosion scenarios.
- ...
- It can often combine Expert Judgment Methods with artificial super-intelligence (ASI) formal risk models.
- It can often utilize Scenario Analysis Techniques with artificial super-intelligence (ASI) failure modes.
- It can often employ Probabilistic Risk Assessments with artificial super-intelligence (ASI) safety thresholds (see the probabilistic sketch following this context list).
- It can often apply Game-Theoretic Models to artificial super-intelligence (ASI) strategic interactions.
- ...
- It can range from being a Qualitative ASI Risk Measure to being a Quantitative ASI Risk Measure, depending on its artificial super-intelligence (ASI) risk formalization level.
- It can range from being a Near-Term ASI Risk Measure to being a Long-Term ASI Risk Measure, depending on its artificial super-intelligence (ASI) temporal scope.
- It can range from being a Technical ASI Risk Measure to being a Sociotechnical ASI Risk Measure, depending on its artificial super-intelligence (ASI) risk domain coverage.
- ...
- It can inform ASI Safety Research Priority through artificial super-intelligence (ASI) risk severity rankings.
- It can guide ASI Development Policy through artificial super-intelligence (ASI) risk threshold determinations.
- It can support ASI Governance Framework through artificial super-intelligence (ASI) risk monitoring protocols.
- It can enable ASI Risk Communication through artificial super-intelligence (ASI) risk visualization methods.
- It can facilitate ASI Safety Investment Decision through artificial super-intelligence (ASI) risk-benefit analyses.
- ...
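The probabilistic sketch referenced in the context list above illustrates one way a Quantitative ASI Risk Measure could be structured: a Monte Carlo pass over a simple three-factor decomposition (ASI emergence probability × alignment-failure probability × catastrophic-outcome probability). The decomposition, the parameter ranges, and the function name `sample_asi_risk` are illustrative assumptions made for this sketch, not estimates or interfaces taken from any cited framework.

```python
# A minimal sketch of a quantitative ASI risk measure, assuming a simple
# three-factor decomposition:
#   P(catastrophe) = P(ASI emerges by horizon)
#                    * P(alignment failure | emergence)
#                    * P(existential catastrophe | failure).
# All distributions and parameter ranges below are illustrative assumptions,
# not published estimates.
import random

def sample_asi_risk(n_samples: int = 100_000, seed: int = 0) -> list[float]:
    """Monte Carlo samples of the overall ASI existential-risk probability."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        # Hypothetical uncertainty over each factor, expressed as uniform ranges.
        p_emergence = rng.uniform(0.10, 0.90)    # ASI emerges within the horizon
        p_misalign = rng.uniform(0.05, 0.60)     # alignment/control failure given ASI
        p_catastrophe = rng.uniform(0.20, 0.95)  # existential outcome given failure
        samples.append(p_emergence * p_misalign * p_catastrophe)
    return samples

if __name__ == "__main__":
    s = sorted(sample_asi_risk())
    mean = sum(s) / len(s)
    p05, p50, p95 = s[int(0.05 * len(s))], s[len(s) // 2], s[int(0.95 * len(s))]
    print(f"mean={mean:.3f}  5th={p05:.3f}  median={p50:.3f}  95th={p95:.3f}")
```

In a real measure, the assumed ranges would be replaced by elicited expert distributions or model-based forecasts; the decomposition itself is the main design choice.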
- Example(s):
- Formal ASI Risk Measures, such as:
- Bostrom's Existential Risk Framework (2014), quantifying artificial super-intelligence (ASI) extinction risks.
- Yampolskiy's Uncontrollability Metric (2024), assessing artificial super-intelligence (ASI) control failure probability.
- MIRI Risk Assessment Model, evaluating artificial super-intelligence (ASI) alignment failure scenarios.
- Survey-Based ASI Risk Measures, such as:
- AI Expert Survey Risk Assessment (2022), aggregating artificial super-intelligence (ASI) catastrophic risk estimates (see the aggregation sketch following this example list).
- FLI Expert Poll Method, capturing artificial super-intelligence (ASI) timeline uncertainty.
- Metaculus ASI Prediction Market, forecasting artificial super-intelligence (ASI) emergence probability.
- Scenario-Based ASI Risk Measures, such as:
- Intelligence Explosion Risk Model, analyzing artificial super-intelligence (ASI) recursive improvement risks.
- Value Lock-In Risk Assessment, evaluating artificial super-intelligence (ASI) goal preservation failures.
- Infrastructure Profusion Scenario Analysis, measuring artificial super-intelligence (ASI) resource consumption risks.
- ...
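The aggregation sketch referenced in the example list above shows one judgment call that Survey-Based ASI Risk Measures face: whether to average probabilities directly or to pool odds. The expert estimates and function names below are invented for illustration and do not come from the cited surveys or prediction markets.

```python
# A sketch of aggregating hypothetical expert estimates of ASI catastrophic risk.
# The estimates below are invented for illustration; a real survey-based measure
# would supply its own elicited data.
import math

def arithmetic_mean(probs: list[float]) -> float:
    """Simple average of probability estimates."""
    return sum(probs) / len(probs)

def geometric_mean_of_odds(probs: list[float]) -> float:
    """Pool estimates via the geometric mean of odds, then convert back to a probability."""
    log_odds = [math.log(p / (1.0 - p)) for p in probs]
    pooled_odds = math.exp(sum(log_odds) / len(log_odds))
    return pooled_odds / (1.0 + pooled_odds)

# Hypothetical expert estimates of P(existential catastrophe from ASI).
expert_estimates = [0.01, 0.02, 0.05, 0.10, 0.10, 0.25, 0.50]

print(f"arithmetic mean of probabilities: {arithmetic_mean(expert_estimates):.3f}")
print(f"geometric mean of odds:           {geometric_mean_of_odds(expert_estimates):.3f}")
```

The geometric mean of odds is less dominated by a single extreme respondent than the arithmetic mean of probabilities, which is one reason the pooling method matters when risk estimates span orders of magnitude.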
- Counter-Example(s):
- AGI Capability Benchmark, which measures artificial general intelligence performance rather than artificial super-intelligence (ASI) risk levels.
- AI Safety Metric, which assesses narrow AI system safety without artificial super-intelligence (ASI) existential considerations.
- Machine Learning Robustness Test, which evaluates model reliability without artificial super-intelligence (ASI) catastrophic scenarios.
- General Risk Assessment Framework, which lacks artificial super-intelligence (ASI)-specific risk factors.
- See: Existential Risk Assessment, AI Safety Research, Superintelligence Control Problem, AI Alignment Theory, Global Catastrophic Risk Analysis.
References
2024
- (Yampolskiy, 2024) ⇒ Roman Yampolskiy. (2024). “AI: Unexplainable, Unpredictable, Uncontrollable.” CRC Press. ISBN:9781032576268
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence Retrieved:2023-12-15.
- Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe. One argument goes as follows: human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. The plausibility of existential catastrophe due to AI is widely debated, and hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres called for an increased focus on global AI regulation. Two sources of concern stem from the problems of AI control and alignment: controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals. It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation. A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI that is more intelligent than its creators might be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers and society at large to control. Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such systems do not involve altering their fundamental architecture.
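The "intelligence explosion" passage above is, at its core, a claim about growth dynamics: if the rate of self-improvement scales with current capability, capability can cross any fixed threshold within a few generations. The toy model below is purely illustrative; the feedback parameter, the threshold value, and the function name are assumptions made for this sketch rather than quantities from any published takeoff model.

```python
# A toy discrete recursive self-improvement model: each generation, capability
# grows by a factor that itself increases with current capability. All parameter
# values are illustrative assumptions, not published takeoff estimates.
def takeoff_generations(initial_capability: float = 1.0,
                        superhuman_threshold: float = 1_000.0,
                        feedback: float = 0.05,
                        max_generations: int = 200) -> int:
    """Generations needed for capability to cross the (hypothetical) superhuman
    threshold when each generation's improvement rate scales with capability."""
    capability = initial_capability
    for generation in range(1, max_generations + 1):
        capability *= 1.0 + feedback * capability  # compounding self-improvement
        if capability >= superhuman_threshold:
            return generation
    return max_generations

print(takeoff_generations())  # small changes to `feedback` shift this dramatically
```

Small changes to the assumed feedback strength move the crossing point by many generations, which is one way to see why disagreements about takeoff speed dominate this class of risk scenario.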
2020
- (Christian, 2020) ⇒ Brian Christian. (2020). “The Alignment Problem: Machine Learning and Human Values.” W. W. Norton & Company.
- NOTE: Delving into the challenges of aligning machine learning systems' goals with human values, this book argues that alignment is critical to safe and beneficial AI development.
2019
- (Russell, 2019) ⇒ Stuart Russell. (2019). “Human Compatible: Artificial Intelligence and the Problem of Control.” Penguin. ASIN:B07N5J5FTS
- NOTE: It emphasizes understanding human cognition and values to create safe and beneficial AI, advocating for "Human-In-The-Loop" AI Development.
- (Netflix, 2019) ⇒ "The Age of AI." (2019). Netflix Documentary Series.
- NOTE: This documentary series explores the potential and risks of AI, featuring interviews with leading AI experts.
2018
- (Sotala, 2018) ⇒ K. Sotala. (2018). “Disjunctive Scenarios of Catastrophic AI Risk.” In: Artificial Intelligence Safety and Security.
- NOTE: It explores different kinds of AI risk scenarios, particularly focusing on disjunctive scenarios of catastrophic AI risk.
2014
- (Bostrom, 2014) ⇒ Nick Bostrom. (2014). “Superintelligence: Paths, Dangers, Strategies.” Oxford University Press. ISBN:978-0199678112
- NOTE: This seminal work explores the potential of AGI surpassing human intelligence and the existential risks it could pose.
2008
- (Yudkowsky, 2008) ⇒ Eliezer Yudkowsky. (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In: Global Catastrophic Risks, 1.