AGI Ordinal Performance Measure
An AGI Ordinal Performance Measure is an AGI performance measure that defines ordered levels of capability and generality for Artificial General Intelligence (AGI) systems.
- Context:
- It can (typically) classify AGI Systems into ordered capability levels, from Level 0 (No AI) to Level 5 (Superhuman AGI), as sketched in the code example after this list.
- It can (often) be used to evaluate the progress and potential of AGI research in various domains.
- It can range from being a simple classification system to a detailed evaluation framework for assessing AGI capabilities.
- It can be developed and maintained by research institutions, standardization organizations, and AI industry stakeholders.
- It can inform policy-making and regulatory frameworks for AGI development and deployment.
- ...
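As a minimal illustrative sketch, the ordinal structure of such a measure can be rendered in code. The level names and approximate skilled-human percentile thresholds follow the Levels of AGI framework of Morris et al. (2023); the `classify_performance` helper and its `human_percentile` argument are hypothetical conveniences introduced here for illustration, not part of any published specification:

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Ordinal AGI performance levels, after Morris et al. (2023)."""
    NO_AI = 0       # Level 0: No AI
    EMERGING = 1    # Level 1: equal to or somewhat better than an unskilled human
    COMPETENT = 2   # Level 2: at least 50th percentile of skilled adults
    EXPERT = 3      # Level 3: at least 90th percentile of skilled adults
    VIRTUOSO = 4    # Level 4: at least 99th percentile of skilled adults
    SUPERHUMAN = 5  # Level 5: outperforms 100% of humans

def classify_performance(human_percentile: float) -> AGILevel:
    """Map a skilled-human percentile score (0.0-1.0) to an ordinal level.

    `human_percentile` is assumed to be the fraction of skilled adults the
    system outperforms on the benchmarked tasks; the thresholds approximate
    those described in Morris et al. (2023).
    """
    if human_percentile >= 1.0:
        return AGILevel.SUPERHUMAN
    if human_percentile >= 0.99:
        return AGILevel.VIRTUOSO
    if human_percentile >= 0.90:
        return AGILevel.EXPERT
    if human_percentile >= 0.50:
        return AGILevel.COMPETENT
    if human_percentile > 0.0:
        return AGILevel.EMERGING
    return AGILevel.NO_AI
```

Because the levels form a total order, classifications can be compared directly (e.g., `AGILevel.EXPERT > AGILevel.COMPETENT` is `True`), which is what makes such a measure ordinal rather than merely categorical.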
- Example(s):
- the Levels of AGI Framework proposed by Morris et al., 2023.
- the OpenAI Charter (OpenAI, 2018), which sets guiding principles for the development of general AI.
- the Anthropic Responsible Scaling Policy (Anthropic, 2023), which defines AI Safety Levels (ASLs) for managing AGI risks.
- ...
- Counter-Example(s):
- a Narrow AI Categorization Model, which categorizes AI systems based on specialized tasks.
- a Non-Generalized AI Evaluation System, which assesses AI systems without considering their generality.
- ...
- See: Artificial General Intelligence, AGI Performance and Generality, AGI Ontology and Taxonomy, Human-AI Interaction Paradigms, AGI Risk Assessment
References
2023
- (Morris et al., 2023) ⇒ Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. (2023). “Levels of AGI: Operationalizing Progress on the Path to AGI.” doi:10.48550/arXiv.2311.02462
- NOTE: The paper proposes a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors, introducing levels of AGI performance, AGI generality, and AGI autonomy to compare models, assess risks, and measure progress.
- (Anthropic, 2023) ⇒ Anthropic. (2023). “Responsible Scaling Policy.” September 2023. URL: https://www.anthropic.com/responsible-scaling-policy
- QUOTE: The Responsible Scaling Policy by Anthropic provides a levels-based approach to defining the risk associated with AI systems, identifying dangerous capabilities and containment measures for safe and responsible AI deployment.
2018
- (OpenAI, 2018) ⇒ OpenAI. (2018). “OpenAI Charter.” April 2018. URL: https://openai.com/charter
- QUOTE: The OpenAI Charter outlines the guiding principles for the development of artificial general intelligence, emphasizing the need to ensure that AGI benefits all of humanity and to manage risks associated with AGI deployment.