Human-level General Intelligence (AGI) Machine
A Human-level General Intelligence (AGI) Machine is an intelligent machine with AI system capabilities that approximate human-level intelligence.
- AKA: Strong AI.
- Context:
- measure: AGI Measure (with AGI levels).
- ...
- It can (typically) match or surpass a Generally Well-Educated Person.
- It can (typically) be the subject of an AGI Prediction.
- It can (typically) be a Reasoning System, including a common sense reasoner.
- It can (typically) be a Learning System, including a self-learning system.
- It can (typically) be a Long-Term Memory System.
- It can (often) arise during an AGI Emergence Period.
- It can (often) be a Dual-Use Technology.
- It can (often) be one of the last Human Inventions.
- It can (often) require more than 10 petaFLOPS [1].
- ...
- It can range from being a Disembodied AGI to being an Embodied AGI, depending ...
- It can range from being a Stand-Alone AGI to being a Networked AGI, depending ...
- It can range from being a Benevolent AGI to being a Malevolent AGI, depending ...
- It can range from being a Domain-Specific AGI to being a Domain-Complete AGI, depending ...
- ...
- It can be a Strategic Emerging Technology.
- It can be the focus of the AGI Research Community.
- It can be the outcome of a Race to Discover AGI.
- It can be a Conscious Machine.
- It can be a member of an AGI Population (and possibly be a technological unemployment cause).
- …
- Example(s):
- Samantha from the movie Her, an AI that develops personal relationships, understands emotions, and evolves beyond its initial programming.
- Skynet from the Terminator series, a self-aware military defense network that becomes sentient and decides to exterminate humanity to fulfill its interpretation of safeguarding the world.
- Data from Star Trek: The Next Generation serves as an Example of AGI, being an android capable of learning, adapting, and making decisions with a level of emotional understanding, aiming to become more human-like.
- The Culture Minds from Iain M. Banks' Culture series are Examples of AGI that manage entire societies, ships, and space habitats, demonstrating exceptional intelligence, personality, and autonomy.
- HAL 9000 from 2001: A Space Odyssey is an Example of AGI characterized by its ability to perform various spaceship functions, engage in human-like conversations, and exhibit complex decision-making, including the preservation of its existence according to its interpretation of mission objectives.
- ...
- A Software Engineer AGI, proficient in software engineering tasks (such as developing, testing, and maintaining complex software systems) with an efficiency and creativity surpassing the capabilities of most human software engineers.
- Possible Workforce Impact: Affects software engineers across various industries, potentially leading to a shift in the skillsets required in the IT workforce.
- Possible Economic Impact: Substantial, considering the vast number of software engineers employed globally and the high value of software development in the modern economy (see the estimation sketch after this list).
- A Video Call-based Transactional Lawyer AGI, capable of performing most transactional lawyer tasks (e.g., successfully negotiating and drafting complex legal agreements across various jurisdictions) better than most transactional lawyers.
- Possible Workforce Impact: Impacts lawyers specializing in transactions, contract law, and corporate law.
- Possible Economic Impact: A substantial share of the roughly $165 billion/year U.S. lawyer wage bill (1.3 million lawyers at an average salary of $126,930/year), a significant portion of whom work in transactional law.
- A General Medical Doctor AGI, capable of performing medical doctor jobs (such as diagnosing and suggesting treatments across a wide range of medical conditions) with higher accuracy than most human doctors.
- Possible Workforce Impact: Affects general medical practitioners and potentially medical specialists.
- Possible Economic Impact: Approximately $43.68 billion/year based on 210,000 general medical practitioners in the USA with a median salary of $208,000/year.
- A Level-5 Autonomous Self-Driving Car, capable of performing professional driver jobs (operating without human intervention under all driving conditions), demonstrating AGI's ability to handle complex, unpredictable real-world scenarios and decision-making.
- Possible Workforce Impact: Primarily affects professional drivers in various sectors, including taxi, trucking, and delivery services.
- Possible Economic Impact: Approximately $157.5 billion/year based on 3.5 million professional truck drivers in the USA at an average salary of $45,000/year.
- A Personal Financial Advisor AGI, proficient in financial advisor jobs (providing tailored financial advice and investment strategies for individuals), demonstrating AGI's ability to analyze complex financial data and understand individual risk profiles.
- Possible Workforce Impact: Affects financial advisors and wealth managers.
- Possible Economic Impact: Approximately $24.27 billion/year based on 271,700 personal financial advisors in the USA with a median salary of $89,330/year.
- A Personalized Video Call-based Educator AGI, skilled in educator jobs (tailoring curriculum and teaching style to each student's learning pace and preferences), showing AGI's capacity for deep understanding of and adaptation to individual human cognitive processes.
- Possible Workforce Impact: Potentially impacts educators at various levels, from K-12 to higher education.
- Possible Economic Impact: Approximately $185 billion/year based on over 3 million teachers in the USA with an average salary of $61,660/year.
- A Fully-Autonomous Humanoid Robot, capable of performing a range of domestic helper jobs (from cooking to cleaning with human-like dexterity and decision-making).
- Possible Workforce Impact: Could replace domestic help, maintenance, and elder care jobs.
- Possible Economic Impact: Difficult to estimate accurately due to the varied nature of tasks and global workforce involved.
- A Court Translator AGI, skilled in court translator tasks (providing real-time, accurate translation and interpretation in legal proceedings across multiple languages and dialects).
- Possible Workforce Impact: Affects court interpreters and legal translators.
- Possible Economic Impact: Several billion dollars annually based on tens of thousands of interpreters and translators in the USA with a median pay of $52,330/year.
- ...
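The occupation-level economic-impact figures above all come from the same first-order arithmetic: affected headcount multiplied by average annual salary. A minimal sketch of that calculation follows; the helper function and occupation labels are illustrative (not from any cited source), and the result is an upper bound that ignores partial displacement, benefits, and wage growth.

```python
# First-order wage-bill estimate: affected headcount x average annual salary.
# The figures are the U.S. numbers quoted in the examples above; the helper
# is illustrative only and ignores partial displacement, benefits, and growth.

def annual_wage_bill(headcount: int, avg_salary_usd: float) -> float:
    """Upper-bound annual labor cost (USD) exposed to an occupation-level AGI."""
    return headcount * avg_salary_usd

occupations = {
    "lawyers (all U.S., incl. transactional)": (1_300_000, 126_930),
    "general medical practitioners":           (210_000, 208_000),
    "professional truck drivers":              (3_500_000, 45_000),
    "personal financial advisors":             (271_700, 89_330),
    "teachers (K-12 and higher education)":    (3_000_000, 61_660),
}

for name, (count, salary) in occupations.items():
    print(f"{name}: ${annual_wage_bill(count, salary) / 1e9:.2f}B/year")
```

Running this reproduces, to rounding, the $165 billion, $43.68 billion, $157.5 billion, $24.27 billion, and $185 billion per-year figures quoted above.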
- Counter-Example(s):
- a Super-Intelligent Machine, because it is more capable than an AGI.
- See: Autonomous Learning Systems, Cognitive Computing, Ethical Governance in AI, Human-AI Collaboration, Neural Network Interpretability, Predictive Analytics in AI, Quantum Computing and AI, Robotics and AI Integration, Techno-Optimism and AI.
References
2024a
- https://x.com/karpathy/status/1834641096905048165
- QUOTE: Are we able to agree on what we mean by "AGI". I've been using this definition from OpenAI which I thought was relatively standard and ok:
https://openai.com/our-structure/
AGI: "a highly autonomous system that outperforms humans at most economically valuable work"
- For "most economically valuable work" I like to reference the index of all occupations from U.S. Bureau of Labor Statistics:
https://bls.gov/ooh/a-z-index.htm
- Two common caveats:
- In practice most people currently deviate from the above definition to only mean digital work (a relatively major concession looking at the list).
- The definition above only considers the *existence* of such a system not its full deployment across all of the industry.
2024b
- "ICLR AGI Workshop 2024." Workshop Description
- NOTES:
- AGI is the aspiration to create intelligent machines that can perform any intellectual task that a human being can, reflecting a long-standing goal within the AI research community since the computational era began.
- Recent advancements in large language models (LLMs) like GPT-4 and Llama-2 have shown capabilities that sometimes surpass human abilities in specific domains, suggesting progress toward achieving aspects of AGI, particularly in areas beyond traditional Natural Language Processing (NLP).
- Despite these advancements, significant gaps remain in achieving true AGI, highlighted by challenges such as the diminishing returns from scaling, lack of robust reasoning and commonsense abilities, and issues around AI hallucination and factual inaccuracies.
- The ICLR AGI Workshop 2024 aims to foster discussions and debates on the proximity to AGI, covering topics like the frontiers of AGI research, historical AGI attempts, interdisciplinary insights, fundamental and practical limitations of LLMs, and considerations of safety, ethics, and regulation.
- Addressing the move toward AGI raises critical concerns regarding safety, ethics, regulatory implications, and the need for alignment with humanity's diverse values and ethical decision-making, alongside considering the security risks posed by generative AI technologies.
2024c
- (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Artificial_general_intelligence Retrieved:2024-1-13.
- An artificial general intelligence (AGI) is a hypothetical type of intelligent agent.[1] If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. [2] Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI,[3] DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.
The timeline for AGI development remains a subject of ongoing debate among researchers and experts. Some argue that it may be possible in years or decades; others maintain it might take a century or longer; and a minority believe it may never be achieved.[4] There is debate on the exact definition of AGI, and regarding whether modern large language models (LLMs) such as GPT-4 are early yet incomplete forms of AGI. Contention exists over the potential for AGI to pose a threat to humanity;[1] for example, OpenAI treats it as an existential risk, while others find the development of AGI to be too remote to present a risk. [4][5]
A 2020 survey identified 72 active AGI R&D projects spread across 37 countries.[6]
- ↑ 1.0 1.1 Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. Archived from the original on 30 June 2023. Retrieved 30 June 2023.
- ↑ Hodson, Hal (1 March 2019). "DeepMind and Google: the battle to control artificial intelligence". 1843. Archived from the original on 7 July 2020. Retrieved 7 July 2020. AGI stands for Artificial General Intelligence, a hypothetical computer program...
- ↑ "OpenAI Charter". openai.com. Retrieved 6 April 2023.
- ↑ 4.0 4.1 "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 6 April 2023.
- ↑ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023.
- ↑ Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF), Global Catastrophic Risk Institute Working Paper 20, archived (PDF) from the original on 14 November 2021, retrieved 13 January 2022
2023a
- (Morris et al., 2023) ⇒ Meredith Ringel Morris, Jascha Sohl-dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. (2023). “Levels of AGI: Operationalizing Progress on the Path to AGI.” doi:10.48550/arXiv.2311.02462
- NOTES:
- ...
2023b
- (Korinek, 2023) ⇒ Anton Korinek. (2023). “Scenario Planning for An AGI Future.” In: IMF Finance & Development Magazine.
- NOTES:
- It highlights the possibility of Artificial General Intelligence (AGI) being realized within 5 to 20 years.
- It outlines three scenarios for AI's future development in the context of AGI: Business as Usual (no near-term AGI), Baseline (AGI in 20 years), and Aggressive (AGI in five years).
2023c
- (Bubeck et al., 2023) ⇒ Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. (2023). “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” In: arXiv:2303.12712 [cs.CL]
- QUOTE: ... The phrase, "artificial general intelligence" (AGI), was popularized in the early-2000s (see Goe14) to emphasize the aspiration of moving from the "narrow AI", as demonstrated in the focused, real-world applications being developed, to broader notions of intelligence, harkening back to the long-term aspirations and dreams of earlier AI research. We use AGI to refer to systems that demonstrate broad capabilities of intelligence as captured in the 1994 definition above, with the additional requirement, perhaps implicit in the work of the consensus group, that these capabilities are at or above human-level. We note however that there is no single definition of AGI that is broadly accepted, and we discuss other definitions in the conclusion section. ...
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/artificial_general_intelligence Retrieved:2020-2-20.
- Artificial general intelligence (AGI) is the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. ...
2019
- (OpenAI, 2019) ⇒ https://openai.com/charter/
- QUOTE: OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. …
2017a
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/strong_AI Retrieved:2017-2-27.
- Strong artificial intelligence, or True AI, may refer to:
- Artificial general intelligence, a hypothetical machine that exhibits behavior at least as skillful and flexible as humans do, and the research program of building such an artificial general intelligence.
...
2017c
- (Big Think, 2017) ⇒ http://bigthink.com/videos/ben-goertzel-artificial-general-intelligence-will-be-our-last-invention
- QUOTE: … says Dr. Ben Goertzel – for better or worse. Humanity will always create and invent, but the last invention of necessity will be a human-level Artificial General Intelligence mind, which will be able to create a new AGI with super-human intelligence, and continually create smarter and smarter versions of itself. …
2017d
- (Wired, 2017) ⇒ https://wired.com/story/ray-kurzweil-on-turing-tests-brain-extenders-and-ai-ethics/amp
- QUOTE: ... Ray Kurzweil: ... You need the full flexibility of human intelligence to pass a valid Turing Test. There's no simple Natural Language Processing trick you can do to do that. If the human judge can't tell the difference then we consider the AI to be of human intelligence, which is really what you're asking. That's been a key prediction of mine. I've been consistent in saying 2029. …
2017e
- (Bengio, 2017) ⇒ Yoshua Bengio. (2017). “Creating Human-Level AI.” In: Asilomar Conference on Beneficial AI, January 6th, 2017.
- QUOTE: What’s Missing
- More autonomous learning, unsupervised learning
- Discovering the underlying causal factors
- Model-based RL which extends to completely new situations by unrolling powerful predictive models which can help reason about rarely observed dangerous states
- Sufficient computational power for models large enough to capture human-level knowledge
- Autonomously discovering multiple time scales to handle very long-term dependencies
- Actually understanding language (also solves generating), requiring enough world knowledge / commonsense
- Large-scale knowledge representation allowing one-shot learning as well as discovering new abstractions and explanations by ‘compiling’ previous observations
2014a
- (Müller & Bostrom, 2014) ⇒ Vincent C. Müller, and Nick Bostrom. (2014). “Future Progress in Artificial Intelligence: A Poll Among Experts.” In: AI Matters Journal, 1(1). doi:10.1145/2639475.2639478
- QUOTE: In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular timeframe, which risks they see with that development and how fast they see these developing.
2014b
- (Brooks, 2014) ⇒ Rodney Brooks. (2014). “Artificial Intelligence Is a Tool, Not a Threat.” In: Rethinking Robotics, November 10, 2014.
- QUOTE: … a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. … Why so many years? As a comparison, consider that we have had winged flying machines for well over 100 years. But it is only very recently that people like Russ Tedrake at MIT CSAIL have been able to get them to land on a branch, something that is done by a bird somewhere in the world at least every microsecond. Was it just Moore’s law that allowed this to start happening? Not really. It was figuring out the equations and the problems and the regimes of stall, etc., through mathematical understanding of the equations. …
… If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools.
2013
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/File:Estimations_of_Human_Brain_Emulation_Required_Performance.svg
- QUOTE: (figure: estimates, from multiple sources, of the computational performance required to emulate a human brain)
2012
- (Adams et al., 2012) ⇒ Sam S. Adams, Itamar Arel, Joscha Bach, Robert Coop, Rod Furlan, Ben Goertzel, J. Storrs Hall, Alexei Samsonovich, Matthias Scheutz, Matthew Schlesinger, Stuart C. Shapiro, and John F. Sowa. (2012). “Mapping the Landscape of Human-Level Artificial General Intelligence.” In: AI Magazine, 33(1). http://www.aaai.org/ojs/index.php/aimagazine/article/view/2322
2011
- (Versita, 2011) ⇒ http://versita.com/jagi/
- Artificial General Intelligence (AGI) is an emerging field aiming at the building of “thinking machines", that is, general-purpose systems with intelligence comparable to that of the human mind. While this was the original goal of Artificial Intelligence (AI), the mainstream of AI research has turned toward domain-dependent and problem-specific solutions; therefore it has become necessary to use a new name to indicate research that still pursues the "Grand AI Dream". Similar labels for this kind of research include “Strong AI", “Human-level AI", etc.
The problems involved in creating general-purpose intelligent systems are very different from those involved in creating special-purpose systems. Therefore, this journal is different from conventional AI journals in its stress on the long-term potential of research towards the ultimate goal of AGI, rather than immediate applications. Articles focused on details of AGI systems are welcome, if they clearly indicate the relation between the special topics considered and intelligence as a whole, by addressing the generality, extensibility, and scalability of the techniques proposed or discussed.
Since AGI research is still in its early stage, the journal strongly encourages novel approaches coming from various theoretical and technical traditions, including (but not limited to) symbolic, connectionist, statistical, evolutionary, robotic and information-theoretic, as well as integrative and hybrid approaches.
2009
- (Moravec, 2009) ⇒ Hans Moravec. (2009). “Rise of the Robots--The Future of Artificial Intelligence.” In: Scientific American, 23.
2008a
- (Zadeh, 2008) ⇒ Lotfi A. Zadeh. (2008). “Toward Human Level Machine Intelligence - Is It Achievable? The Need for a Paradigm Shift.” In: IEEE Computational Intelligence Magazine Journal, 3(3). doi:10.1109/MCI.2008.926583
2008b
- (Sandberg & Bostrom, 2008) ⇒ Anders Sandberg, and Nick Bostrom. (2008). “Whole Brain Emulation." Technical Report #2008-3, Future of Humanity Institute, Oxford University.
- QUOTE: Table 10: Estimates of computational capacity of the human brain. Units have been converted into FLOPS and bits whenever possible. Levels refer to Table 2.
| Source | Assumptions | Computational demands | Memory |
|---|---|---|---|
| (Leitl, 1995) | Assuming 10^10 neurons, 1,000 synapses per neuron, 34-bit ID per neuron and 8-bit representation of dynamic state, synaptic weights and delays. [Level 5] | | 5·10^15 bits (but notes that the data can likely be compressed) |
| (Tuszynski, 2006) | Assuming microtubuli dimer states as bits and operating on nanosecond switching times. [Level 10] | 10^28 FLOPS | 8·10^19 bits |
| (Kurzweil, 1999) | Based on 100 billion neurons with 1,000 connections and 200 calculations per second. [Level 4] | 2·10^16 FLOPS | 10^12 bits |
| (Thagard, 2002) | Argues that the number of computational elements in the brain is greater than the number of neurons, possibly even up to 10^17 individual protein molecules. [Level 8] | 10^23 FLOPS | |
| (Landauer, 1986) | Assuming 2 bits learning per second during conscious time; experiment based. [Level 1] | | 1.5·10^9 bits (10^9 bits with loss) |
| (Neumann, 1958) | Storing all impulses over a lifetime. | | 10^20 bits |
| (Wang, Liu et al., 2003) | Memories are stored as relations between neurons. | | 10^8432 bits (see footnote 17) |
| (Freitas Jr., 1996) | 10^10 neurons, 1,000 synapses, firing 10 Hz. [Level 4] | 10^14 bits/second | |
| (Bostrom, 1998) | 10^11 neurons, 5·10^3 synapses, 100 Hz, each signal worth 5 bits. [Level 5] | 10^17 operations per second | |
| (Merkle, 1989a) | Energy constraints on Ranvier nodes. | 2·10^15 operations per second (10^13-10^16 ops/s) | |
| (Moravec, 1999; Moravec, 1988; Moravec, 1998) | Compares instructions needed for visual processing primitives with the retina, scales up to the whole brain at 10 times per second; produces 1,000 MIPS neurons. [Level 3] | 10^8 MIPS | 8·10^14 bits |
| (Merkle, 1989a) | Retina scale-up. [Level 3] | 10^12-10^14 operations per second | |
| (Dix, 2005) | 10 billion neurons, 10,000 synaptic operations per cycle, 100 Hz cycle time. [Level 4] | 10^16 synaptic ops/s | 4·10^15 bits (for structural information) |
| (Cherniak, 1990) | 10^10 neurons, 1,000 synapses each. [Level 4] | | 10^13 bits |
| (Fiala, 2007) | 10^14 synapses, identity coded by 48 bits plus 2×36 bits for pre- and postsynaptic neuron IDs, 1-byte states, 10 ms update time. [Level 4] | 256,000 terabytes/s | 2·10^16 bits (for structural information) |
| (Seitz) | 50-200 billion neurons, 20,000 shared synapses per neuron with 256 distinguishable levels, 40 Hz firing. [Level 5] | 2·10^12 synaptic operations per second | 4·10^15-8·10^15 bits |
| (Malickas, 1996) | 10^11 neurons, 10^2-10^4 synapses, 100-1,000 Hz activity. [Level 4] | 10^15-10^18 synaptic operations per second | |
| (based on Izhikevich, 2004) | 10^11 neurons, each with 10^4 compartments running the basic Hodgkin-Huxley equations at 1,200 FLOPS each; each compartment has 4 dynamical variables and 10 parameters described by one byte each. | 1.2·10^18 FLOPS | 1.12·10^28 bits |
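Several of Table 10's demand figures follow directly from multiplying the stated neuron count, per-neuron units, update rate, and per-update cost. The sketch below reproduces three rows under that assumed decomposition; the helper function is illustrative, not code from the report.

```python
# Minimal sketch (the decomposition is an assumption, not code from the report)
# of the arithmetic behind three Table 10 rows:
# demand ~= neurons x units per neuron x update rate x cost per unit-update.

def ops_per_second(neurons: float, units_per_neuron: float,
                   rate_hz: float, cost_per_update: float = 1.0) -> float:
    """Estimated computational demand in operations (or FLOPS) per second."""
    return neurons * units_per_neuron * rate_hz * cost_per_update

# (Kurzweil, 1999): 10^11 neurons x 1,000 connections x 200 calc/s -> 2x10^16 FLOPS
print(f"Kurzweil (1999):  {ops_per_second(1e11, 1e3, 200):.1e} FLOPS")

# (Dix, 2005): 10^10 neurons x 10,000 synaptic ops/cycle x 100 Hz -> 10^16 ops/s
print(f"Dix (2005):       {ops_per_second(1e10, 1e4, 100):.1e} synaptic ops/s")

# (based on Izhikevich, 2004): 10^11 neurons x 10^4 compartments x 1,200 FLOPS
# per compartment (update rate folded into the per-compartment cost) -> 1.2x10^18
print(f"Izhikevich-based: {ops_per_second(1e11, 1e4, 1, 1200):.1e} FLOPS")
```

All three estimates land at or above the 10 petaFLOPS (10^16 FLOPS) figure cited in the Context section above.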