Superintelligent AI System (ASI)
A Superintelligent AI System (ASI) is an intelligent machine that can perform superintelligence tasks (i.e., perform tasks with significantly better performance than an AGI).
- Context:
- It can (typically) exist during a Superintelligences Emergence Period (and then a Superintelligence Explosion Period).
- It can (typically) rapidly evolve by Self-Programming.
- It can (typically) demand more than 1 PFLOPS of compute (a rough arithmetic sketch follows the See: list below).
- It can (typically) be attached to a Large Organization (such as a large tech company, a large country, or a large military).
- It can (typically) be developed by an ASI Development Program.
- ...
- It can be a member of a Superintelligences Society (e.g. during a superintelligences expansion period).
- It can range from being a Benevolent Superintelligence to being a Malevolent Superintelligence.
- ...
- Example(s):
- One that may exist elsewhere in the universe.
- One simulated in a Competitive Multi-Player Real-Time ASI Development Game.
- In publications:
- One that approximates the kind described in (Yampolskiy, 2015).
- One that approximates the kind described in (Bostrom, 2014).
- One that approximates the kind described in (Kurzweil, 1999).
- One that approximates the kind described in (Good, 1965).
- …
- Counter-Example(s):
- an AGI System (which an ASI, by definition, significantly outperforms).
- See: Conscious Reasoning Agent, Intellectual Giftedness, Brain–Computer Interface, Mind Uploading, Supercomputer, Fast SuperAI Takeoff.
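As a rough, purely illustrative arithmetic sketch of the "more than 1 PFLOPS" compute figure mentioned in the Context list above (units and the per-day framing are assumptions, not taken from any cited source):

```python
# Rough, illustrative arithmetic only: what sustained 1 PFLOPS of compute amounts to.
# (The 1 PFLOPS threshold comes from the Context list above; everything else is assumed.)
PFLOPS = 10 ** 15              # 1 PFLOPS = 10^15 floating-point operations per second
SECONDS_PER_DAY = 24 * 60 * 60

flop_per_day = PFLOPS * SECONDS_PER_DAY
print(f"1 PFLOPS sustained for one day ≈ {flop_per_day:.2e} FLOP")  # ≈ 8.64e+19 FLOP
```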
References
2024
- (Altman, 2024) ⇒ Sam Altman. (2024). “The Intelligence Age.”
- QUOTE: "It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Superintelligence Retrieved:2023-7-11.
- A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them. A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/superintelligence Retrieved:2015-5-25.
- A superintelligence, or hyperintelligence, is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. ...
2015
- (Yampolskiy, 2015) ⇒ Roman V. Yampolskiy. (2015). “Artificial Superintelligence: A Futuristic Approach." Chapman and Hall/CRC.
- NOTE: It discusses the challenges and considerations surrounding the control and impact of future superintelligence.
2015
- (Davis, 2015) ⇒ Ernest Davis. (2015). “Ethical Guidelines for a Superintelligence." In: Artificial Intelligence Journal. Elsevier.
- NOTE: It examines the ethical considerations of developing a superintelligence and its potential societal benefits.
2014
- (Bostrom, 2014) ⇒ Nick Bostrom. (2014). “Superintelligence: Paths, Dangers, Strategies." Oxford University Press. ISBN:978-0199678112
2014
- (Soares & Fallenstein, 2014) ⇒ Nate Soares, and Benja Fallenstein. (2014). “Aligning Superintelligence with Human Interests: A Technical Research Agenda." Machine Intelligence Research Institute (MIRI) Technical Report.
- NOTE: It emphasizes the importance of aligning the objectives and behavior of superintelligence with human interests.
2014
- (Müller & Bostrom, 2014) ⇒ Vincent C. Müller, and Nick Bostrom. (2014). “Future Progress in Artificial Intelligence: A Poll Among Experts.” In: AI Matters Journal, 1(1). doi:10.1145/2639475.2639478
- QUOTE: In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular timeframe, which risks they see with that development and how fast they see these developing.
2013
- (Barrat, 2013) ⇒ James Barrat. (2013). “Our Final Invention: Artificial Intelligence and the End of the Human Era." St. Martin's Press. ISBN:9780312622374
- QUOTE: Soon, to the scientists’ delight, the terminal displaying the AI’s progress shows the artificial intelligence has surpassed the intelligence level of a human, known as AGI, or artificial general intelligence. Before long, it becomes smarter by a factor of ten, then a hundred. In just two days, it is one thousand times more intelligent than any human, and still improving. The scientists have passed a historic milestone! For the first time humankind is in the presence of an intelligence greater than its own. Artificial superintelligence, or ASI.
2012
- (Chalmers, 2012) ⇒ David J. Chalmers. (2012). “The Singularity: A Reply." In: Journal of Consciousness Studies, 19(9-10).
- QUOTE: The target article set out an argument for the singularity as follows.
- Here AI is human-level artificial intelligence, AI+ is greater-than-human-level artificial intelligence, and AI++ is far-greater-than-human-level artificial intelligence (as far beyond smartest humans as humans are beyond a mouse). ...
2012
- (Vardi, 2012a) ⇒ Moshe Y. Vardi. (2012). “Consequences of Machine Intelligence.” In: The Atlantic, 2012(10).
- QUOTE: The question of what happens when machines get to be as intelligent as and even more intelligent than people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy's main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly “Singularitarians" who think about the day when machine will surpass humans in intelligence.
2002
- (Hibbard, 2002) ⇒ Bill Hibbard. (2002). “Super-Intelligent Machines." Springer US. ISBN:9780306473883
2001
- (Hibbard, 2001) ⇒ Bill Hibbard. (2001). “Super-intelligent Machines.” In: ACM SIGGRAPH Computer Graphics Journal, 35(1). doi:10.1145/377025.377033
- QUOTE: Kurzweil thinks we will develop intelligent machines in about 30 years. ... But I think we will develop intelligent machines within about 100 years.
1999
- (Kurzweil, 1999) ⇒ Ray Kurzweil. (1999). “The Age of Spiritual Machines: When computers exceed human intelligence." Viking Press. ISBN:0-670-88217-8
1998
- (Bostrom, 1998) ⇒ Nick Bostrom. (1998). “How Long Before Superintelligence?” In: International Journal of Future Studies.
- QUOTE: ... how fast we can expect superintelligence to be developed once there is human-level artificial intelligence. By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented ...
- NOTE: It discusses the timeline for the emergence of superhuman artificial intelligence and reviews varying estimates of its development.
1988
- (Moravec, 1988) ⇒ Hans Moravec. (1988). “Mind Children." Harvard University Press. ISBN:9780674576186
1965
- (Good, 1965) ⇒ Irving John Good. (1965). “Speculations Concerning the First Ultraintelligent Machine.” In: Advances in Computers Journal, 6(31).
- QUOTE: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. ... The survival of man depends on the early construction of an ultra-intelligent machine.
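- NOTE: The passage above describes a feedback loop in which each machine designs a slightly better successor, so capability compounds. A minimal toy sketch of that loop (in Python, with entirely illustrative parameters that are not taken from Good, 1965):

```python
# Toy illustration only (parameters are hypothetical, not from Good, 1965):
# capability compounds because each generation of machine designs a better successor.
human_baseline = 1.0          # arbitrary "human-level" capability units
capability = human_baseline   # assume the first machine starts near human level
improvement_factor = 1.1      # assumed fractional gain per design generation

generation = 0
while capability <= 1_000 * human_baseline:
    capability *= improvement_factor   # the machine designs a better machine
    generation += 1

print(f"After {generation} design generations, capability ≈ {capability:,.0f}x the human baseline")
```

Under these assumed numbers the loop crosses 1,000x the human baseline after 73 generations; the point of the sketch is only that modest compounding gains dominate quickly, which is the structure of Good's "intelligence explosion" argument.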