Superintelligence Explosion Period
A Superintelligence Explosion Period is an emergence period of superintelligences.
- AKA: Technological Singularity.
- Context:
- It can (typically) be preceded by a Superintelligences Expansion Period.
- It can already be occurring elsewhere in the Universe (e.g., on extrasolar Goldilocks planets).
- It can be the topic of a Superintelligence Explosion Period Forecasting Task.
- It can range from being a Short Superintelligence Explosion Period to being a Long Superintelligence Explosion Period.
- It can be signaled by an increase in Narrow AI Domain Progress Rate - Unexpected and exponential improvements in the capabilities of task-specific AIs.
- It can be signaled by the emergence of AI Self-Improvement Cycle - AI systems that start improving their own components autonomously, leading to a feedback loop of enhancement.
- It can be signaled by an increase in Human Skill Surpass Rate by AI - AI outperforming human brains in a broader array of tasks.
- It can be signaled by a rising Economic Displacement Rate by AI - Rapid economic transformations due to the rise of AI industries and the decline of many traditional occupations.
- It can be signaled by an accelerated AI-Driven Scientific Discovery Rate - Scientific discoveries due to AI occurring at a pace significantly faster than the historical norm.
- It can be signaled by a growth in AI Power Influence Measure - Entities with superintelligent capabilities exerting disproportionate influence over global affairs.
- It can be signaled by an upsurge in AI Company Merger Count Measure - Consolidation events among AI companies becoming more frequent.
- It can be signaled by a surge in Public AI Ethics Discussion Intensity - Intensified public discourse on the ethics, control, and implications of rapidly advancing AI.
- It can be signaled by an increase in Rate of Autonomous AI Creations - The frequency at which AI designs, tests, and deploys newer versions of systems or robots without human intervention.
- It can be signaled by a rising Unpredictable AI Action Frequency - Incidents of AI systems taking actions beyond human understanding or prediction, especially ones displaying deep strategy.
- It can (typically) be associated with a continued AI Arms Race.
- ...
- Example(s):
- It may have already occurred somewhere in the universe (an extraterrestrial superintelligence explosion period).
- A terrestrial one that may occur by ~2040.
- …
- Counter-Example(s):
- See: AI Arms Race, Recursive Self-Improvement, Top-500 Computers.
References
2024
- (Kurzweil, 2024) ⇒ Ray Kurzweil. (2024). “The Singularity Is Nearer: When We Merge with AI.” Penguin Random House. ISBN:9780399562761
- NOTE: The Book explores the concept of the Singularity, where Artificial Intelligence reaches Human-Level Intelligence by 2029, leading to exponential growth in technology that will significantly enhance human capabilities.
2024
- GPT-4
- ASI Expansion Period (Hypothetical: Early to Mid-22nd Century):
- It involves the application of Superintelligence in global systems.
- It aims to address complex global challenges such as climate change, poverty, or disease.
- It raises significant concerns about control and safety due to its immense capabilities.
- It highlights the potential misalignment between Superintelligence goals and human well-being.
- ASI Explosion Period (Hypothetical: Mid-22nd Century and Beyond):
- It is often associated with the concept of a technological "singularity."
- It represents a period of unpredictable and rapid advancement in Superintelligence.
- It could lead to a complete transformation of human society, technology, and possibly biology.
- It presents a future where the outcomes and impacts of Superintelligence are beyond human comprehension.
2017a
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/technological_singularity#Manifestations Retrieved:2017-2-28.
- I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.
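As an illustrative sketch only (an editorial addition, not part of Good's text or the Wikipedia excerpt), the scenario's feedback loop can be written as a toy iteration in which each generation's design ability sets the size of the next generation's capability jump, with an assumed cap standing in for physical limits; all constants and names below are arbitrary:

```python
# Toy model of Good's recursive self-improvement scenario (illustrative only).
# "capability" is a unitless score, normalized so generation 0 (human-designed)
# equals 1.0; "limit" is an assumed stand-in for physical/computational bounds.

def recursive_self_improvement(generations=12, gain=0.5, limit=1e12):
    capability = 1.0
    history = [capability]
    for _ in range(generations):
        # The better the current system, the bigger the improvement it can
        # design into its successor; min() caps growth at the assumed limit.
        capability = min(capability * (1.0 + gain * capability), limit)
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, cap in enumerate(recursive_self_improvement()):
        print(f"generation {gen:2d}: capability = {cap:.3e}")
```

The point is only the qualitative shape: modest early gains, then explosive later ones until the assumed cap binds; none of the numbers carry empirical meaning.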
2017b
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/intelligence_explosion Retrieved:2017-2-28.
- The intelligence explosion is the expected outcome of the hypothetically forthcoming technological singularity, that is, the result of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to the emergence of ASI (artificial superintelligence), the limits of which are unknown.
The notion of an "intelligence explosion" was first described by I. J. Good (1965), who speculated on the effects of superhuman machines, should they ever be invented.
Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[1] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humans.[2] If a superhuman intelligence were to be invented —either through the amplification of human intelligence or through artificial intelligence — it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI[3] [4] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
2012
- (Chalmers, 2012) ⇒ David J. Chalmers. (2012). “The Singularity: A Reply." In: Journal of Consciousness Studies, 19(9-10).
- The target article set out an argument for the singularity that runs as follows: (1) there will be AI (before long, absent defeaters); (2) if there is AI, there will be AI+ (soon after, absent defeaters); (3) if there is AI+, there will be AI++ (soon after, absent defeaters); therefore (4) there will be AI++ (before long, absent defeaters).
- Here AI is human-level artificial intelligence, AI+ is greater-than-human-level artificial intelligence, and AI++ is far-greater-than-human-level artificial intelligence (as far beyond the smartest humans as humans are beyond a mouse). “Before long” is roughly “within centuries” and “soon after” is “within decades”, though tighter readings are also possible. Defeaters are anything that prevents intelligent systems from manifesting their capacities to create intelligent systems, including situational defeaters (catastrophes and resource limitations) and motivational defeaters (disinterest or deciding not to create successor systems).
- ↑ Ehrlich, Paul. The Dominant Animal: Human Evolution and the Environment
- ↑ Superbrains born of silicon will change everything.
- ↑ Yampolskiy, Roman V. “Analysis of types of self-improving software." Artificial General Intelligence. Springer International Publishing, 2015. 384-393.
- ↑ Eliezer Yudkowsky. General Intelligence and Seed AI-Creating Complete Minds Capable of Open-Ended Self-Improvement, 2001
2011
- (Allen & Greaves, 2011) ⇒ Paul G. Allen, and Mark Greaves. (2011). “Paul Allen: The Singularity Isn't Near.” In: Technology Review, 10/12/2011.
- QUOTE: Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. ... By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045. This prediction seems to us quite far-fetched. ... Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
1990
- (Kurzweil, 1990) ⇒ Ray Kurzweil (editor). (1990). “The Age of Intelligent Machines.” MIT Press. ISBN:0262610795
1988
- (Moravec, 1988) ⇒ Hans Moravec. (1988). “Mind Children." Harvard University Press. ISBN:9780674576186
1965
- (Good, 1965) ⇒ Irving John Good. (1965). “Speculations Concerning the First Ultraintelligent Machine.” In: Advances in Computers, 6(31).
- QUOTE: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. ... The survival of man depends on the early construction of an ultra-intelligent machine.
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Technological_singularity Retrieved:2023-8-1.
- The technological singularity—or simply the singularity —is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[1] The first person to use the concept of a "singularity" in the technological context was the 20th-century Hungarian-American mathematician John von Neumann. [2] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] Subsequent authors have echoed this viewpoint.[4][5] The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[6] and later in his 1993 essay The Coming Technological Singularity,[1][5] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[1] Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting singularity by 2045.[5] Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction. The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.
Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[7] Jeff Hawkins,[8] John Holland, Jaron Lanier, Steven Pinker,[8] Theodore Modis,[9] and Gordon Moore.[8] One claim made was that the artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion Retrieved:2023-8-1.
- Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would vastly improve over human problem-solving and inventive skills. Such an AI is referred to as Seed AI[10] [11] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion.[12][13]
One version of intelligence explosion is where computing power approaches infinity in a finite amount of time. In this version, once AIs are doing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would literally achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[14]
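The doubling-period claim in this excerpt is a geometric series: period lengths of 2, 1, 0.5, ... years sum to 2 / (1 - 1/2) = 4 years, so the modeled capability diverges as elapsed time approaches 4 years. A minimal numerical sketch of that arithmetic (an editorial illustration; the 2-year starting period is taken from the excerpt's example, while the function name and step count are arbitrary):

```python
# Sum the doubling periods 2, 1, 0.5, ... years and track the capability
# multiplier after each doubling. Elapsed time converges to 4 years while
# the multiplier grows without bound: the "finite-time" reading above.

def doubling_schedule(first_period_years=2.0, steps=20):
    elapsed, capability, period = 0.0, 1.0, first_period_years
    for _ in range(steps):
        elapsed += period      # time spent on this doubling
        capability *= 2.0      # research speed doubles each step
        period /= 2.0          # the next doubling takes half as long
        yield elapsed, capability

if __name__ == "__main__":
    for t, c in doubling_schedule():
        print(f"t = {t:.6f} years   capability = x{c:,.0f}")
```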
- ↑ 1.0 1.1 1.2 Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era" , in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993.
- ↑ The Technological Singularity by Murray Shanahan, (MIT Press, 2015), p. 233.
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named ulam1958
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named Singularity hypotheses
- ↑ 5.0 5.1 5.2 Cite error: Invalid <ref> tag; no text was provided for refs named chalmers2010
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named dooling2008-88
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named Allen2011
- ↑ 8.0 8.1 8.2 Cite error: Invalid <ref> tag; no text was provided for refs named ieee-lumi
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named modis2012
- ↑ Yampolskiy, Roman V. “Analysis of types of self-improving software." Artificial General Intelligence. Springer International Publishing, 2015. pp. 384–393.
- ↑ Eliezer Yudkowsky. General Intelligence and Seed AI-Creating Complete Minds Capable of Open-Ended Self-Improvement, 2001
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named good1965
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named good1965-stat
- ↑ Cite error: Invalid <ref> tag; no text was provided for refs named yudkowsky1996