Superintelligence Explosion Period


A Superintelligence Explosion Period is a hypothesized emergence period of superintelligences, typically driven by recursively self-improving AI (the scenario also known as the technological singularity).



References

2024

  • (GPT-4, 2024)
    • ASI Expansion Period (Hypothetical: Early to Mid-22nd Century):
      • It involves the application of Superintelligence in global systems.
      • It aims to address complex global challenges such as climate change, poverty, or disease.
      • It raises significant concerns about control and safety due to its immense capabilities.
      • It highlights the potential misalignment between Superintelligence goals and human well-being.
    • ASI Explosion Period (Hypothetical: Mid-22nd Century and Beyond):
      • It is often associated with the concept of a technological "singularity."
      • It represents a period of unpredictable and rapid advancement in Superintelligence.
      • It could lead to a complete transformation of human society, technology, and possibly biology.
      • It presents a future where the outcomes and impacts of Superintelligence are beyond human comprehension.

2017a

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/technological_singularity#Manifestations Retrieved:2017-2-28.
    • I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.

2017b

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/intelligence_explosion Retrieved:2017-2-28.
    • The intelligence explosion is the expected outcome of the hypothetically forthcoming technological singularity, that is, the result of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to the emergence of ASI (artificial superintelligence), the limits of which are unknown.

      The notion of an "intelligence explosion" was first described by I. J. Good (1965), who speculated on the effects of superhuman machines, should they ever be invented.

      Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[1] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humans.[2] If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI[3][4] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

  1. Ehrlich, Paul R. (2008). The Dominant Animal: Human Evolution and the Environment. Island Press.
  2. "Superbrains born of silicon will change everything."
  3. Yampolskiy, Roman V. (2015). "Analysis of Types of Self-Improving Software." Artificial General Intelligence. Springer International Publishing, pp. 384–393.
  4. Yudkowsky, Eliezer (2001). General Intelligence and Seed AI: Creating Complete Minds Capable of Open-Ended Self-Improvement.

2023a

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Technological_singularity Retrieved:2023-8-1.
    • The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[1]

      The first person to use the concept of a "singularity" in the technological context was the 20th-century Hungarian-American mathematician John von Neumann.[2] Stanislaw Ulam reports in 1958 an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[3] Subsequent authors have echoed this viewpoint.[4][5]

      The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[6] and later in his 1993 essay The Coming Technological Singularity,[1][5] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[1] Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting singularity by 2045.[5]

      Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction. The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

      Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[7] Jeff Hawkins,[8] John Holland, Jaron Lanier, Steven Pinker,[8] Theodore Modis,[9] and Gordon Moore.[8] One claim made was that artificial intelligence growth is likely to run into diminishing returns instead of accelerating ones, as was observed in previously developed human technologies.

2023b

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion Retrieved:2023-8-1.
    • Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.

      If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would vastly improve over human problem-solving and inventive skills. Such an AI is referred to as Seed AI[10][11] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
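      The recursive self-improvement loop described above can be made concrete with a toy simulation. This is only a sketch: the growth law, the 10% base improvement rate, and the capability cap standing in for physical limits are all assumptions chosen for illustration, not claims from the quoted passage.

        # Toy model of recursive self-improvement (all parameters are illustrative assumptions).
        # Each generation designs a successor; the improvement factor grows with the
        # designer's own capability, so progress accelerates until an assumed cap
        # (a stand-in for limits from physics or computation theory) is reached.
        def recursive_self_improvement(capability=1.0, human_level=1.0,
                                       physical_limit=1e6, max_generations=200):
            history = [capability]
            for _ in range(max_generations):
                if capability >= physical_limit:
                    break
                # Assumed growth law: a 10% base gain plus a bonus proportional to how
                # far the current system already exceeds the human level.
                capability *= 1.10 + 0.01 * (capability / human_level)
                history.append(capability)
            return history

        trajectory = recursive_self_improvement()
        print(f"generations until the assumed cap: {len(trajectory) - 1}")
        print(f"final capability (multiples of human level): {trajectory[-1]:.3e}")

      Under this assumed law, early generations improve slowly while later ones compound on their own gains, which is the qualitative pattern of each machine designing a yet more capable machine.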

      I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion:[12][13]

      "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

      One version of intelligence explosion is one in which computing power approaches infinity in a finite amount of time. In this version, once AIs are doing the research to improve themselves, speed doubles after, e.g., 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, and so on, where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would literally achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[14]
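      The 4-year figure in this scenario is just the sum of a geometric series of doubling periods (2 years, then 1, then 1/2, and so on). A minimal arithmetic check, taking only the 2-year initial period from the scenario:

        # Sum the doubling periods described above: 2 + 1 + 0.5 + 0.25 + ... years.
        initial_period_years = 2.0
        total, period = 0.0, initial_period_years
        for _ in range(50):            # 50 halvings are plenty to show convergence
            total += period
            period /= 2.0
        print(f"sum of the first 50 doubling periods: {total:.6f} years")   # ~4.000000

        # Closed form: a geometric series with ratio 1/2 sums to
        # initial_period_years / (1 - 1/2) = 4 years.
        print(initial_period_years / (1.0 - 0.5))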

  1. Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era." In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22.
  2. Shanahan, Murray (2015). The Technological Singularity. MIT Press, p. 233.
  3. Ulam, Stanislaw (1958). "Tribute to John von Neumann." Bulletin of the American Mathematical Society, 64 (3, part 2), pp. 1–49.
  4. Eden, Amnon H.; Moor, James H.; Søraker, Johnny H.; Steinhart, Eric (eds.) (2012). Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
  5. Chalmers, David (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies, 17 (9–10), pp. 7–65.
  6. Dooling, Richard (2008). Rapture for the Geeks: When AI Outsmarts IQ, p. 88.
  7. Allen, Paul; Greaves, Mark (2011). "The Singularity Isn't Near." MIT Technology Review.
  8. "Tech Luminaries Address Singularity." IEEE Spectrum, June 2008.
  9. Modis, Theodore (2012). "Why the Singularity Cannot Happen." In Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
  10. Yampolskiy, Roman V. (2015). "Analysis of Types of Self-Improving Software." Artificial General Intelligence. Springer International Publishing, pp. 384–393.
  11. Yudkowsky, Eliezer (2001). General Intelligence and Seed AI: Creating Complete Minds Capable of Open-Ended Self-Improvement.
  12. Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine." Advances in Computers, Vol. 6, pp. 31–88.
  13. Good, I. J. (1965), op. cit.
  14. Yudkowsky, Eliezer (1996). "Staring into the Singularity."