Superintelligence Explosion Period

From GM-RKB

A Superintelligence Explosion Period is a critical transition period during which AI systems attain the capability to autonomously enhance their own cognitive architectures, triggering a self-perpetuating cycle of increasingly rapid intelligence improvement.



References

2025-03-12

[1] http://www.gabormelli.com/RKB/Superintelligence_Explosion_Period
[2] https://www.linkedin.com/pulse/amplification-intelligence-recursive-self-improvement-gary-ramah-0wjpc
[3] https://www.zignuts.com/blog/what-is-agi
[4] https://www.gabormelli.com/RKB/AI_Recursive_Self-Improvement
[5] https://www.allaboutai.com/ai-glossary/recursive-self-improvement/
[6] https://www.forbes.com/sites/gilpress/2024/03/29/artificial-general-intelligence-or-agi-a-very-short-history/
[7] https://www.linkedin.com/pulse/recursion-foundation-seed-ai-architectures-mechanisms-gary-ramah-el2jc
[8] https://en.wikipedia.org/wiki/Technological_singularity
[9] https://www.lesswrong.com/w/recursive-self-improvement
[10] https://shelf.io/blog/the-evolution-of-ai-introducing-autonomous-ai-agents/
[11] https://www.sapien.io/glossary/definition/intelligence-explosion
[12] https://en.wikipedia.org/wiki/Artificial_general_intelligence
[13] https://www.greaterwrong.com/posts/byKF3mnaNRrbkDPWv/evidence-on-recursive-self-improvement-from-current-ml

2024

  • GPT-4
    • ASI Expansion Period (Hypothetical: Early to Mid-22nd Century):
      • It involves the application of Superintelligence in global systems.
      • It aims to address complex global challenges such as climate change, poverty, or disease.
      • It raises significant concerns about control and safety due to its immense capabilities.
      • It highlights the potential misalignment between Superintelligence goals and human well-being.
    • ASI Explosion Period (Hypothetical: Mid-22nd Century and Beyond):
      • It is often associated with the concept of a technological "singularity."
      • It represents a period of unpredictable and rapid advancement in Superintelligence.
      • It could lead to a complete transformation of human society, technology, and possibly biology.
      • It presents a future where the outcomes and impacts of Superintelligence are beyond human comprehension.

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion Retrieved:2023-8-1.
    • Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.

If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would vastly surpass human problem-solving and inventive skills. Such an AI is referred to as Seed AI[1][2] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

      I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion:[3][4]

      One version of intelligence explosion is where computing power approaches infinity in a finite amount of time. In this version, once AIs are doing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would literally achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[5]
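The arithmetic of this scenario can be checked directly: the doubling periods (2 years, 1 year, 6 months, ...) form a geometric series whose sum is finite even though the speed attained is unbounded. The snippet below is an illustrative sketch of that calculation; the variable names and the 60-term cutoff are our own choices, not part of the source.

```python
# Doubling periods halve each time: 2, 1, 0.5, 0.25, ... years.
# Their sum converges to 2 / (1 - 1/2) = 4 years, even though the
# number of doublings (and hence the speed reached) grows without bound.
periods = [2.0 / 2**k for k in range(60)]  # first 60 doubling periods
total_time = sum(periods)                  # approaches 4 years
speedup = 2.0 ** len(periods)              # speed multiplier after those doublings
```

After only 60 doublings the elapsed time is already within 10^-17 of 4 years, while the speed multiplier exceeds 10^18, which is the sense in which the process "earns the name singularity" in finite time.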

2017a

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/technological_singularity#Manifestations Retrieved:2017-2-28.
    • I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.
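Good's loop can be sketched as a toy iteration in which each machine designs a successor whose improvement is proportional to its own capability, until a ceiling standing in for the physical limits mentioned above is reached. Everything here (the function name, the `gain` and `cap` parameters) is an illustrative assumption, not an established model.

```python
def recursive_self_improvement(capability=1.0, gain=0.5, cap=1e6, steps=40):
    """Toy model of Good's scenario: each generation designs a successor
    that is better by a factor proportional to its own capability, until
    a hard ceiling (standing in for physical limits) binds."""
    history = [capability]
    for _ in range(steps):
        capability = min(capability * (1.0 + gain), cap)  # geometric growth, capped
        history.append(capability)
    return history

trajectory = recursive_self_improvement()
```

Starting from human-level capability 1.0, growth is geometric (1.0, 1.5, 2.25, ...) until the ceiling binds; the qualitative point is that the curve is dominated by its final few iterations, not its early ones.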

2017b

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/intelligence_explosion Retrieved:2017-2-28.
    • The intelligence explosion is the expected outcome of the hypothetically forthcoming technological singularity, that is, the result of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to the emergence of ASI (artificial superintelligence), the limits of which are unknown.

The notion of an "intelligence explosion" was first described by I. J. Good, who speculated on the effects of superhuman machines, should they ever be invented:

Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[6] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humans.[7] If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI[1][2] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

2012

  1. Yampolskiy, Roman V. "Analysis of types of self-improving software." Artificial General Intelligence. Springer International Publishing, 2015. pp. 384–393.
  2. Yudkowsky, Eliezer. General Intelligence and Seed AI: Creating Complete Minds Capable of Open-Ended Self-Improvement, 2001.
  3. Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine." Advances in Computers, vol. 6.
  4. Good, I. J. (1965), op. cit.
  5. Yudkowsky, Eliezer (1996). "Staring into the Singularity."
  6. Ehrlich, Paul. The Dominant Animal: Human Evolution and the Environment.
  7. "Superbrains born of silicon will change everything."

2011

1990

1988

1965