Superintelligences Expansion Period
A Superintelligences Expansion Period is a time period during which superintelligences expand into the Universe.
- Context:
- It can (typically) be preceded by a Superintelligence Explosion Period.
- It can (often) involve significant advancements in AI Hardware, AI Software, and Autonomous Robots.
- It can conclude when all accessible Free Energy has been depleted.
- ...
- Example(s):
- It may have already occurred elsewhere in the universe (in an extraterrestrial superintelligence expansion period).
- …
- Counter-Example(s):
- See: Machine Productivity, Machine Capability.
References
2024
- GPT-4
- ASI Emergence Period (Hypothetical: Late 21st Century - Early 22nd Century):
- It follows the advanced development of AGI, potentially occurring in the late 21st or early 22nd century.
- It marks the transition to an intelligence far superior to human cognition.
- It involves the development of entities that surpass human abilities in all domains.
- It represents the initial phase of true Superintelligence.
- ASI Expansion Period (Hypothetical: Early to Mid-22nd Century):
- It involves the application of Superintelligence in global systems.
- It aims to address complex global challenges such as climate change, poverty, or disease.
- It raises significant concerns about control and safety due to its immense capabilities.
- It highlights the potential misalignment between Superintelligence goals and human well-being.
- ASI Explosion Period (Hypothetical: Mid-22nd Century and Beyond):
- It is often associated with the concept of a technological "singularity."
- It represents a period of unpredictable and rapid advancement in Superintelligence.
- It could lead to a complete transformation of human society, technology, and possibly biology.
- It presents a future where the outcomes and impacts of Superintelligence are beyond human comprehension.
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion Retrieved:2014-2-22.
- The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature. Since the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable. The first use of the term "singularity" in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain. Proponents of the singularity typically postulate an "intelligence explosion",[1] [2] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass that of any human.
Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030. At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there's an 80% probability that the singularity will occur between 2017 and 2112.
- ↑ David Chalmers on Singularity, Intelligence Explosion. April 8th, 2010. Singularity Institute for Artificial Intelligence: http://singinst.org/blog/2010/04/08/david-chalmers-on-singularity-intelligence-explosion/
- ↑ Editor's Blog Why an Intelligence Explosion is Probable By: Richard Loosemore and Ben Goertzel. March 7, 2011; hplusmagazine: http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/
2014
- (Bostrom, 2014) ⇒ Nick Bostrom. (2014). “Superintelligence: Paths, Dangers, Strategies.” Oxford University Press. ISBN:978-0199678112
- QUOTE: If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
2013
- (Chomsky, 2013) ⇒ Noam Chomsky. (2013). “The Singularity is Science Fiction!” http://www.youtube.com/watch?v=0kICLG4Zg8s
- QUOTE: we are "eons away" from building an AGI ... Singularity is "science fiction"
2012
- (Chalmers, 2012) ⇒ David J. Chalmers. (2012). “The Singularity: A Reply." In: Journal of Consciousness Studies, 19(9-10).
- The target article set out an argument for the singularity as follows.
- Here AI is human-level artificial intelligence, AI+ is greater-than-human-level artificial intelligence, and AI++ is far-greater-than-human-level artificial intelligence (as far beyond smartest humans as humans are beyond a mouse). “Before long” is roughly “within centuries” and “soon after” is “within decades”, though tighter readings are also possible. Defeaters are anything that prevents intelligent systems from manifesting their capacities to create intelligent systems, including situational defeaters (catastrophes and resource limitations) and motivational defeaters (disinterest or deciding not to create successor systems).
2011
- (Allen & Greaves, 2011) ⇒ Paul G. Allen, and Mark Greaves. (2011). “Paul Allen: The Singularity Isn't Near.” In: Technology Review, 10/12/2011.
- QUOTE: Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. ... By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045. This prediction seems to us quite far-fetched. ... Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.
2010
- (Chalmers, 2010) ⇒ David Chalmers. (2010). “The Singularity: A Philosophical Analysis.” In: Journal of Consciousness Studies, 17(9-10).
- QUOTE: What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the 'singularity'.
2008
- (Pinker, 2008) ⇒ Steven Pinker. (2008). In: IEEE Spectrum. http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity
- QUOTE: There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.
2005
- (Kurzweil, 2005) ⇒ Ray Kurzweil. (2005). “The Singularity is Near: When Humans Transcend Biology.” Penguin.
2000
- (Joy, 2000) ⇒ Bill Joy. (2000). “Why the Future Doesn't Need Us.” In: Wired Magazine, 8.04
1999
- (Kurzweil, 1999) ⇒ Ray Kurzweil. (1999). “The Age of Spiritual Machines: When Computers Exceed Human Intelligence.” Viking Press. ISBN:0-670-88217-8
1993
- (Vinge, 1993) ⇒ Vernor Vinge. (1993). “The Coming Technological Singularity.” In: Whole Earth Review Journal, 81.
1990
- (Kurzweil, 1990) ⇒ Ray Kurzweil (editor). (1990). “The Age of Intelligent Machines.” MIT Press. ISBN:0262610795
1988
- (Moravec, 1988) ⇒ Hans Moravec. (1988). “Mind Children.” Harvard University Press. ISBN:9780674576186
1965
- (Good, 1965) ⇒ Irving John Good. (1965). “Speculations Concerning the First Ultraintelligent Machine.” In: Advances in Computers Journal, 6(31).