The Machine Intelligence Research Institute (MIRI)
The Machine Intelligence Research Institute (MIRI) is a non-profit interdisciplinary research center founded in 2000 to ensure that the creation of smarter-than-human intelligence has a positive impact.
- Context:
- It is headquartered in Berkeley, California.
- …
- Counter-Example(s):
- See: Technological Singularity, Eliezer Yudkowsky, Friendly AI, Recursive Self Improvement, Futures Studies, Luke Muehlhauser.
References
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute Retrieved: 2014-11-01.
- The Machine Intelligence Research Institute (MIRI) is a non-profit organization founded in 2000 to research safety issues related to the development of Strong AI. The organization advocates ideas initially put forth by I. J. Good and Vernor Vinge regarding an “intelligence explosion”, or Singularity, which MIRI thinks may follow the creation of sufficiently advanced AI. [1] Research fellow Eliezer Yudkowsky coined the term Friendly AI to refer to a hypothetical super-intelligent AI that has a positive impact on humanity. [2] The organization has argued that to be “Friendly” a self-improving AI needs to be constructed in a transparent, robust, and stable way. [3] MIRI was formerly known as the Singularity Institute, and before that as the Singularity Institute for Artificial Intelligence. Luke Muehlhauser [4] is Executive Director. Inventor and futures studies author Ray Kurzweil served as one of its directors from 2007 to 2010. [5] The institute maintains an advisory board whose members include Oxford philosopher Nick Bostrom, biomedical gerontologist Aubrey de Grey, PayPal co-founder Peter Thiel, and Foresight Nanotech Institute co-founder Christine Peterson. It is tax exempt under Section 501(c)(3) of the United States Internal Revenue Code, and has a Canadian branch, SIAI-CA, formed in 2004 and recognized as a Charitable Organization by the Canada Revenue Agency.
- ↑ Intelligence Explosion Microeconomics writes: "MIRI is highly interested in trustworthy progress on this question that offers to resolve our actual internal debates and policy issues...", suggesting that MIRI considers whether an intelligence explosion will occur to be an open research problem.
- ↑ What is Friendly AI?
- ↑ MIRI Overview
- ↑ About Us
- ↑ I, Rodney Brooks, Am a Robot
2014
- http://intelligence.org/about/
- MIRI exists to ensure that the creation of smarter-than-human intelligence has a positive impact.
2014
- http://intelligence.org/research/
- MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. We aim to make intelligent machines behave as we intend even in the absence of immediate human supervision. Much of our current research deals with reflection, an AI’s ability to reason about its own behavior in a principled rather than ad-hoc way. We focus our research on AI approaches that can be made transparent (e.g. principled decision algorithms, not genetic algorithms), so that humans can understand why the AIs behave as they do.
2014
- (Hawking, Russell, et al., 2014) ⇒ Stephen Hawking, Stuart J. Russell, Max Tegmark, and Frank Wilczek. (2014). “Transcendence Looks at the Implications of Artificial Intelligence - but are we taking AI seriously enough?” The Independent, May 2, 2014.
- … Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.