Eliezer Yudkowsky
Eliezer Yudkowsky is an American AI researcher and writer known for his work on AI safety and friendly artificial intelligence.
- See: MIRI, Existential Risk from AGI, Machine Intelligence Research Institute, Friendly Artificial Intelligence, AI Safety, LessWrong, Intelligence Explosion.
References
- Personal Homepage: http://yudkowsky.net/
- Google Scholar Author Page: http://scholar.google.com/scholar?q=Eliezer%20Yudkowsky
2019
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Retrieved: 2019-3-28.
- Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American AI researcher and writer best known for popularizing the idea of friendly artificial intelligence. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. He is an autodidact, never having attended high school or college. His work on the prospect of a runaway intelligence explosion was an influence on Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.
2008
- (Yudkowsky, 2008) ⇒ Eliezer Yudkowsky. (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In: Global Catastrophic Risks, 1.
2003
- (Yudkowsky, 2003) ⇒ Eliezer Yudkowsky. (2003). “Creating Friendly AI.”