Roman Yampolskiy (1979-)
Roman Yampolskiy (1979-) is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, the security of cyberworlds, and artificial intelligence safety.
- See: AI Explainability, Behavioral Biometrics, Intellectulogy Solutions Inc., Existential ASI Risk, Theoretical Limits in AI.
References
2024
- https://youtu.be/NNr6gPelJ3E?si=bu7bypQEBs-xsCdL
- NOTES:
- Roman posits that there is an over 99% probability that advanced AGI will ultimately destroy human civilization, emphasizing the extreme difficulty in maintaining control over AGI once it surpasses a certain capability threshold.
- He categorizes risks beyond mere extinction, identifying suffering risks (s-risks) where individuals wish for death despite being kept alive, and "ikigai risks (i-risks)" where humanity loses its sense of meaning and purpose in a world dominated by AI.
- Roman argues that current AI safety efforts, such as verification and mathematical proofs, are insufficient to completely solve the control problem for AGI, though they may reduce risks for less advanced systems.
- He advocates for a full pause or moratorium on AGI development until the control problem is resolved, potentially extending to a permanent ban if an adequate solution proves unattainable, though he is skeptical that such a pause will actually happen.
- Roman entertains the simulation hypothesis, suggesting it is highly probable that we are living in a simulation, and emphasizes the importance of determining whether an AI can "break out" of a simulated environment.
- He acknowledges the possibility of being wrong about the dangers of AGI, expressing his hope to be proven incorrect, yet insists on the importance of addressing these issues due to the significant potential downside risks (a toy expected-loss sketch follows these notes).
- The conversation underscores the need for serious consideration of AI's long-term trajectories and risks, advocating for an open dialogue between AI developers and those focused on mitigating catastrophic risks to navigate future challenges effectively.
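The precautionary reasoning summarized in these notes can be framed as a simple expected-loss comparison. The following is a minimal sketch with purely hypothetical placeholder numbers (they are not figures from the interview); it only illustrates why even a modest assumed probability of a very large loss can dominate a certain but much smaller benefit.
```python
# Toy expected-loss comparison illustrating the style of argument in the notes
# above. All probabilities and magnitudes are hypothetical placeholders.

def expected_loss(p_catastrophe: float, loss_catastrophe: float,
                  loss_baseline: float = 0.0) -> float:
    """Expected loss when a catastrophic outcome occurs with probability p."""
    return p_catastrophe * loss_catastrophe + (1 - p_catastrophe) * loss_baseline

p = 0.01                   # hypothetical 1% chance of catastrophe
catastrophic_loss = 1e12   # placeholder magnitude for a civilizational-scale loss
certain_benefit = 1e6      # placeholder benefit of deploying the system anyway

net = certain_benefit - expected_loss(p, catastrophic_loss)
print(f"Net expected value: {net:,.0f}")  # negative => the downside dominates
```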
2024
- Perplexity
- Roman Vladimirovich Yampolskiy is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence (AI) safety. Key points from the cited sources:
- Background
- Born on August 13, 1979 in Russia [1]
- Holds a PhD from the University at Buffalo (2008) [1]
- Currently a tenured associate professor in the department of Computer Engineering and Computer Science at the University of Louisville [1][8]
- Director of the Cyber Security Lab at the University of Louisville [1][7]
- AI Safety Research
- Warns about the potential existential risk from advanced AI and advocates for research into "boxing" or containing AI systems [1][10] (a minimal containment sketch follows this reference entry)
- Proposed introducing "Achilles' heels" into potentially dangerous AI systems to limit their capabilities [1]
- Applies a "security mindset" to AI safety by analyzing potential outcomes to evaluate safety mechanisms [1]
- Launched the field of "intellectology" in 2015 to analyze the forms and limits of intelligence [1]
- Published the books "Artificial Superintelligence: a Futuristic Approach" and "AI Safety and Security" [1][13]
- His early work on AI safety engineering, AI containment, and AI accidents is seminal and highly cited in the field [8]
- Current research focuses on theoretical limits to explainability, predictability and controllability of advanced AI systems [8][14][15]
- Positions and Recognition
- Fellow of the Machine Intelligence Research Institute (2010) and the Foresight Institute (2019), and Research Associate of the Global Catastrophic Risk Institute (2018) [8]
- Published over 100 peer-reviewed papers and given over 100 public talks on AI safety [8]
- Served on program committees of AI safety conferences and journal editorial boards [8]
- Received awards for teaching and service to the AI safety community [8]
- In summary, Roman Yampolskiy is a leading researcher in the field of AI safety, focusing on analyzing and mitigating potential risks from advanced AI systems through theoretical work, security approaches, and philosophical considerations. [1][8][13][14][15]
- Citations:
- [1] https://en.wikipedia.org/wiki/Roman_Yampolskiy
- [2] https://www.linkedin.com/in/romanyam
- [3] https://daily-philosophy.com/interview-roman-yampolskiy-dangers-of-ai/
- [4] https://futureoflife.org/podcast/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/
- [5] https://www.youtube.com/watch?v=x_3bKuBHYWw
- [6] https://www.youtube.com/watch?v=-TwwzSTEWsw
- [7] http://cecs.louisville.edu/ry/
- [8] https://futureoflife.org/person/prof-roman-yampolskiy/
- [9] https://scholar.google.com/citations?hl=en&user=0_Rq68cAAAAJ
- [10] https://screenrant.com/control-superintelligent-ai-impossible-why/
- [11] https://ieet.org/fellows/
- [12] https://www.goodreads.com/book/show/197554072-ai
- [13] https://www.routledge.com/Artificial-Intelligence-Safety-and-Security/Yampolskiy/p/book/9780815369820
- [14] https://journals.riverpublishers.com/index.php/JCSANDM/article/view/16219
- [15] https://www.routledge.com/AI-Unexplainable-Unpredictable-Uncontrollable/Yampolskiy/p/book/9781032576268
- [16] https://twitter.com/romanyam?lang=en
- [17] https://irishtechnews.ie/humanitys-biggest-gamble-with-roman-yampolskiy/
- [18] https://podcasters.spotify.com/pod/show/irish-tech-news/episodes/The-future-is-agile-the-time-to-take-responsibility-for-our-future-is-NOW-e14dr3q
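As a conceptual illustration of the "AI boxing" idea referenced above, the sketch below mediates every interaction with an untrusted model through a narrow, whitelisted interface and a crude output-bandwidth cap. It is a toy under assumed names (`boxed_call`, `untrusted_model`, and `ALLOWED_QUERIES` are hypothetical), not Yampolskiy's proposal or a real containment mechanism.
```python
# Minimal "boxing" sketch: only whitelisted query types reach the boxed model,
# and its responses are truncated to limit the information channel.

ALLOWED_QUERIES = {"summarize", "classify"}   # hypothetical whitelist
MAX_OUTPUT_CHARS = 500                        # crude output-bandwidth limit

def boxed_call(model, query_type: str, payload: str) -> str:
    """Forward a request to the boxed model only if it matches the whitelist,
    then truncate the response."""
    if query_type not in ALLOWED_QUERIES:
        raise PermissionError(f"query type {query_type!r} is not permitted")
    response = model(query_type, payload)
    return str(response)[:MAX_OUTPUT_CHARS]

# Usage with a stand-in for the untrusted system:
def untrusted_model(query_type, payload):
    return f"[{query_type}] {payload[:50]}"

print(boxed_call(untrusted_model, "summarize", "some long document text ..."))
```
The point of the sketch is architectural: every exchange with the boxed system passes through a mediator whose policy is set outside the system itself, which is the core of the containment idea (and also why Yampolskiy argues it cannot be fully relied on for sufficiently capable systems).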
2024
- (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Roman_Yampolskiy Retrieved:2024-5-21.
- Roman Vladimirovich Yampolskiy (born 13 August 1979) is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo (2008). He is currently the director of Cyber Security Laboratory in the department of Computer Engineering and Computer Science at the Speed School of Engineering. Yampolskiy is an author of some 100 publications, including numerous books.
2024
- (Yampolskiy, 2024) ⇒ Roman Yampolskiy. (2024). “AI: Unexplainable, Unpredictable, Uncontrollable.” CRC Press. ISBN:9781032576268
2019
- (Yampolskiy, 2019) ⇒ Roman V. Yampolskiy. (2019). "Predicting Future AI Failures from Historic Examples." Foresight 21, no. 1 (2019): 138-152.
- NOTE: It focuses on drawing lessons from historical AI failures to forecast and prevent similar incidents in the future.
2018
- (Brundage et al., 2018) ⇒ Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei. (2018). "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation." arXiv preprint arXiv:1802.07228.
- NOTE: It explores potential threats and strategies for safeguarding against the malicious use of AI.
2016
- (Yampolskiy & Spellchecker, 2016) ⇒ Roman V. Yampolskiy, and M. S. Spellchecker. (2016). "Artificial Intelligence Safety and Cybersecurity: A Timeline of AI Failures." arXiv preprint arXiv:1610.07997.
- NOTE: It examines past AI failures to identify patterns and potential future risks in AI safety and cybersecurity.
2008
- (Yampolskiy & Govindaraju, 2008) ⇒ Roman V. Yampolskiy, and Venu Govindaraju. (2008). "Behavioural biometrics: a survey and classification." International Journal of Biometrics 1, no. 1 (2008): 81-113.
- NOTE: It provides a comprehensive overview of behavioral biometrics, categorizing different methods and discussing their effectiveness.
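As an illustration of one behavioral biometric covered by the survey, the sketch below builds a keystroke-dynamics template from inter-key latencies and verifies a new sample by its deviation from that template. The feature choice, threshold, and timing data are hypothetical placeholders, not the paper's method.
```python
# Minimal keystroke-dynamics sketch: enroll a user from a few typing samples
# of the same phrase, then verify a new sample by its distance to the template.

from statistics import mean

def timing_features(key_down_times: list[float]) -> list[float]:
    """Inter-key latencies (seconds) between consecutive key presses."""
    return [t2 - t1 for t1, t2 in zip(key_down_times, key_down_times[1:])]

def enroll(samples: list[list[float]]) -> list[float]:
    """Average the latency vectors of several typing samples into a template."""
    return [mean(col) for col in zip(*samples)]

def verify(template: list[float], sample: list[float], threshold: float = 0.05) -> bool:
    """Accept if the mean absolute deviation from the template is below threshold."""
    deviation = mean(abs(a - b) for a, b in zip(template, sample))
    return deviation < threshold

# Hypothetical enrollment and verification for one short pass-phrase:
enrolled = enroll([timing_features([0.00, 0.18, 0.35, 0.55]),
                   timing_features([0.00, 0.17, 0.36, 0.53])])
probe = timing_features([0.00, 0.19, 0.34, 0.56])
print(verify(enrolled, probe))  # True for a closely matching typing rhythm
```
A real system would use richer features (dwell and flight times, digraph statistics) and a trained classifier rather than a fixed distance threshold; the sketch only conveys the enroll-then-verify structure common to the behavioral biometrics the survey classifies.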