Leopold Aschenbrenner
Leopold Aschenbrenner is a person.
- Context:
- They have conducted research on AGI Alignment, AI-Based Economic Growth, and Existential AI Risk.
- ...
- Example(s):
- See: AI Alignment, Global Priorities Institute, Economic Growth, Existential Risks, Effective Altruism, Military Breakout, Automated AI Researcher.
References
2024
- (Aschenbrenner & Patel, 2024) ⇒ Leopold Aschenbrenner and Dwarkesh Patel. (2024). “2027 AGI, China/US Super-Intelligence Race, & The Return of History.” In: Dwarkesh Podcast, June 4, 2024.
- (Aschenbrenner, 2024) ⇒ Leopold Aschenbrenner. (2024). “Situational Awareness: The Decade Ahead.”
2024
- (The Decoder, 2024) ⇒ https://the-decoder.com/openai-fires-two-ai-safety-researchers-for-alleged-leaks/
- QUOTE: OpenAI has fired two AI safety researchers for allegedly leaking confidential information after an internal power struggle ... One of those fired was Leopold Aschenbrenner, a former member of the "Superalignment" team, which focuses on the governance of advanced artificial intelligence and is tasked with developing techniques to control and manage a potential super-AI. The other fired employee, Pavel Izmailov, worked on AI reasoning and was also part of the safety team for a time, ...
2023
- (Burns et al., 2023) ⇒ Burns, C., Izmailov, P., Kirchner, J.H., Baker, B., Gao, L., Aschenbrenner, L., et al. (2023). "Weak-to-strong generalization: Eliciting strong capabilities with weak supervision." arXiv preprint arXiv:2312.09390.
- QUOTE: “Weak-to-strong generalization aims to develop robust AI capabilities with minimal supervision, focusing on improving alignment through scalable oversight.”
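- The following is a minimal, hypothetical Python sketch of the weak-to-strong setup described in the quote above, using generic scikit-learn models and synthetic data rather than the paper's actual models or benchmarks: a small "weak supervisor" is fit on ground-truth labels, a higher-capacity "strong student" is then trained only on the weak model's predicted labels, and both are compared on held-out ground truth.

```python
# Illustrative weak-to-strong generalization sketch (assumed models and data,
# not the setup from Burns et al., 2023).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification task standing in for a real capability benchmark.
X, y = make_classification(n_samples=6000, n_features=40, n_informative=10, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=1000, random_state=0)
X_student, X_test, y_student, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Weak supervisor: a deliberately limited model trained on a small ground-truth set.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# Strong student: a higher-capacity model trained only on the weak model's (imperfect) labels.
weak_labels = weak.predict(X_student)
strong = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
strong.fit(X_student, weak_labels)

# If the student beats its supervisor on held-out ground truth, it has
# generalized beyond the weak supervision it was trained on.
print("weak supervisor accuracy:", weak.score(X_test, y_test))
print("strong student accuracy: ", strong.score(X_test, y_test))
```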
2020
- (Aschenbrenner, 2020) ⇒ Leopold Aschenbrenner. (2020). “Existential Risk and Growth.” Global Priorities Institute (GPI) Working Paper, globalprioritiesinstitute.org.
- QUOTE: “Existential risk and growth explores the long-term implications of economic growth on global existential risks, advocating for balanced progress to mitigate potential dangers.”