Leopold Aschenbrenner

From GM-RKB

A Leopold Aschenbrenner is a person (an AI researcher and former member of OpenAI's Superalignment team).



References

2024

  • https://the-decoder.com/openai-fires-two-ai-safety-researchers-for-alleged-leaks/
    • QUOTE: OpenAI has fired two AI safety researchers for allegedly leaking confidential information after an internal power struggle ... One of those fired was Leopold Aschenbrenner, a former member of the "Superalignment" team, which focuses on the governance of advanced artificial intelligence and is tasked with developing techniques to control and manage a potential super-AI. The other fired employee, Pavel Izmailov, worked on AI reasoning and was also part of the safety team for a time, ...

2023

  • (Burns et al., 2023) ⇒ Burns, C., Izmailov, P., Kirchner, J.H., Baker, B., Gao, L., Aschenbrenner, L., et al. (2023). "Weak-to-strong generalization: Eliciting strong capabilities with weak supervision." arXiv preprint arXiv:2312.09390.
    • QUOTE: “Weak-to-strong generalization aims to develop robust AI capabilities with minimal supervision, focusing on improving alignment through scalable oversight.”

2020

  • (Aschenbrenner, 2020) ⇒ Aschenbrenner, Leopold. (2020). "Existential Risk and Growth." Global Priorities Institute (globalprioritiesinstitute.org), GPI Working Paper.
    • QUOTE: “Existential risk and growth explores the long-term implications of economic growth on global existential risks, advocating for balanced progress to mitigate potential dangers.”