Situational Awareness: The Decade Ahead (2024)
Subject Headings: AGI Prediction, Automated AI Researcher, Military Breakout, Arms Race.
Notes
- The paper focuses on endogenous factors, such as the rapid advancement of AI technology from GPT-4 to AGI and superintelligence; the massive investments and resource allocation required for AGI development (e.g., trillion-dollar compute clusters and expanded energy production); and the potential for recursive self-improvement in AGI systems, which could lead to exponential growth in AI capabilities.
- The paper also examines exogenous factors, such as the geopolitical competition between the United States and China in the AGI race, whose outcome could determine global power dynamics; the anticipated involvement of the US government in AGI projects, driven by national security concerns; and the external challenges of securing AGI technology against espionage and of aligning superintelligent AI systems with human values.
- The paper assumes continued rapid progress in AI capabilities, extrapolating from trends like the jump from GPT-2 to GPT-4 in just 4 years. It presents evidence of AI systems already reaching or exceeding human-level performance on many benchmarks to support the plausibility of this trend continuing to AGI.
- The paper assumes that automating AI research itself using AI systems will dramatically accelerate progress, potentially compressing a decade of algorithmic advances into less than a year. The paper argues this is realistic given the ability to run millions of AI researcher models working at superhuman speed.
- The paper assumes that superintelligent AI will provide a decisive military and economic advantage to whoever develops it first, citing historical examples like the impact of technology leads in the Gulf War. This assumption motivates the paper's emphasis on the AGI race between the US and China.
- The paper assumes that current alignment techniques like reinforcement learning from human feedback (RLHF) will break down for superhuman systems and that fundamentally new approaches will be needed, citing the difficulty humans would have evaluating the outputs of vastly superhuman AI.
- Overall, the paper relies heavily on extrapolating current trendlines and drawing analogies to historical technological developments like the Manhattan Project. While acknowledging significant uncertainty, the author finds it more likely than not that the trendlines will continue and the historical analogies will prove apt, leading to transformative AGI within the decade.
Table of Contents
- Introduction 3
- History is live in San Francisco.
- I. From GPT-4 to AGI: Counting the OOMs 7
- AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.
- II. From AGI to Superintelligence: the Intelligence Explosion 46
- AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.
- III. The Challenges 74
- IIIa. Racing to the Trillion-Dollar Cluster 75
- The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by 10s of percent, will be intense.
- IIIb. Lock Down the Labs: Security for AGI 89
- The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.
- IIIc. Superalignment 105
- Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.
- IIId. The Free World Must Prevail 126
- Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?
- IV. The Project 141
- As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.
- V. Parting Thoughts 156
- What if we’re right?
- Appendix 162
Quotes
Introduction - p.3
- You can see the future first in San Francisco.
NOTES:
The AGI race has begun, with an intense scramble for resources and infrastructure.
Few people have situational awareness about the rapid advancements in AI.
There is a growing mobilization of American industrial resources to support AI development.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people — the smartest people I have ever met — and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
I. From GPT-4 to AGI: Counting the OOMs - p.7
- AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.
NOTES:
Compute and algorithmic efficiencies are key drivers of AI progress.
The transition from GPT-2 to GPT-4 involved significant qualitative improvements.
Future projections indicate another major leap in AI capabilities by 2027.
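The OOM (order-of-magnitude) arithmetic behind the projection above can be sketched in a few lines. The per-year rates (~0.5 OOMs/year each for compute and algorithmic efficiency) are the essay's estimates; treating them as additive in log10 space to get a single effective-compute multiplier is an illustrative framing, and it deliberately excludes the harder-to-quantify "unhobbling" gains.

```python
# Back-of-envelope OOM arithmetic from the essay's trendlines.
# The per-year rates are the essay's estimates; summing them in log10
# space is the standard way to combine multiplicative gains.

COMPUTE_OOMS_PER_YEAR = 0.5   # physical compute scale-up
ALGO_OOMS_PER_YEAR = 0.5      # algorithmic efficiency gains

def effective_compute_gain(years: float) -> float:
    """Total effective-compute multiplier over `years` (unhobbling excluded)."""
    total_ooms = years * (COMPUTE_OOMS_PER_YEAR + ALGO_OOMS_PER_YEAR)
    return 10 ** total_ooms

# Four years of both trends compounding: ~4 OOMs, i.e. ~10,000x.
print(effective_compute_gain(4))  # 10000.0
```

This is the same scale of jump the essay attributes to GPT-2 → GPT-4, which is why it expects a comparably large qualitative leap by 2027.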
II. From AGI to Superintelligence: the Intelligence Explosion - p.46
- AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.
NOTES:
Recursive self-improvement could lead to rapid advancements in AI capabilities.
The intelligence explosion could compress decades of progress into a single year.
Superintelligence presents both enormous potential and significant risks.
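The compression claim above reduces to a speedup ratio. A minimal sketch (the ~0.5 OOMs/year baseline and the 5-OOM figure come from the essay; expressing the claim as a rate ratio is an illustrative assumption):

```python
# The essay's "decade into a year" claim as a speedup ratio:
# 5+ OOMs of algorithmic progress takes ~10 years at the historical
# ~0.5 OOMs/year trend; doing it in 1 year implies a ~10x rate of progress.

BASELINE_OOMS_PER_YEAR = 0.5  # historical algorithmic-efficiency trend

def implied_speedup(total_ooms: float, compressed_years: float) -> float:
    """Ratio of the accelerated rate to the historical baseline rate."""
    baseline_years = total_ooms / BASELINE_OOMS_PER_YEAR
    return baseline_years / compressed_years

print(implied_speedup(5.0, 1.0))  # 10.0
```

The essay's mechanism for that ~10x is parallelism: hundreds of millions of automated researchers working at superhuman speed, rather than any single system being ten times smarter.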
III. The Challenges - p.74
IIIa. Racing to the Trillion-Dollar Cluster - p.75
- The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by 10s of percent, will be intense.
NOTES:
Massive financial investments are required to support AGI development.
The scale-up includes significant infrastructure and energy production expansions.
Industrial mobilization is crucial to meet the demands of AI growth.
IIIb. Lock Down the Labs: Security for AGI - p.89
- The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.
NOTES:
Current AI labs are vulnerable to espionage, especially from state actors like the CCP.
Securing AGI technology requires substantial effort and resources.
There is an urgent need to prioritize security measures in AI labs.
IIIc. Superalignment - p.105
- Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could very easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.
NOTES:
Superalignment is critical for controlling superintelligent AI systems.
The technical challenges of alignment are complex and high-stakes.
Failure to achieve alignment could result in catastrophic outcomes.
IIId. The Free World Must Prevail - p.126
- Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?
NOTES:
The race to AGI has significant geopolitical implications.
Maintaining a competitive edge over authoritarian regimes is crucial.
The outcome of the AGI race could determine global power dynamics.
IV. The Project - p.141
- As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.
NOTES:
Government involvement in AGI development is anticipated.
National security concerns will drive the establishment of government AGI projects.
The scale and complexity of superintelligence require coordinated efforts beyond startups.
V. Parting Thoughts - p.156
- What if we’re right?
NOTES:
The transformative potential of AGI could reshape society and global power structures.
The stakes of AGI development are unprecedented in human history.
Preparing for the implications of AGI is crucial for future stability.
Appendix - p.162
References
| Author | Title | Year |
| --- | --- | --- |
| Leopold Aschenbrenner | Situational Awareness: The Decade Ahead | 2024 |