
Situational Awareness: The Decade Ahead

Just one month ago, Leopold Aschenbrenner released a series of essays laying out his (and others’) thoughts on how AI, and eventually AGI and ASI, will shape the future of humanity. Aschenbrenner emphasizes the potential risks of developing superintelligence, including the creation of new means of mass destruction that could pose existential threats to humanity. He also urges vigilance in safeguarding freedom, democracy, and global stability in the coming century, and touches on the challenges posed by authoritarian regimes and the ethical considerations surrounding the use of advanced technologies.

Aschenbrenner’s conclusions are, as we say in the industry, Big If True. He recently founded an investment firm (with anchor investments from the cofounders of Stripe) to correct market mispricings stemming from the underestimation of AGI, the ultimate smart-man-becomes-rich-man move. On the other hand, although he recently worked at OpenAI, he notes that “all of this is based on publicly-available information, [his] own ideas, general field-knowledge, or SF-gossip.”

Should we all dump our life savings into Nvidia stock, or is Mr. Aschenbrenner caught up in his own hype? Let’s find out.

“The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.

Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change. 

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride. 

Let me tell you what we see.”

- Aschenbrenner, Leopold. “Situational Awareness: The Decade Ahead.” June 2024.
