AI capabilities are advancing at a dizzying pace, surpassing human benchmarks and transforming entire fields in rapid succession. What would happen if this rate of progress simply continued uninterrupted? That is the question at the heart of AI 2027, a near-future scenario created by a group of superforecasters, which outlines a plausible path to superintelligence in just three years.
How likely is this scenario, and what do other AI Safety Tokyo members think of it? Join us at the next benkyoukai, where Harold Godsoe will walk us through AI 2027 and discuss its implications for our future.
AI 2027 outlines a fast-paced scenario in which artificial superintelligence (ASI) emerges by the end of 2027. The forecast describes a sharp acceleration driven by superhuman coding systems, recursive self-improvement, and rapidly scaling compute, with AI models surpassing human capabilities across most domains. This progression begins with breakthroughs in AI research automation and culminates in systems capable of independent scientific and strategic reasoning.
The scenario highlights both the transformative potential and serious risks of ASI, including misalignment, instability, and loss of human control. It emphasizes the importance of preparing for near-term disruption, and underscores the urgent need for global coordination, safety research, and proactive governance.
Kokotajlo et al., AI 2027 (2025). Summary by Claude.