Eliezer Yudkowsky and Nate Soares are researchers at the Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute. Yudkowsky is one of the founders of AI safety as a field, and has always been unapologetically pessimistic about our prospects of controlling an intelligence that greatly surpasses human intellect.
In their new book, If Anyone Builds It, Everyone Dies, they explore and re-explain the foundational concepts that have motivated their work for more than 15 years.
At this month's AI Safety Tokyo meetup, we will revisit these core ideas as presented in the new book, and discuss whether they are still relevant today, or perhaps more relevant than ever.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies (2025)