ACX Tokyo is a thriving rationalish discussion group based in Nakameguro. They meet once a month, and this month’s topic is inside view / outside view conflicts around AI Safety.
As the nights grow longer and the days grow colder, the time has finally come to host an ACX session on AI Safety. The ACX article is “The Extinction Tournament”: https://open.substack.com/pub/astralcodexten/p/the-extinction-tournament
The article covers the results of the recent Existential Risk Persuasion Tournament (XPT), a Forecasting Research Institute project that gathered domain experts and superforecasters and asked them to put numbers to existential risk from AI, pandemics, and nuclear war. The numbers came in surprisingly low: superforecasters put extinction risk from AI at just 0.38%, compared with Yudkowsky’s >90%.
The article concludes with a short paragraph on whether we in the ACX community should update our x-risk probabilities way down in light of these results; ultimately Scott makes only a small update. I want to use this article primarily as a jumping-off point for a discussion about the conflict between the inside-view and outside-view arguments around speculative existential risks. The outside view gives several reasons for suspicion:
- forecasters who aren’t embedded in the ACX community assign these risks low numbers
- AI Safety as a cause area is suspiciously appealing to introverts; you can save the world from the comfort of your bedroom, by doing maths research rather than solving hard coordination problems
- poor epistemic standards in the communities that form around exactly these topics
During the session I’d like to go over these arguments in more detail and compare AI Safety with the other domains surveyed in the XPT: do the inside-view and outside-view arguments also diverge for pandemic risk? Nuclear war? Climate change?
Also, since this is probably the only time this year Harold will let me borrow your brains, I’d like to cover the outside-view plan of action for actually doing something about AI risk. If AI is so bad, why don’t we just… not?
Further reading:
- University EA Groups Need Fixing https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing documents some common issues with poor epistemics surrounding AI Safety
- Let’s think about slowing down AI https://blog.aiimpacts.org/p/lets-think-about-slowing-down-ai compares AI with other technologies, arguing that humanity avoids building extremely valuable technologies all the time, so preventing AI is possible
- Pause for Thought: The AI Pause Debate https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate is Scott’s take on a recent debate about whether and how much to slow AI progress