Putting numbers to p(doom)

As a group, AI Safety 東京 is surprisingly positive about humanity’s future. Prominent ideologues like Eliezer Yudkowsky put the probability of human extinction close to 100%; are we failing to properly engage with their arguments, or do we have some collective insight worth sharing?

Join us this week for a different kind of benkyōkai where we work together to actually crunch the numbers. Like superforecasters, we’ll enumerate scenarios, break them down, then multiply like our lives depend on it. In so doing, we’ll either realise the error of our ways or construct a legible, technical argument for why things are safer than they appear.
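
To make "break them down, then multiply" concrete, here is a minimal Python sketch of one such decomposition. The three-factor chain and every probability in it are hypothetical placeholders chosen for illustration, not estimates endorsed by the group.

```python
# Superforecaster-style decomposition: express p(doom) as a product of
# conditional probabilities, so each factor can be argued about separately.
#
# p(doom) = p(AGI this century)
#         * p(misaligned | AGI)
#         * p(unrecoverable catastrophe | misaligned AGI)

factors = {
    "AGI this century": 0.8,               # placeholder, for illustration only
    "misaligned | AGI": 0.3,               # placeholder
    "catastrophe | misaligned AGI": 0.25,  # placeholder
}

p_doom = 1.0
for name, p in factors.items():
    p_doom *= p
    print(f"after '{name}': {p_doom:.3f}")

print(f"p(doom) ≈ {p_doom:.2%}")  # 6.00% with these placeholder numbers
```

The point of the exercise is that each conditional factor is easier to reason about than the headline number, and writing the chain out makes the assumptions legible: anyone who disagrees with the product can point to the specific factor they dispute.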
