
International AI Safety Report

  • Kojima Law Offices

As general-purpose AI systems advance, policymakers trying to understand their risks face two key issues. One is the “evidence dilemma”: technology is moving fast, yet evidence about its risks and safeguards is slow to accumulate. The other is the “evaluation gap”: benchmark results can no longer reliably predict real-world utility or risk. The International AI Safety Report tackles these issues by synthesizing research around three guiding questions: what can AI do today, what emerging risks does it pose, and which mitigations exist. Importantly, it does not offer specific policy recommendations, but serves as an evidence base for policymakers.

In our next benkyoukai, we will go over the report’s risk categorization, new developments since last year, and proposed mitigation techniques. The session will help us zoom out and view the bigger picture of AI safety and governance. Join us to discuss the current landscape and find ways to put the report’s conclusions into practice!

The second International AI Safety Report, published in February 2026, is the latest iteration of the comprehensive review of current scientific research on the capabilities and risks of general-purpose AI systems. Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, the report is backed by over 30 countries and international organisations. It represents the largest global collaboration on AI safety to date.

The report’s core findings span three areas: how general-purpose AI capabilities are advancing, what real-world evidence is emerging for key risks, and what progress has been made in technical, institutional, and societal risk management measures, along with their remaining limitations.

— Y. Bengio, S. Clare, C. Prunkl, M. Murray, et al., International AI Safety Report (2026)
