
What Everyone in Technical Alignment is Doing and Why: CHAI, CAIS, Sam Bowman and MIRI

Continuing from last week, we’ll be discussing four more big players in the AI Safety space: CHAI, CAIS, Sam Bowman and MIRI. Unlike the start-ups we discussed last week, these are academic and nonprofit groups with a looser research focus. We’ll again be using Thomas Larsen’s LessWrong article as a jumping-off point, and we’ll keep trying to identify common themes (like LLM alignment) to see where the community is allocating its resources.

This session will be led by Blaine William Rogers.

Previous
25 January

What Everyone in Technical Alignment is Doing and Why: Anthropic, OpenAI, DeepMind Safety, Conjecture

Next
8 February

The EU AI Act