Continuing from last week, we’ll be discussing four more big players in the AI Safety space: CHAI, CAIS, Sam Bowman and MIRI. Unlike the start-ups we discussed last week, these are academic labs with a looser focus. We’ll again be using Thomas Larsen’s LessWrong article as a jumping-off point, and we’ll keep trying to identify common themes (like LLM alignment) to see where the community is allocating its resources.
This session will be led by Blaine William Rogers.