
Othello: Transformers Have (Emergent) World Models

Does GPT reason only about statistical relationships in text, or does it predict text by modelling the world? Many of the x-risk paths that worry alignment researchers involve a rogue AI taking action in the world to the detriment of humanity, but to act in the world with purpose one first needs to know that there is a world to act in. Using a technique called probing, Li et al. and Nanda show that a GPT-like autoregressive model trained to predict the next move in games of Othello does indeed develop a world model: the board state can be read off from the model's activations and used to causally intervene on its predictions. In this ambitious two-hour seminar, Blaine Rogers will lead us through this exciting research, explaining linear and nonlinear probing, showing how they relate to other LLM interpretability techniques such as circuits and feedforward key/value maps, and speculating on safety concerns, particularly in relation to simulacra.
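To make "probing" concrete, here is a minimal PyTorch sketch of the kind of probe involved: a linear classifier trained on frozen activations to predict, for each of the 64 squares, whether it is empty, holds the current player's piece, or holds the opponent's. Dimensions and variable names are illustrative, not the papers' exact code; Li et al. originally needed a nonlinear (MLP) probe on black/white labels, while Nanda's follow-up found a linear probe suffices once squares are labelled mine/yours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative dimensions for an Othello-GPT-style setup: a 512-wide
# residual stream, 64 board squares, 3 states per square.
D_MODEL, N_SQUARES, N_STATES = 512, 64, 3

# One linear classifier per square, packed into a single layer.
probe = nn.Linear(D_MODEL, N_SQUARES * N_STATES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def train_step(acts, board):
    """acts:  (batch, D_MODEL) activations cached from a frozen GPT.
    board: (batch, N_SQUARES) ints in {0: empty, 1: mine, 2: yours}."""
    logits = probe(acts).view(-1, N_SQUARES, N_STATES)
    loss = F.cross_entropy(logits.flatten(0, 1), board.flatten())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stand-in data; in the real experiments the activations come from a
# hooked layer of Othello-GPT and the labels from replaying the game.
acts = torch.randn(32, D_MODEL)
board = torch.randint(0, N_STATES, (32, N_SQUARES))
print(train_step(acts, board))
```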

Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question in a synthetic setting by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network. By leveraging these intervention techniques, we produce “latent saliency maps” that help explain predictions.
http://arxiv.org/abs/2210.13382
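The interventional experiments follow directly from the probe: edit the residual stream so the probe reads a different board state, let the rest of the forward pass run, and check whether the predicted legal moves change to match the edited board. Below is a hedged sketch of that idea; the function, layer index, scaling factor alpha, and hook usage are assumptions for illustration, not the paper's exact method.

```python
import torch

def intervene(resid, probe_weight, square, target_state, alpha=4.0):
    """Nudge a residual-stream vector along the probe's direction so the
    probe reads `target_state` for `square`; downstream layers then see
    the edited board state.

    resid:        (d_model,) activation at the hooked layer.
    probe_weight: (n_squares * n_states, d_model) linear probe weights.
    """
    direction = probe_weight[square * 3 + target_state]  # 3 states/square
    return resid + alpha * direction / direction.norm()

# Usage sketch (hypothetical model structure): hook a mid-layer block,
# overwrite the final position's activation, and re-read the move logits.
#
#   def hook(module, inputs, output):
#       output[0, -1] = intervene(output[0, -1], probe.weight, 27, 1)
#       return output
#
#   handle = model.blocks[6].register_forward_hook(hook)
#   logits = model(game_tokens)  # do the predicted legal moves change?
#   handle.remove()
```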

hot damn, check out this graph

https://www.neelnanda.io/mechanistic-interpretability/othello

Previous: 19 April, Transformer Feedforward Layers Are Key-Value Memories

Next: 10 May, Are Emergent Abilities of Large Language Models a Mirage?