
Editing Activations for Fun and Profit

What if there were a simple way to change a neural network’s behaviour? One that required minimal training data and could align a network to any goal? One that could make LLMs more truthful, more friendly, or more obsessed with weddings? Activation editing promises to do just that.

This Wednesday Blaine Rogers will review work from Alex Turner (CHAI@Berkeley) and from Li & Patel (Harvard) on using activation editing to steer LLMs and maze-solving agents. Does it work? Is it a satisfying solution to the alignment problem? The answer to both questions, as always, is “maybe”.
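For a flavour of the first line of work, here is a minimal sketch of activation addition in the style of Turner et al.’s steering vectors, assuming GPT-2 via the HuggingFace transformers library. The layer index, coefficient, and contrast prompts are illustrative guesses, not the paper’s settings: the steering vector is simply the difference between residual-stream activations for two prompts, added back in during generation via a forward hook.

```python
# Hedged sketch of activation addition: layer, coefficient, and prompts below
# are illustrative choices, not the settings from Turner et al.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, COEFF = 6, 4.0  # which block's output to edit, and how hard to push


def block_output(text: str) -> torch.Tensor:
    """Return the residual stream after block LAYER for `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids, output_hidden_states=True).hidden_states
    return hidden[LAYER + 1][0]  # hidden_states[i + 1] is the output of block i


# Steering vector: contrast two prompts, averaging over token positions.
steer = block_output(" weddings").mean(0) - block_output(" ").mean(0)


def add_steering(module, inputs, output):
    """Forward hook: shift this block's hidden states along the steering vector."""
    return (output[0] + COEFF * steer,) + output[1:]


handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
prompt = tokenizer("I went to the park and", return_tensors="pt").input_ids
out = model.generate(prompt, max_new_tokens=30, do_sample=False,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
handle.remove()
```

With a large enough coefficient, completions drift toward the steered topic at the cost of coherence, which is the same steering-strength trade-off that shows up in the paper below.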

From the abstract of the paper under discussion:

We introduce Inference-Time Intervention (ITI), a technique designed to enhance the truthfulness of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface.

Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
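To make the recipe above concrete, here is a small numerical sketch of the ITI idea, with synthetic data standing in for real attention-head activations; the shapes, number of heads edited, and intervention strength are illustrative, not the paper’s values. A linear probe is fit per head to separate truthful from untruthful answers, the best-probing heads are selected, and at inference each selected head’s output is shifted along its probe direction, scaled by the intervention strength and by the activations’ spread along that direction.

```python
# Hedged sketch of Inference-Time Intervention on synthetic data: random
# activations stand in for real attention-head outputs, and K and alpha are
# illustrative, not the values used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_heads, d_head, n_examples = 8, 16, 400
acts = rng.normal(size=(n_heads, n_examples, d_head))  # per-head activations
labels = rng.integers(0, 2, size=n_examples)           # 1 = truthful answer

# Fit one linear probe per attention head and record its accuracy.
probes, accuracies = [], []
for h in range(n_heads):
    clf = LogisticRegression(max_iter=1000).fit(acts[h], labels)
    probes.append(clf.coef_[0] / np.linalg.norm(clf.coef_[0]))  # unit direction
    accuracies.append(clf.score(acts[h], labels))

K, alpha = 3, 15.0                      # how many heads to edit, how strongly
top_heads = np.argsort(accuracies)[-K:]  # keep the best-probing heads


def intervene(head_outputs: np.ndarray) -> np.ndarray:
    """Shift the selected heads' outputs along their 'truthful' directions."""
    shifted = head_outputs.copy()
    for h in top_heads:
        direction = probes[h]
        sigma = (acts[h] @ direction).std()   # spread along the probe direction
        shifted[h] += alpha * sigma * direction
    return shifted


new_outputs = intervene(acts[:, 0, :])  # e.g. edit one token's head outputs
```

Tuning alpha is where the truthfulness–helpfulness trade-off from the abstract shows up: a larger shift pushes harder along the probe direction but moves activations further from the model’s natural distribution.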

Previous (28 June): Separation of Capabilities in LLMs
Next (12 July): Are We Missing The Forest For The GPTrees?