Back to All Events

AIIF Masterclass: AI Inside

This Masterclass event will be held on April 16, 2024, at 16:00 (Japan time). Hybrid (online and in-person) access is available. For access: contact@aiindustryfoundation.org.

----

Most of us are familiar with the idea of “sleeper agents” from cheesy action movies from the 80’s and 90’s. For those of you not familiar: A "sleeper agent" refers to an individual who is covertly planted in a target area or organization, with the purpose of remaining inconspicuous until “activated” to carry out a specific mission or task. These agents are often trained to blend into their surroundings and may live seemingly normal lives until they receive instructions to engage in espionage or other covert activities.

Much as spies worry about sleeper agents in their midst, alignment researchers worry about deceptively aligned AIs. If someone were to fine-tune an AI to take malicious actions when presented with trigger words, would our most popular alignment techniques (reinforcement learning, supervised fine-tuning and adversarial training) be enough to unlearn the behaviour? In a recent paper, Hubinger et al. perform experiments with Anthropic’s Claude and find that the answer is no.

This interesting result lies in tension with other papers that claim that fine-tuning is easy to undo, even by accident. This week we’ll use Hubinger’s paper as a framework to discuss which behaviours are easy to learn and unlearn, and what it is about sleeper agent patterns that make them so sticky.

Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.

- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, Hubinger et al. 2024

Previous
Previous
10 April

Guest Alex Spies Presents: “Structured World Representations in Maze-Solving Transformers”

Next
Next
17 April

The Gladstone AI Reports: a drafted AI Bill, and AI Trajectories