
Reinforcement Learning from AI Feedback

A recent focus of ours has been reinforcement learning from human feedback (RLHF), a technique for aligning AI (particularly large language models) to human preferences. A fundamental limitation of this approach is the cost/quality tradeoff in collecting feedback from humans: most language models are trained on binary preference data (“which of these two continuations is better?”) because humans can provide that feedback quickly and cheaply, not because it is the optimal kind of data to train a language model on.
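For concreteness, a single binary preference record might look like the sketch below; the field names and texts are illustrative rather than taken from any particular dataset.

```python
# Illustrative binary preference record: an annotator saw two continuations
# of the same prompt and marked which one they preferred.
preference_example = {
    "prompt": "Explain why the sky is blue.",
    "chosen": "Sunlight scatters off air molecules, and blue light scatters the most...",
    "rejected": "The sky is blue because it reflects the colour of the ocean...",
}
```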

A natural instinct, at least if you are an ML engineer, is to throw AI at the problem. Can we train an AI to give the feedback, freeing us from the cost of human annotation? As early as March 2023, ChatGPT was outperforming crowdworkers on text-annotation tasks. Perhaps we are at the point where an AI can bootstrap itself to alignment?

Using a recent paper from Guo, Zhang et al. as an excuse, this week Blaine will cover all things AI feedback, from Self-Instruct to Constitutional AI to OAIF. We’ll survey the available techniques, see whether they work, and speculate about what happens as we take AI feedback to its natural conclusion.

Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.

Guo, Zhang et al., “Direct Language Model Alignment from Online AI Feedback”, arXiv preprint, 2024
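To make the training loop described in the abstract concrete, here is a minimal sketch of one OAIF iteration: sample two responses from the current model, ask an LLM annotator which it prefers, and apply a direct alignment (e.g. DPO) update to the resulting pair. The helper functions below are hypothetical placeholders under assumed interfaces, not the authors’ implementation.

```python
import random

# --- Hypothetical placeholders (assumed interfaces, not from the paper) ---

def sample_response(model, prompt: str) -> str:
    """Sample one response from the current policy model."""
    return model(prompt)

def annotate(annotator, prompt: str, response_a: str, response_b: str) -> str:
    """Ask the LLM annotator which response it prefers ('a' or 'b')."""
    verdict = annotator(
        f"Prompt: {prompt}\n"
        f"Response A: {response_a}\n"
        f"Response B: {response_b}\n"
        "Which response is better? Answer with 'a' or 'b'."
    )
    # Assume the annotator answers with a bare "a" or "b"; real parsing
    # would need to be more robust.
    return "b" if verdict.strip().lower().startswith("b") else "a"

def dpo_update(model, prompt: str, chosen: str, rejected: str) -> None:
    """One direct-alignment (e.g. DPO) gradient step on a single preference pair."""
    ...  # loss computation and optimiser step would go here

# --- The OAIF loop: online AI feedback on responses from the current model ---

def train_oaif(model, annotator, prompts, num_steps: int) -> None:
    for _ in range(num_steps):
        prompt = random.choice(prompts)

        # 1. Sample two responses from the *current* model (on-policy).
        response_a = sample_response(model, prompt)
        response_b = sample_response(model, prompt)

        # 2. The LLM annotator provides online feedback on the fresh pair.
        preferred = annotate(annotator, prompt, response_a, response_b)
        chosen, rejected = (
            (response_a, response_b) if preferred == "a" else (response_b, response_a)
        )

        # 3. Apply a standard DAP update (e.g. DPO) to the newly labelled pair.
        dpo_update(model, prompt, chosen, rejected)
```

The point of the sketch is simply that the preference pair is generated and labelled inside the training loop, so the feedback stays on-policy as the model evolves, in contrast to a fixed offline preference dataset.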
