
Fundamental Limitations of Reinforcement Learning from Human Feedback

A few weeks ago at the ML benkyoukai we talked about direct preference optimization (DPO), a successor to reinforcement learning from human feedback (RLHF) that addresses some of its technical limitations. But RLHF also has more fundamental limitations: are your evaluators aligned, either with each other or with humanity at large? Are their preferences worth learning? Are humans even capable of evaluating your model’s outputs without being mistaken or misled?
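
Roughly, in the standard formulations, RLHF first fits a reward model $r_\phi$ on human preference pairs $(x, y_w, y_l)$ and then optimizes the policy $\pi_\theta$ against that reward under a KL penalty toward a reference model $\pi_{\mathrm{ref}}$, whereas DPO collapses both steps into a single loss on the preference data. A sketch, with $y_w$ the preferred and $y_l$ the dispreferred response to prompt $x$, and $\sigma$ the logistic function:

\[
\mathcal{L}_{\mathrm{RM}}(\phi) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\big(r_\phi(x, y_w) - r_\phi(x, y_l)\big)\right],
\qquad
\max_{\theta}\; \mathbb{E}_{x,\; y \sim \pi_\theta(\cdot \mid x)}\!\left[r_\phi(x, y)\right] - \beta\, \mathrm{KL}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big)
\]

\[
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
\]

Note that both pipelines start from the same human preference labels, which is exactly where the questions above begin.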

In this session we’ll discuss a survey paper that collects issues with RLHF, classifies them as tractable or fundamental, and suggests both technical and societal mitigations.

From the abstract:

Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.

“Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback”, Casper and Davies et al. 2023
