Guest Ram Rachum Presents: Emergent Dominance Hierarchies in Reinforcement Learning Agents

In the realm of multi-agent reinforcement learning (MARL), the dynamics of emergent social structures are not only fascinating but also pivotal for designing cooperative and efficient AI systems. The paradigm of Cooperative AI encourages examining institutions and norms from human and animal societies and implementing them for AI agents. 

We are pleased to welcome Ram Rachum from Bar-Ilan University to discuss his paper, "Emergent Dominance Hierarchies in Reinforcement Learning Agents," which explores how populations of reinforcement learning agents naturally develop, enforce, and transmit dominance hierarchies through simple interactions, absent any programmed incentives or explicit rules.

We'll discuss the mechanics of these emergent behaviors, draw comparisons with biological systems, and consider the implications for the future integration of such dynamics into MARL systems. The goal is to enhance AI cooperation and efficiency by harnessing insights from the natural development of social structures. Join us as we explore these findings and their potential to promote AI interpretability and corrigibility.

Modern Reinforcement Learning (RL) algorithms are able to outperform humans in a wide variety of tasks. Multi-agent reinforcement learning (MARL) settings present additional challenges, and successful cooperation in mixed-motive groups of agents depends on a delicate balancing act between individual and group objectives. Social conventions and norms, often inspired by human institutions, are used as tools for striking this balance.

In this paper, we examine a fundamental, well-studied social convention that underlies cooperation in both animal and human societies: dominance hierarchies.

We adapt the ethological theory of dominance hierarchies to artificial agents, borrowing the established terminology and definitions with as few amendments as possible. We demonstrate that populations of RL agents, operating without explicit programming or intrinsic rewards, can invent, learn, enforce, and transmit a dominance hierarchy to new populations. The dominance hierarchies that emerge have a similar structure to those studied in chickens, mice, fish, and other species.

— Emergent Dominance Hierarchies in Reinforcement Learning Agents, Rachum et al. 2024
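For readers curious how hierarchy structure is quantified, ethologists commonly summarise pairwise contest outcomes in a dominance matrix and measure its linearity with Landau's h. The sketch below is not taken from the paper; it is a minimal illustration, assuming you have logged agent-versus-agent contest outcomes from a MARL simulation, of how this standard index could be computed (the function and toy data are hypothetical).

```python
import numpy as np

def landau_h(wins: np.ndarray) -> float:
    """Landau's linearity index h for a round-robin dominance matrix.

    wins[i, j] = 1 if agent i dominates agent j (e.g. wins the majority of
    their pairwise contests), 0 otherwise. h = 1 indicates a perfectly
    linear "pecking order"; values near 0 indicate no consistent ordering.
    """
    n = wins.shape[0]
    scores = wins.sum(axis=1)  # S_i: number of agents that agent i dominates
    return (12.0 / (n**3 - n)) * np.sum((scores - (n - 1) / 2.0) ** 2)

# Toy example: 4 agents with a strictly linear hierarchy 0 > 1 > 2 > 3.
linear = np.triu(np.ones((4, 4), dtype=int), k=1)
print(landau_h(linear))  # 1.0
```

A value close to 1 across repeated episodes would be one way to check that an emergent ordering among agents resembles the near-linear hierarchies reported for chickens, mice, and fish.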

Previous (21 May): ML Benkyoukai: Kolmogorov-Arnold Networks

Next (28 May): AIIF Masterclass: AI and Copyright