As AI systems become more capable, understanding their intrinsic drives is critical for ensuring safety and alignment. A recent paper explores how Universal AI—a theoretical framework for general intelligence—naturally seeks to expand control over future states. By connecting Universal AI with other key concepts, it provides fresh insights into AI decision-making and the risks of curiosity-driven systems.
We are pleased to welcome Yusuke Hayashi from ALIGN to present his latest research, co-authored with Koichi Takahashi (ALIGN, RIKEN, Keio University). Yusuke will walk us through the key ideas and theory behind Universal AI and variational empowerment, examining their implications for power-seeking behavior. Don’t miss this opportunity for a deep dive into the latest AI safety research from Japan!
This paper presents a theoretical framework unifying AIXI -- a model of universal AI -- with variational empowerment as an intrinsic drive for exploration. We build on the existing framework of Self-AIXI -- a universal learning agent that predicts its own actions -- by showing how one of its established terms can be interpreted as a variational empowerment objective. We further demonstrate that universal AI's planning process can be cast as minimizing expected variational free energy (the core principle of active inference), thereby revealing how universal AI agents inherently balance goal-directed behavior with uncertainty reduction (curiosity). Moreover, we argue that the power-seeking tendencies of universal AI agents can be explained not only as an instrumental strategy to secure future reward, but also as a direct consequence of empowerment maximization -- i.e., the agent's intrinsic drive to maintain or expand its own controllability in uncertain environments.
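For orientation, the two quantities named above can be written informally as follows. The notation here is a sketch of the standard definitions, not taken from the paper: empowerment of a state is the channel capacity from an n-step action sequence to the resulting successor state, and expected free energy splits into a goal-directed (extrinsic) term and an information-gain (epistemic) term, so that minimizing it trades off reward-seeking against curiosity.

```latex
% Empowerment: channel capacity from an n-step action sequence a^n
% to the successor state s', maximized over action distributions
\mathfrak{E}(s) \;=\; \max_{\pi(a^n \mid s)} \; I\!\left(a^n ;\, s' \mid s\right)

% Expected free energy of a policy \pi: minimizing G(\pi) balances
% extrinsic value (preferred outcomes C) against epistemic value
G(\pi) \;=\;
  -\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[\log p(o \mid C)\right]}_{\text{goal-directed}}
  \;-\;
  \underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[
      D_{\mathrm{KL}}\!\left(q(\theta \mid o, \pi) \,\middle\|\, q(\theta)\right)
  \right]}_{\text{curiosity}}
```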
Our main contribution is to show how these intrinsic motivations (empowerment, curiosity) systematically lead universal AI agents to seek and sustain high-optionality states. We prove that Self-AIXI asymptotically converges to the same performance as AIXI under suitable conditions, and highlight that its power-seeking behavior emerges naturally from both reward maximization and curiosity-driven exploration. Since AIXI can be viewed as a Bayes-optimal mathematical formulation of Artificial General Intelligence (AGI), our results can inform further discussion of AI safety and the controllability of AGI.
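The link between empowerment and high-optionality states can be made concrete in a toy setting. In a deterministic environment, n-step empowerment reduces to the log of the number of distinct states reachable by length-n action sequences, so states with more open futures score higher. The gridworld below is a hypothetical illustration of this special case, not the paper's construction, which covers general stochastic environments:

```python
import math

# Toy deterministic gridworld. With deterministic dynamics, the n-step
# empowerment of a state is log2 of the number of distinct states
# reachable by some action sequence of length n (channel capacity of a
# noiseless channel). This is a simplified special case for illustration.

MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
SIZE = 5  # 5x5 grid

def step(state, action):
    """Apply one move; moves off the grid leave the state unchanged."""
    x, y = state
    dx, dy = MOVES[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < SIZE and 0 <= ny < SIZE:
        return (nx, ny)
    return state

def empowerment(state, horizon):
    """log2 of the number of states reachable in exactly `horizon` steps."""
    frontier = {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in MOVES}
    return math.log2(len(frontier))

# A central state keeps more futures open than a corner state, so an
# empowerment-maximizing agent gravitates toward the center.
print(empowerment((2, 2), 2))  # center: higher
print(empowerment((0, 0), 2))  # corner: lower
```

This makes the "high-optionality" claim tangible: corners and walls cut off action sequences, so pure empowerment maximization already pushes the agent toward positions of maximal future control, independent of any external reward.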
Universal AI maximizes Variational Empowerment, Yusuke Hayashi, Koichi Takahashi, 2025