When prominent alignment ideologues say “AI will kill us all!”, the most frequent questions from the public are
“How, exactly, will it kill us all?” and
“Why would we build an AI that kills us all? Let’s just not do that.”
This week, André Röhm will walk us through Dan Hendrycks’ recent essay Natural Selection Favors AIs over Humans, which gives a lucid evolutionary argument for why competition between AIs will naturally lead to bad behaviour. In so doing, it gives us a convincing narrative of how humanity might accidentally lose control of its future, one that doesn’t rely on ingroup jargon like “orthogonality thesis” and “convergent instrumental subgoals”. André will give us a condensed reading of the essay’s thesis, discuss whether it holds up under scrutiny, and comment on its pedagogical value—has reading this essay made it easier to convince people that AI X-risk is real and urgent?
The essay’s abstract: For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future. More abstractly, we argue that natural selection operates on systems that compete and vary, and that selfish species typically have an advantage over species that are altruistic to other species. This Darwinian logic could also apply to artificial agents, as agents may eventually be better able to persist into the future if they behave selfishly and pursue their own interests with little regard for humans, which could pose catastrophic risks. To counteract these risks and Darwinian forces, we consider interventions such as carefully designing AI agents' intrinsic motivations, introducing constraints on their actions, and institutions that encourage cooperation. These steps, or others that resolve the problems we pose, will be necessary in order to ensure the development of artificial intelligence is a positive one.