While amazing progress has been made in AI, and in deep neural networks in particular, these systems ultimately all run on conventional computers—that is, on a von Neumann architecture built from silicon transistors and simple logic gates. Natural intelligence, on the other hand, shows nothing resembling this architecture. What is the reason for this difference? And can we learn from it?
Neuromorphic computing (also called brain-inspired computing) is a field that encompasses a wide variety of approaches, from spiking neural chips to trainable physical hardware to photonic computing. Currently, none of these systems truly rivals the state of the art—but could they one day? André Röhm, a researcher at the University of Tokyo and benkyoukai regular, has kindly volunteered to give us an overview.