Neural sampling from cognitive maps enables goal-directed imagination and planning
Abstract
AI systems are becoming increasingly intelligent, but at a very high cost in terms of energy consumption and training requirements. In contrast, our brains run on roughly 20 W of power, learn online, and can instantly adjust to changing contingencies. This raises the question of what data structures, algorithms, and learning methods enable brains to achieve this, and whether these can be ported into artificial devices. We address this question for a core feature of intelligence: the capacity to plan and solve problems, including new problems that involve states never encountered before. We examine, from an algorithmic perspective, three tools that brains are likely to employ for achieving this: stochastic neural computation, cognitive maps, and compositional coding. We integrate these tools into a transparent neural network model without hidden units, and demonstrate its power for flexible planning and problem solving. Importantly, this approach is suitable for implementation in in-memory computing and other energy-efficient neuromorphic hardware. In particular, it requires only self-supervised local synaptic plasticity, which is well suited for on-chip learning. Hence a core feature of brain intelligence, the capacity to generate solutions to problems that were never encountered before, does not require deep neural networks or large language models, and can be implemented in energy-efficient edge devices.
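To make the ingredients named in the abstract concrete, the following is a minimal, hedged sketch (not the authors' model): a cognitive map is learned as synaptic weights through a local, Hebbian-like rule from experienced state transitions, and goal-directed imagination is then performed by stochastically sampling trajectories over that map. All names, parameters, and the toy environment below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 6                          # states of a small toy environment (assumed)
W = np.zeros((n_states, n_states))    # synaptic weights ~ learned transition map

# Self-supervised, purely local learning: strengthen W[s, s'] whenever the
# transition s -> s' is experienced (Hebbian-like update, no hidden units).
experience = [0, 1, 2, 3, 2, 4, 5, 4, 2, 1, 0]   # one example trajectory
lr = 0.5
for s, s_next in zip(experience[:-1], experience[1:]):
    W[s, s_next] += lr * (1.0 - W[s, s_next])

def sample_successor(state):
    """Stochastic neural sampling: draw a successor state with probability
    proportional to the learned outgoing weights of the current state."""
    w = W[state] + 1e-6               # small floor keeps unvisited links possible
    p = w / w.sum()
    return rng.choice(n_states, p=p)

def imagine_path(start, goal, max_len=20, n_rollouts=200):
    """Goal-directed imagination: sample many rollouts on the cognitive map
    and keep the shortest one that reaches the goal."""
    best = None
    for _ in range(n_rollouts):
        path, s = [start], start
        for _ in range(max_len):
            s = sample_successor(s)
            path.append(s)
            if s == goal:
                break
        if path[-1] == goal and (best is None or len(path) < len(best)):
            best = path
    return best

print("imagined path 0 -> 5:", imagine_path(0, 5))
```

This sketch only illustrates the general idea of sampling-based planning on a learned map; the preprint's actual model, its compositional coding scheme, and its plasticity rule may differ substantially.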