Neurons have an inherent capability to learn order relations: A theoretical foundation that explains numerous experimental data

Abstract

Brains are able to extract diverse relations between objects or concepts, and to integrate relational information into cognitive maps that form a basis for decision making. But it has remained a mystery how relational information is learnt and represented by neural networks of the brain. We show that a simple synaptic plasticity rule enables neurons to absorb and represent relational information, and to use it for abstract inference. This inherent capability of neurons and synaptic plasticity is supported by a rigorous mathematical theory. It explains experimental data from the human brain on the emergence of cognitive maps after learning several linear orders; it explains the terminal-item effect, which enhances transitive inference whenever a terminal item of an order is involved; and it provides a simple model for fast reconfiguration of internal order representations in the face of new evidence. We also present a rigorous theoretical explanation for the surprising fact that 2D projections of neural representations of linear orders are curved, rather than linear. Since our model does not require stochastic gradient descent in deep neural networks for learning order relations, it is well suited for porting the capability to learn multiple relations, and to use them for fast inference, to edge devices with a low energy budget.
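The abstract does not spell out the plasticity rule itself, but its core claim, that a local learning rule can absorb a linear order from pairwise comparisons and then support transitive inference without gradient descent in a deep network, can be illustrated with a toy sketch. The Python snippet below is an illustration only, not the authors' model: the scalar rank variable, the unit margin, and the learning rate eta are invented stand-ins for whatever synaptic quantities the paper actually uses.

    # Toy sketch of learning a linear order from adjacent pairs and then
    # inferring an untrained relation. NOT the authors' plasticity rule;
    # all names and parameters here are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    items = ["A", "B", "C", "D", "E", "F"]   # true order: A > B > C > D > E > F
    n = len(items)
    rank = np.zeros(n)                        # learned scalar "rank" per item

    # Training uses only adjacent comparisons, e.g. (A, B) meaning "A > B".
    pairs = [(i, i + 1) for i in range(n - 1)]

    eta = 0.1                                 # hypothetical learning rate
    for _ in range(500):
        i, j = pairs[rng.integers(len(pairs))]
        # Local, Hebbian-style update: push the winner's rank above the
        # loser's until the two are separated by a unit margin.
        err = 1.0 - (rank[i] - rank[j])
        if err > 0:
            rank[i] += eta * err
            rank[j] -= eta * err

    # Transitive inference on a never-trained, non-adjacent pair (B vs. E):
    b, e = items.index("B"), items.index("E")
    print("infer B > E:", rank[b] > rank[e])  # expected: True

Because the updates are purely local and the comparison at test time is a simple readout, a sketch like this also hints at why such a scheme is attractive for low-energy edge devices: no backpropagation or deep network is involved.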