Machine Learning Discovers Numerous New Computational Principles Supporting Elementary Motion Detection


Abstract

Motion direction detection is a fundamental visual computation that transforms spatial luminance patterns into directionally tuned outputs. Classical models of direction selectivity rely on temporal asymmetry, in which motion detection arises through either delayed excitation or delayed inhibition. Here, I applied biologically inspired machine learning to models of retinal and cortical circuits and uncovered multiple novel feedforward architectures capable of direction selectivity.
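To make the classical temporal-asymmetry idea concrete, the following is a minimal Python sketch of a delay-and-correlate (Hassenstein-Reichardt-style) detector applied to a one-dimensional drifting-bar stimulus. It is an illustration of the textbook principle referenced above, not code or a model from this study; the stimulus generator, function names, and the choice of a one-frame delay are all illustrative assumptions.

```python
import numpy as np


def moving_bar(direction=+1, n_pixels=40, n_steps=60):
    """Illustrative stimulus (not from the paper): a bright bar drifting
    one pixel per frame, rightward (+1) or leftward (-1)."""
    movie = np.zeros((n_steps, n_pixels))
    for t in range(n_steps):
        movie[t, (t * direction) % n_pixels] = 1.0
    return movie


def reichardt_correlator(movie, delay=1):
    """Classical delay-and-correlate detector.

    Each subunit multiplies the delayed signal from one photoreceptor with
    the undelayed signal from its neighbour; subtracting the two mirror-
    symmetric subunits yields a signed, direction-selective output.
    """
    delayed = np.roll(movie, delay, axis=0)   # temporal delay line
    delayed[:delay] = 0.0                     # no signal before stimulus onset
    # preferred-direction subunit: delayed left input x undelayed right input
    preferred = delayed[:, :-1] * movie[:, 1:]
    # null-direction subunit: undelayed left input x delayed right input
    null = movie[:, :-1] * delayed[:, 1:]
    return (preferred - null).sum()           # >0 rightward, <0 leftward


if __name__ == "__main__":
    print("rightward bar:", reichardt_correlator(moving_bar(+1)))  # positive
    print("leftward  bar:", reichardt_correlator(moving_bar(-1)))  # negative
```

The delayed-inhibition (Barlow-Levick-style) variant follows the same pattern but vetoes responses in the null direction rather than correlating in the preferred direction; the abstract's point is that these delay-based schemes are only a subset of the feedforward architectures that can produce direction selectivity.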

These include mechanisms based on asymmetric synaptic properties, spatial receptive field variations, new roles for pre- and postsynaptic inhibition, and previously unrecognized kinetic implementations. Conceptually, these circuit architectures cluster into eight computational primitives underlying motion detection, four of which are newly discovered. Many of the solutions rival or outperform classical models in both robustness and precision, and several exhibit enhanced noise tolerance. All mechanisms are biologically plausible and correspond to known physiological and anatomical motifs, offering fresh insights into motion processing and illustrating how machine learning can uncover general principles of neural computation.
