Multitasking Recurrent Networks Utilize Compositional Strategies for Control of Movement

Abstract

The brain and body comprise a complex control system that can flexibly perform a diverse range of movements. Despite the high dimensionality of the musculoskeletal system, both humans and other species are able to quickly adapt their existing repertoire of actions to novel settings. A strategy likely employed by the brain to accomplish such a feat is known as compositionality: the ability to combine learned computational primitives to perform novel tasks. Previous work has demonstrated that recurrent neural networks (RNNs) are a useful tool for probing compositionality during diverse cognitive tasks. However, the attractor-based computations required for cognition are largely distinct from those required for the generation of movement, and it is unclear whether compositional structure extends to RNNs producing complex movements. To address this question, we train a multitasking RNN in feedback with a musculoskeletal arm model to perform ten distinct types of movements at various speeds and directions, using visual and proprioceptive feedback. The trained network expresses two complementary forms of composition: an algebraic organization that groups tasks by kinematic and rotational structure to enable the flexible creation of novel tasks, and a sequential strategy that stitches learned extension and retraction motifs together to produce new compound movements. Across tasks, population activity occupies a shared, low-dimensional manifold, whereas activity across task epochs resides in orthogonal subspaces, indicating a principled separation of computations. Additionally, fixed-point and dynamical-similarity analyses reveal reuse of dynamical motifs across kinematically aligned tasks, linking geometry to mechanism.
Finally, we demonstrate rapid transfer to held-out movements via simple input weight updates, as well as the generation of target trajectories from composite rule inputs, without altering recurrent dynamics, highlighting a biologically plausible route to within-manifold generalization. Our framework sheds light on how the brain might flexibly perform a diverse range of movements through the use of shared low-dimensional manifolds and compositional representations.