Representation Transfer via Invariant Input-driven Neural Manifolds for Brain-inspired Computations
Abstract
Domain adaptation is a core challenge for embodied AI deployed in unpredictable, noisy environments. Conventional deep models degrade under domain shifts and require costly retraining. Inspired by biological brains, we propose a modular framework where each module is a recurrent neural network pretrained via a simple, task-agnostic protocol to learn robust, transferable features. This shapes stable yet flexible representations as invariant input-driven continuous attractor manifolds embedded in high-dimensional latent space, supporting robust transfer and resilience to noise. At deployment, only a lightweight adapter needs training, enabling rapid few-shot adaptation. Evaluated on the DVS Gesture benchmark and a custom RGB rehabilitation dataset, our framework matches or surpasses leading C3D and ViViT models while using ten times fewer parameters and only one training epoch. By unifying biologically inspired attractor dynamics with cortical-like modular composition, our approach offers a practical path toward robust, continual adaptation for real-world embodied AI.
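The adapter-only adaptation scheme described above can be illustrated with a minimal sketch. The code below is not the paper's implementation: it stands in for the pretrained recurrent module with a fixed, stability-scaled random recurrent network (reservoir-computing style, assumed here for illustration), and for the lightweight adapter with a linear readout fit in closed form by ridge regression, so "training" is a single cheap solve. All sizes, names (`encode`, `W_out`), and the class-mean input shift used to generate toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper).
n_in, n_hidden, n_classes = 8, 100, 3

# Frozen "pretrained" recurrent module: a random input-driven network,
# rescaled so its recurrent dynamics stay stable (spectral radius < 1).
W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
W_rec = rng.normal(0.0, 1.0, (n_hidden, n_hidden))
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))

def encode(seq):
    """Run a (T, n_in) input sequence through the frozen module and
    return the final latent state (a point on the input-driven manifold)."""
    h = np.zeros(n_hidden)
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Few-shot setup: 5 labeled sequences per class; classes differ by a
# toy mean shift in the input (purely illustrative data).
X = np.stack([encode(rng.normal(size=(20, n_in)) + c)
              for c in range(n_classes) for _ in range(5)])
y = np.repeat(np.eye(n_classes), 5, axis=0)

# Lightweight adapter: linear readout fit by ridge regression -- the only
# trained component; the recurrent module stays frozen.
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_hidden), X.T @ y)

train_acc = np.mean(np.argmax(X @ W_out, axis=1) == np.argmax(y, axis=1))
```

Because only `W_out` (here 100x3 values) is fit, adaptation to a new domain amounts to re-solving one small linear system, which is the spirit of the few-shot, single-epoch adapter training claimed in the abstract.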