A theory of cerebellar learning as a spike-based reinforcement learning in continuous time and space
Abstract
The cerebellum has long been considered to perform error-based supervised learning via long-term depression (LTD) at synapses between parallel fibers and Purkinje cells (PCs). Since the discovery of multiple forms of synaptic plasticity other than LTD, recent studies have suggested that synergistic plasticity mechanisms could enhance the learning capability of the cerebellum. Accordingly, we previously proposed a concept of cerebellar learning as a reinforcement learning (RL) machine. However, a gap remains between the conceptual algorithm and its detailed implementation. To close this gap, we implemented a cerebellar spiking network as an RL model in continuous time and space, based on known anatomical properties of the cerebellum. We confirmed that our model successfully learned a state value and solved the mountain car task, a simple RL benchmark. Furthermore, the model solved the delay eyeblink conditioning task using biologically plausible internal dynamics. Our results provide a solid foundation for a cerebellar RL theory that challenges the classical view of the cerebellum as primarily a supervised learning machine.
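For readers unfamiliar with the benchmark, the following is a minimal sketch of tabular TD(0) state-value learning on the classic mountain car dynamics. It is not the authors' spiking implementation (which operates in continuous time and space); the discretization, the fixed energy-pumping policy, and all parameter values here are illustrative assumptions.

```python
import math

def step(pos, vel, action):
    """Classic mountain car dynamics; action is -1 (left), 0, or +1 (right)."""
    vel += 0.001 * action - 0.0025 * math.cos(3 * pos)
    vel = max(-0.07, min(0.07, vel))          # clamp velocity
    pos += vel
    if pos < -1.2:                            # inelastic left wall
        pos, vel = -1.2, 0.0
    return pos, vel

def discretize(pos, vel, n=40):
    """Map the continuous state onto an n-by-n grid (illustrative tabular state)."""
    i = int((pos + 1.2) / 1.8 * (n - 1))
    j = int((vel + 0.07) / 0.14 * (n - 1))
    return i, j

def run_episode(V, alpha=0.1, gamma=0.99, max_steps=1000):
    """One episode of TD(0) value learning under a fixed energy-pumping policy."""
    pos, vel = -0.5, 0.0                      # start at rest near the valley bottom
    for t in range(max_steps):
        s = discretize(pos, vel)
        action = 1 if vel > 0 else -1         # push in the direction of motion
        pos, vel = step(pos, vel, action)
        done = pos >= 0.5                     # goal: reach the right hilltop
        reward = 0.0 if done else -1.0        # -1 per step until the goal
        target = reward + (0.0 if done else gamma * V.get(discretize(pos, vel), 0.0))
        V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))
        if done:
            return t + 1
    return max_steps
```

Because each step costs -1 reward, the learned values along the visited trajectory become increasingly negative with distance from the goal, which is the state-value structure an RL learner must capture on this task.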