Reluctant to ReLU: Uncontrolled Connectivity Pruning Underlying Trainable Excitatory-Inhibitory Recurrent Neural Networks
Abstract
Song et al. (2016) proposed a framework to build trainable excitatory-inhibitory recurrent neural networks with user-specified biological mechanisms. These models allow researchers to simulate a myriad of neural, cognitive and behavioral hypotheses through neural circuits, making them one of the most powerful simulation tools currently available for theoretical cognitive modeling. In the present study we find that using ReLU on weights (not nodes) to enforce the separation of excitatory and inhibitory populations (Dale's principle) leads to a catastrophic loss of connections beyond user control, compromising the reproducibility and comparability of these models. We explain the reasons behind this connectivity pruning and study its impact through a reproduction of a recent simulation study of activity-silent working memory based on Song et al.'s (2016) framework. We show that ReLU leads to a loss of up to 95% of connections, makes connectivity patterns dependent on the task and on the biological features of the model, impairs training, and modifies neural dynamics in unpredictable ways. Moreover, we show that it makes training substantially less efficient and prone to performance issues. As an alternative, we find that rectifying weights with the absolute value function is notably more efficient and avoids connectivity pruning.
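The sketch below illustrates, under stated assumptions, the mechanism at issue: when Dale's principle is imposed by rectifying the weight matrix, ReLU clamps any weight that drifts negative to zero and gives it a zero gradient, so the connection is effectively pruned, whereas the absolute value keeps every connection active. This is not the authors' implementation; names such as W_raw and dale_sign are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): contrast ReLU vs.
# absolute-value rectification of recurrent weights under Dale's principle.
import torch

torch.manual_seed(0)
n_exc, n_inh = 8, 2                      # excitatory / inhibitory units
n = n_exc + n_inh
# Diagonal sign matrix: +1 for excitatory columns, -1 for inhibitory columns.
dale_sign = torch.diag(torch.tensor([1.0] * n_exc + [-1.0] * n_inh))

# Raw trainable weights; roughly half the entries start (or drift) negative.
W_raw = torch.randn(n, n, requires_grad=True)

# ReLU rectification: negative raw weights are clamped to zero, so those
# connections disappear from the effective weight matrix.
W_relu = torch.relu(W_raw) @ dale_sign

# Absolute-value rectification: magnitudes stay positive, so every
# connection remains present with the sign dictated by Dale's principle.
W_abs = torch.abs(W_raw) @ dale_sign

# Compare gradient flow: entries with zero gradient cannot recover (pruned).
W_relu.sum().backward()
relu_dead = (W_raw.grad == 0).float().mean().item()
W_raw.grad = None

W_abs.sum().backward()
abs_dead = (W_raw.grad == 0).float().mean().item()

print(f"fraction of zero-gradient (pruned) weights, ReLU: {relu_dead:.2f}")
print(f"fraction of zero-gradient (pruned) weights, abs : {abs_dead:.2f}")
```

Run as-is, the ReLU variant reports a substantial fraction of zero-gradient weights (those initialized negative), while the absolute-value variant reports essentially none, which is the intuition behind the pruning and the proposed alternative described above.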