A Unified Theory of Response Sparsity and Variability for Energy-Efficient Neural Coding

Abstract

Understanding how cortical neurons use dynamic firing patterns to represent sensory signals is a central challenge in neuroscience. Decades of research have shown that cortical neuronal activity exhibits high variability, typically quantified by the coefficient of variation (CV), suggesting intrinsic randomness. Conversely, substantial evidence indicates that cortical neurons display high response sparseness, indicative of efficient encoding. The apparent contradiction between these neural coding properties, stochastic yet efficient, has lacked a unified theoretical framework. This study aims to resolve this discrepancy. We conducted a series of analyses to establish a direct relational function between CV and sparseness, proving that the two are intrinsically correlated, or equivalent, across different statistical distributions of neural activity. We further derive a function showing that both irregularity and sparsity in neuronal activity are positive functions of energy-efficient coding capacity, quantified by Information-Cost Efficiency (ICE). This suggests that the observed high irregularity and sparsity of cortical activity result from a shared mechanism optimized to maximize information encoding capacity while minimizing cost. Furthermore, we introduce a CV-maximization algorithm that generates kernel functions replicating the receptive fields of the primary visual cortex, indicating that the response functions of visual cortical neurons are optimal energy-efficient coding operators for natural images. Hence, this framework unifies the concepts of irregularity and sparsity in neuronal activity by linking them to a common mechanism of coding efficiency, offering deeper insights into neural coding strategies.
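The abstract does not state which definitions of irregularity and sparseness the analysis uses, so the sketch below is illustrative only: it assumes the common interspike-interval CV and the Treves-Rolls lifetime sparseness index, and the helper names cv_of_isis and treves_rolls_sparseness are hypothetical, not taken from the article.

```python
import numpy as np

def cv_of_isis(spike_times):
    """Coefficient of variation of interspike intervals (spiking irregularity)."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()

def treves_rolls_sparseness(rates):
    """Treves-Rolls lifetime sparseness of a non-negative firing-rate vector.

    Returns a value in [0, 1]; values near 1 indicate highly sparse
    (strongly peaked) responses.
    """
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    a = rates.mean() ** 2 / np.mean(rates ** 2)  # activity ratio, in [1/n, 1]
    return (1.0 - a) / (1.0 - 1.0 / n)

# Illustrative inputs: a Poisson-like spike train (ISI CV near 1) and an
# exponentially distributed rate vector (sparseness near 0.5).
rng = np.random.default_rng(0)
spike_times = np.cumsum(rng.exponential(scale=0.1, size=1000))
rates = rng.exponential(scale=1.0, size=500)

print("ISI CV:", cv_of_isis(spike_times))
print("Treves-Rolls sparseness:", treves_rolls_sparseness(rates))
```

Under the framework described in the abstract, these two quantities would increase together with energy-efficient coding capacity (ICE); the exact relational function linking them is given in the full article, not reproduced here.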
