Beyond retinotopy: exploiting native visual representations in cortical neuroprostheses for vision loss remediation
Abstract
Cortical prosthetic systems offer a promising path to restoring vision in blindness by stimulating neurons in the visual cortex to evoke visual percepts. To encode visual information effectively, however, stimulation must target neurons according to their functional encoding properties. Current stimulation protocols rely on retinotopic information alone, ignoring other key encoding properties, and therefore fail to reproduce complex visual percepts. We demonstrate that incorporating orientation selectivity alongside retinotopy to guide stimulation dramatically improves the fidelity of the evoked activity to the underlying neural code. We propose a Bottlenecked Rotation-Equivariant CNN (BRCNN) and show that neural responses can be predicted to a large degree from retinotopy and orientation preference alone. Using this model, we design a retinotopy- and orientation-aware stimulation protocol and validate it in a state-of-the-art, large-scale simulation framework of optogenetic stimulation in primary visual cortex. Our protocol elicits neural activity patterns with substantially higher correlation to natural-vision responses than retinotopy-only approaches.
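The abstract names a Bottlenecked Rotation-Equivariant CNN (BRCNN) but gives no implementation details. As a rough illustration of the idea only, the sketch below assumes a discrete four-rotation (C4) group and integer-pixel retinotopy; the class names, layer sizes, and readout are hypothetical and are not the authors' architecture. The "bottleneck" here is that each neuron's predicted response depends only on the feature map at its retinotopic location and preferred-orientation channel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class C4LiftingConv(nn.Module):
    """One learned kernel shared across four 90-degree rotations, producing a
    feature map with an explicit orientation axis (rotation-equivariant)."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 7):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, in_ch, H, W)
        # Stack the kernel rotated by 0/90/180/270 degrees along out-channels.
        kernels = torch.cat([torch.rot90(self.weight, r, dims=(-2, -1))
                             for r in range(4)], dim=0)
        y = F.conv2d(x, kernels, padding=self.weight.shape[-1] // 2)
        B, _, H, W = y.shape
        return y.view(B, 4, -1, H, W)  # (B, 4 orientations, out_ch, H, W)

class BRCNNSketch(nn.Module):
    """Bottleneck idea: each neuron i is described only by a retinotopic pixel
    location (x_i, y_i) and a preferred-orientation index o_i; its response is
    read out from that single location/orientation of the feature map."""
    def __init__(self, xs, ys, oris, in_ch: int = 1, feat_ch: int = 16):
        super().__init__()
        self.conv = C4LiftingConv(in_ch, feat_ch)
        self.register_buffer("xs", xs)      # (N,) integer pixel x-coordinates
        self.register_buffer("ys", ys)      # (N,) integer pixel y-coordinates
        self.register_buffer("oris", oris)  # (N,) orientation indices in {0..3}
        self.readout = nn.Parameter(torch.randn(len(xs), feat_ch) * 0.1)

    def forward(self, img: torch.Tensor) -> torch.Tensor:  # (B, in_ch, H, W)
        feats = F.relu(self.conv(img))  # (B, 4, C, H, W)
        # Advanced indexing pulls one (orientation, y, x) triple per neuron;
        # the result comes back as (N, B, C), so move batch to the front.
        per_neuron = feats[:, self.oris, :, self.ys, self.xs].permute(1, 0, 2)
        return (per_neuron * self.readout).sum(-1)  # (B, N) predicted responses

# Hypothetical usage: predict responses of 3 neurons to a batch of 2 images.
model = BRCNNSketch(xs=torch.tensor([4, 10, 20]),
                    ys=torch.tensor([5, 12, 18]),
                    oris=torch.tensor([0, 1, 3]))
responses = model(torch.randn(2, 1, 32, 32))  # -> shape (2, 3)
```

A stimulation protocol built on such a model would, under the same assumptions, run a target image through the network and drive each neuron in proportion to its bottlenecked predicted response, so that stimulation respects both retinotopy and orientation preference rather than retinotopy alone.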