A deep learning framework for understanding cochlear implants


Abstract

Sensory prostheses replace dysfunctional sensory organs with electrical stimulation but currently fail to restore normal perception. Outcomes may be limited by stimulation strategies, neural degeneration, or suboptimal decoding by the brain. We propose a deep learning framework to evaluate these issues by estimating best-case outcomes with task-optimized decoders operating on simulated prosthetic input. We applied the framework to cochlear implants, the standard treatment for deafness, by training artificial neural networks to recognize and localize sounds using simulated auditory nerve input. The resulting models exhibited speech recognition and sound localization that were worse than those of normal-hearing listeners, and on par with the best human cochlear implant users, with similar results across the three main stimulation strategies in current use. Speech recognition depended heavily on the extent of decoder optimization for implant input, with lesser influence from other factors. The results identify performance limits of current devices and demonstrate a model-guided approach for understanding the limitations and potential of sensory prostheses.
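To make the pipeline concrete, the sketch below illustrates the first stage of such a framework: converting a sound into per-electrode stimulation levels in the style of a CIS (continuous interleaved sampling) strategy, one of the standard stimulation strategies the abstract refers to. This is a hypothetical, heavily simplified stand-in, not the authors' actual model: the function name `simulate_ci_input`, the band edges, and the envelope estimate are all illustrative assumptions. A task-optimized decoder network would then be trained on outputs like these.

```python
import numpy as np

def simulate_ci_input(sound, fs=16000, n_electrodes=16):
    """Crude CIS-style stand-in: split the sound into log-spaced
    frequency bands and use each band's energy as the stimulation
    level on one electrode. (Illustrative only; a real simulation
    would model pulse trains and auditory nerve responses.)"""
    n = len(sound)
    spectrum = np.fft.rfft(sound)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # Log-spaced band edges roughly covering the speech range.
    edges = np.geomspace(100, 7000, n_electrodes + 1)
    envelopes = np.zeros(n_electrodes)
    for i in range(n_electrodes):
        band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        if band.any():
            # RMS magnitude in the band as a crude envelope estimate.
            envelopes[i] = np.sqrt(np.mean(np.abs(spectrum[band]) ** 2))
    return envelopes

# A pure tone should mainly excite the electrode whose band contains it.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz tone
activity = simulate_ci_input(tone, fs=fs)
print(int(np.argmax(activity)))
```

In the full framework, a vector like `activity` (computed frame by frame over time) would drive a simulated auditory nerve, and the decoder network would be optimized end to end for speech recognition or localization on that input.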
