Decoding region-level visual functions from invasive EEG monkey data

Abstract

Decoding vision is an ambitious task, as it aims to transform scalar brain activity into dynamic images with refined shapes, colors, and movements. In familiar environments, the brain may generate activity that resembles specific patterns, thereby facilitating decoding. Can an artificial neural network (ANN) decipher such latent patterns? Here, we explore this question using invasive electroencephalography (EEG) data from monkeys. By decoding multi-region brain activity, the ANN effectively captures the functional roles of individual regions as a consequence of minimizing visual reconstruction errors. For example, the ANN recognizes that regions V4 and LIP are involved in visual color and shape processing, while MT predominantly handles visual motion, aligning with the regional visual functions evident in the brain. The ANN likely reconstructs vision by seizing on hidden spike patterns that represent stimuli distinctly in a two-dimensional plane. Furthermore, during the encoding process of transforming visual stimuli into neuronal activity, optimal performance is achieved in regions closely associated with visual processing.
