Decoding in the fourth dimension: Classification of temporal patterns and their generalization across locations

Abstract

Neuroscience research has increasingly used decoding techniques, in which multivariate statistical methods identify patterns in neural data that allow the classification of experimental conditions or participant groups. Typically, the features used for decoding are spatial in nature, including voxel patterns and electrode locations. However, the strength of many neurophysiological recording techniques such as electroencephalography or magnetoencephalography lies in their rich temporal, rather than spatial, content. The present report proposes a new decoding method that relies on the time information contained in neural time series. This information is then used in a subsequent step, generalization across location (GAL), which characterizes the relationship between sensor locations based on their ability to cross-decode. Two datasets are used to demonstrate the use of this method, referred to as time-GAL, involving (1) event-related potentials in response to affective pictures and (2) steady-state visual evoked potentials in response to aversively conditioned grating stimuli. In both cases, experimental conditions were successfully decoded based on the temporal features contained in the neural time series. Cross-decoding occurred in regions known to be involved in visual and affective processing. We conclude that the time-GAL approach holds promise for analyzing neural time series from a wide range of paradigms and measurement domains, providing an assumption-free method for quantifying differences in temporal patterns of neural information processing and for determining whether these patterns are shared across sensor locations.
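
The abstract outlines a two-step procedure: decode experimental conditions from each sensor's time course, then test whether the resulting classifiers transfer between sensor locations. The sketch below is a hedged illustration of that general idea using scikit-learn on synthetic data; the array shapes, the choice of logistic regression, and the variable names (epochs, labels, gal) are assumptions for illustration, not details taken from the article.

```python
# Minimal sketch of a time-GAL-style analysis, assuming epoched EEG/MEG data in a
# NumPy array of shape (n_trials, n_sensors, n_timepoints) plus one binary
# condition label per trial. All names and parameters here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 150
epochs = rng.standard_normal((n_trials, n_sensors, n_times))  # placeholder data
labels = rng.integers(0, 2, size=n_trials)                    # two conditions

# Step 1: decode conditions separately at each sensor, using that sensor's full
# time course as the feature vector for each trial (temporal, not spatial, features).
per_sensor_acc = np.empty(n_sensors)
for s in range(n_sensors):
    X = epochs[:, s, :]                                       # trials x timepoints
    per_sensor_acc[s] = cross_val_score(
        LogisticRegression(max_iter=1000), X, labels, cv=5).mean()

# Step 2 (generalization across location): train on one sensor's temporal pattern
# and test on held-out trials from another sensor, yielding a sensor-by-sensor
# cross-decoding matrix that indexes shared temporal information.
train_idx, test_idx = train_test_split(
    np.arange(n_trials), test_size=0.25, random_state=0, stratify=labels)
gal = np.empty((n_sensors, n_sensors))
for s_train in range(n_sensors):
    clf = LogisticRegression(max_iter=1000).fit(
        epochs[train_idx][:, s_train, :], labels[train_idx])
    for s_test in range(n_sensors):
        gal[s_train, s_test] = clf.score(
            epochs[test_idx][:, s_test, :], labels[test_idx])
```

High off-diagonal values in the hypothetical `gal` matrix would indicate sensor pairs whose temporal patterns cross-decode, which is the kind of relationship the GAL step is described as characterizing.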