Transformer Networks Enable Robust Generalization of Source Localization for EEG Measurements

Abstract

An electroencephalogram (EEG) is an electrical measurement of brain activity recorded by electrodes placed on the scalp surface. Once EEG measurements are collected, numerical methods and algorithms can be applied to identify the source locations of the underlying brain activity. These traditional techniques often fail on measured data, which are prone to noise. Recent techniques have employed neural network models to solve the localization problem for various use cases and data setups. These approaches, however, rely on underlying assumptions that make it difficult to generalize the results beyond their original training setups. In this work, we present a transformer-based model for single- and multi-source localization that is specifically designed to handle the difficulties that arise in EEG data. Hundreds of thousands of simulated EEG measurements are generated from known brain locations to train this machine learning model. We establish a training and evaluation framework for analyzing the effectiveness of the transformer model by explicitly considering source region density, noise levels, electrode dropout, and other factors. Across this broad range of scenarios, the localization error of the transformer model is consistently lower than that of the other classical and machine learning approaches. Additionally, we perform a thorough ablation study of the network configuration and training pipeline. The code and data used in this work will be made publicly available upon publication.
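The abstract describes a transformer that maps multichannel EEG measurements to source locations. As a minimal illustrative sketch (not the authors' architecture; the dimensions, random weights, single attention layer, and mean-pooled regression head are all assumptions for demonstration), treating each electrode as a token and regressing 3D source coordinates could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 64  # EEG channels, each treated as one token
d_model = 32       # assumed embedding dimension

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over electrode tokens."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

# Random weights stand in for trained parameters.
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
W_out = rng.standard_normal((d_model, 3)) * 0.1  # regression head -> (x, y, z)

# One simulated EEG sample: a scalar potential per electrode, concatenated
# with a per-electrode positional embedding (a stand-in for electrode layout).
potentials = rng.standard_normal((n_electrodes, 1))
pos_embed = rng.standard_normal((n_electrodes, d_model - 1))
tokens = np.concatenate([potentials, pos_embed], axis=1)

attended = attention(tokens, Wq, Wk, Wv)
source_xyz = attended.mean(axis=0) @ W_out  # pooled single-source prediction
print(source_xyz.shape)
```

Because every electrode attends to every other, such a model can in principle down-weight noisy or dropped-out channels, which is one motivation for attention-based localization; multi-source variants would emit several coordinate triples rather than one.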
