Automated speech artefact removal from MEG data utilizing facial gestures and mutual information

Abstract

The ability to speak is one of the most crucial human skills, motivating neuroscientific studies of speech production and speech-related neural dynamics. Increased knowledge in this area enables, for example, the development of rehabilitation protocols for language-related disorders. While our understanding of speech-related neural processes has been greatly enhanced by non-invasive neuroimaging techniques, interpretations have been limited by speech artefacts caused by the activation of facial muscles, which mask important language-related information. Although earlier approaches have applied independent component analysis (ICA), artefact removal remains time-consuming, poorly replicable and prone to inconsistencies between observers, typically requiring manual selection of artefactual components. The criteria for selecting these components have also varied, leaving the speech artefact removal process non-standardized. To address these issues, we propose a pipeline for automated speech artefact removal from MEG data. We developed an ICA-based speech artefact removal routine that uses EMG data measured from the facial muscles during a facial gesture task to isolate the speech-induced artefacts. Additionally, we used mutual information (MI) as a similarity measure between the EMG signals and the ICA-decomposed MEG data, providing a practical way to identify the artefactual components. Our approach removed speech artefacts from MEG data efficiently and in an automated manner. The method can be readily applied to improve the understanding of speech-related cortical dynamics while allowing transparent evaluation of the removed and preserved MEG activation.
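The abstract describes the pipeline only at a high level: decompose the MEG with ICA, compute mutual information between each independent component and the facial-EMG reference signals, and reject components with high MI before reconstructing the cleaned data. The sketch below, written against MNE-Python and scikit-learn, illustrates one way such a routine could look. The file name, the assumption that the EMG is stored as EMG-typed channels within the same recording, the number of ICA components and the z-score rejection threshold are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an EMG-informed, MI-based ICA cleaning routine,
# assuming MNE-Python and scikit-learn. Not the authors' implementation.
import numpy as np
import mne
from sklearn.feature_selection import mutual_info_regression

raw = mne.io.read_raw_fif("speech_task_meg.fif", preload=True)  # hypothetical file

# 1) Decompose the MEG channels with ICA (component count is an assumption).
ica = mne.preprocessing.ICA(n_components=40, random_state=0)
ica.fit(raw, picks="meg")

# 2) Extract ICA source time courses and the EMG reference signals
#    recorded from the facial muscles.
sources = ica.get_sources(raw).get_data()   # shape: (n_components, n_times)
emg = raw.get_data(picks="emg")             # shape: (n_emg_channels, n_times)

# 3) Mutual information between each component and each EMG channel.
mi = np.array([
    mutual_info_regression(sources.T, emg_ch)   # (n_components,) per EMG channel
    for emg_ch in emg
])                                              # shape: (n_emg, n_components)
mi_max = mi.max(axis=0)                         # strongest EMG coupling per component

# 4) Flag components whose MI is an outlier (z > 3 is an assumed criterion)
#    and remove them from the MEG data.
z = (mi_max - mi_max.mean()) / mi_max.std()
ica.exclude = list(np.where(z > 3.0)[0])
raw_clean = ica.apply(raw.copy())
```

In practice the MI step can be computed on downsampled or band-pass-filtered signals to keep the estimation fast, and the rejection criterion (fixed threshold, z-score, or a data-driven cutoff) is where implementations are most likely to differ from this sketch.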
