Inner Speech Classification Using Deep Learning Techniques for EEG-Based Brain-Computer Interfaces

Abstract

Brain-Computer Interface (BCI) is a technology that enables direct communication between the brain and external devices without the use of muscles. Inner speech refers to the silent process of mentally imagining or "speaking" to oneself without any vocal output, in contrast to overt speech. Inner speech classification is an emerging BCI paradigm that seeks to decode these silent thoughts, or imagined words, from brain signals, typically captured via electroencephalography (EEG). A significant difficulty for this type of BCI is low classification accuracy in multi-class tasks. This work presents a comprehensive study of inner speech classification from EEG signals. We exploit sophisticated deep learning techniques, including long short-term memory (LSTM), bidirectional LSTM (BLSTM), Transformer, and an ensemble method based on a voting classifier, with the aim of exploring the ability of these architectures to decode inner speech. Our experiments were based on two benchmark four-class datasets, Thinking Out Loud and BCI 2a, involving different topics and situations. The proposed methodology achieved high classification accuracies of 98.81% and 90.58% with the Transformer model on the Thinking Out Loud and BCI 2a datasets, respectively, which demonstrates the success of our approach. In addition, the features generated by our models markedly reduce intra-class variability while preserving vital inter-class differences, as illustrated by t-SNE visualizations. These results show that the algorithm developed in this paper may be confidently applied to multi-class classification of inner speech from EEG signals.
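The abstract mentions an ensemble method via a voting classifier over the three base models. A minimal sketch of one common variant, hard majority voting, is shown below; the paper does not specify the fusion rule, so this is an illustrative assumption, and the per-model label lists are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble.

    `predictions` is a list of lists: one inner list per base model,
    holding that model's predicted class label for each EEG trial.
    Returns one fused label per trial (ties resolve to the label
    seen first among the models, per Counter ordering).
    """
    n_trials = len(predictions[0])
    fused = []
    for i in range(n_trials):
        votes = Counter(model_preds[i] for model_preds in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Hypothetical per-trial labels from three base models
# (e.g. LSTM, BLSTM, Transformer) on five four-class trials.
lstm_preds  = [0, 1, 2, 3, 1]
blstm_preds = [0, 1, 2, 1, 1]
trans_preds = [0, 2, 2, 3, 0]
print(majority_vote([lstm_preds, blstm_preds, trans_preds]))  # → [0, 1, 2, 3, 1]
```

A soft-voting variant would instead average per-class probabilities from each model before taking the argmax, which can help when base models are well calibrated.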
