Classification of Visual Imagery and Imagined Speech EEG based Brain Computer Interfaces using 1D Convolutional Neural Network
Abstract
Non-invasive brain-computer interfaces (BCIs) utilising electroencephalogram (EEG) signals are a popular, affordable and accessible method for establishing communication pathways between the brain and external devices. However, they face the challenges of inter-subject variability, BCI illiteracy and poor machine-learning decoding performance. Two emerging intuitive mental paradigms, Visual Imagery (VI) and Imagined Speech (IS), show promise for optimising the development of non-invasive BCIs, which involves extracting the corresponding neural patterns during the imagined tasks. This study took a comprehensive user-centric approach to build on the current foundation of knowledge on VI and IS EEG-BCIs, utilising an adapted 1D convolutional neural network (1D-CNN) to optimise classification decoding performance. Twenty healthy participants were assessed for their ability to visualise imagery in their minds and performed the VI and IS mental paradigms under two class conditions, "push" and "relax". Alpha and beta suppression was observed during the "push" condition of VI compared with the "relax" condition, and participants who scored higher on the Vividness of Visual Imagery Questionnaire (VVIQ) achieved better VI classification accuracy than those who scored lower. The adapted 1D-CNN model classified the two classes "push" and "relax" with 89.3% and 77.87% accuracy for VI and IS, respectively. These findings contribute to the current body of work on VI BCIs, indicating that VI is a dynamic and plausible alternative to standard BCI paradigms, that VI BCI illiteracy could potentially be controlled for via the VVIQ, and that the 1D-CNN model shows strong potential for classifying VI and IS EEG-BCIs.
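To make the decoding pipeline concrete, the following is a minimal sketch of a 1D-CNN forward pass for two-class ("push" vs. "relax") EEG decoding, written in plain NumPy. All architecture details here (channel count, sampling window, 16 filters of width 7, global average pooling) are illustrative assumptions, not the paper's exact adapted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, bias):
    """Valid 1D convolution: x is (channels, samples),
    kernels is (filters, channels, width)."""
    f, c, w = kernels.shape
    n = x.shape[1] - w + 1
    out = np.zeros((f, n))
    for i in range(n):
        window = x[:, i:i + w]  # (channels, width) slice of the signal
        out[:, i] = np.tensordot(kernels, window,
                                 axes=([1, 2], [0, 1])) + bias
    return out

def forward(x, kernels, bias, w_out, b_out):
    h = np.maximum(conv1d(x, kernels, bias), 0.0)  # ReLU activation
    pooled = h.mean(axis=1)                        # global average pooling
    logits = pooled @ w_out + b_out                # dense layer -> 2 classes
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # softmax probabilities

# Toy input: 8 EEG channels, 256 samples (~1 s at 256 Hz -- an assumption)
x = rng.standard_normal((8, 256))
kernels = rng.standard_normal((16, 8, 7)) * 0.1   # 16 filters, width 7
bias = np.zeros(16)
w_out = rng.standard_normal((16, 2)) * 0.1
b_out = np.zeros(2)

probs = forward(x, kernels, bias, w_out, b_out)
print(probs)  # probabilities for the two classes, summing to 1
```

In practice such a network would be trained (e.g. with backpropagation in a deep-learning framework) on labelled epochs of the recorded VI or IS EEG; this sketch only illustrates how 1D convolutions slide along the time axis of multi-channel EEG to produce class probabilities.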