Continuous Reaching and Grasping with a BCI Controlled Robotic Arm in Healthy and Stroke-Affected Individuals

Abstract

Recent advances in signal processing have enabled non-invasive Brain-Computer Interfaces (BCIs) to control assistive devices, such as robotic arms, directly from users' EEG signals. However, the applications of these systems are currently limited by the low signal-to-noise ratio and spatial resolution of the EEG from which user intention is decoded. In this study, we propose a motor-imagery (MI) paradigm, inspired by the mechanics of a computer mouse, that adds a "click" signal to an established 2D movement BCI paradigm. The additional output signal increases the degrees of freedom of the BCI system and may enable more complex tasks. We evaluated this paradigm using deep-learning (DL) based signal processing with both healthy subjects and stroke survivors in online BCI tasks derived from two potential applications: clicking on virtual targets, and moving physical objects with a robotic arm in a continuous reach-and-grasp task. The results show that subjects were able to control movement and clicking simultaneously, grabbing, moving, and placing an average of up to 7 cups in a 5-minute run with the robotic arm. The proposed paradigm adds a degree of freedom to EEG-based BCIs and improves upon existing systems by enabling continuous control of reach-and-grasp tasks rather than selection from a discrete list of predetermined actions. These experiments suggest that BCIs may soon be used to control computer cursors or robotic arms for complex real-world or clinical applications, potentially improving the lives of both healthy individuals and motor-impaired patients.
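The mouse-like paradigm described above can be illustrated with a minimal control-loop sketch. This is not the authors' implementation: the decoded 2D velocity and "click" probability are stand-ins for the outputs of the study's deep-learning decoders, and all names, thresholds, and update rules here are illustrative assumptions.

```python
# Hypothetical sketch of a move-plus-click BCI control loop.
# Assumption: an upstream MI decoder (not shown) emits a 2D velocity
# (vx, vy) and a click probability each cycle; the "click" toggles the
# robotic arm's gripper, mirroring a computer mouse button.
from dataclasses import dataclass


@dataclass
class BCIState:
    x: float = 0.0
    y: float = 0.0
    gripper_closed: bool = False  # "click" output toggles the grasp


def step(state, vx, vy, click_prob, dt=0.1, click_threshold=0.8):
    """Advance one control cycle.

    vx, vy      -- decoded 2D velocities (assumed decoder outputs)
    click_prob  -- decoded probability of the "click" intention
    """
    # Continuous reach: integrate decoded velocity into position.
    state.x += vx * dt
    state.y += vy * dt
    # Discrete grasp: fire the click only above a confidence threshold.
    if click_prob >= click_threshold:
        state.gripper_closed = not state.gripper_closed
    return state


# Example: reach toward a cup, then "click" to close the gripper.
s = BCIState()
s = step(s, vx=1.0, vy=0.5, click_prob=0.2)   # reach phase: movement only
s = step(s, vx=0.0, vy=0.0, click_prob=0.95)  # grasp phase: click fires
print(s.x, s.y, s.gripper_closed)  # → 0.1 0.05 True
```

The key design point the abstract emphasizes is that movement and clicking are decoded and controlled simultaneously, so a single loop like this can chain reach, grasp, move, and release without selecting from a predetermined action menu.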