Continuous Real-Time Decoding in Wearable Human-Machine Interfaces: Advancing Neuroprosthetic Controls with Optomyography

Abstract

Wearable devices are gaining popularity for enabling human-machine interfaces, such as typing, through wristbands that convert surface electromyographic (sEMG) signals into computer commands. However, traditional sEMG methods face several limitations, including the trade-offs between wet and dry electrodes, challenges in sensor fixation, bias toward superficial muscles, signal cross-talk, instability over time, and susceptibility to electrical and mechanical artifacts. In this study, we present an alternative approach to sampling and decoding muscle contractions using optomyography (OMG). Our OMG system, a wristband with 50 data channels, facilitates various computer mouse-like controls. Decoding is performed by a compact yet effective fully-connected neural network trained on data from a center-out task involving hand gestures. Eight healthy participants and one amputee successfully mastered OMG-based controls, including target acquisition at different screen locations and playing Tetris. Performance improvements with training were assessed using metrics such as target acquisition time relative to distance, Euclidean deviation from a straight trajectory, time differences between actual and optimal trajectories, and time spent near the target before acquisition. This work demonstrates the potential of next-generation wearable devices to surpass traditional approaches in performance, accuracy, stability, and versatility.
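Two of the trajectory-quality metrics mentioned above can be computed directly from cursor samples. The sketch below is illustrative only, not the authors' implementation: the function names are hypothetical, and it assumes a uniformly sampled 2-D cursor trajectory with a known sampling interval.

```python
import numpy as np

def straight_line_deviation(traj, start, target):
    """Mean Euclidean deviation of a cursor trajectory from the straight
    start-to-target line (illustrating the metric described in the abstract)."""
    traj = np.asarray(traj, dtype=float)
    start = np.asarray(start, dtype=float)
    d = np.asarray(target, dtype=float) - start
    d_hat = d / np.linalg.norm(d)          # unit vector along the ideal path
    rel = traj - start
    along = rel @ d_hat                     # component along the ideal path
    perp = rel - np.outer(along, d_hat)     # component orthogonal to it
    return float(np.linalg.norm(perp, axis=1).mean())

def dwell_time_near_target(traj, target, radius, dt):
    """Time spent within `radius` of the target, given sampling interval `dt`
    (a stand-in for "time spent near the target before acquisition")."""
    dist = np.linalg.norm(np.asarray(traj, dtype=float)
                          - np.asarray(target, dtype=float), axis=1)
    return float(np.count_nonzero(dist < radius) * dt)
```

For a trajectory `[[0, 0], [5, 2], [10, 0]]` from `(0, 0)` to `(10, 0)`, the per-sample deviations are 0, 2, and 0 pixels, giving a mean deviation of 2/3.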