Soft Active EMG Interface for Machine Learning-Enabled Silent Speech Recognition
Abstract
Silent speech recognition (SSR) provides an alternative communication pathway when audible speech is unavailable. However, conventional approaches are limited by the need for constant facial attachment, privacy concerns, and unstable signal acquisition. Herein, we propose a soft active electromyography (EMG) interface that enables word-level SSR via machine learning. Worn on the hand, the device uses a fingertip electrode that can be placed near the lips to acquire EMG signals only when desired. The interface integrates liquid-metal (LM) interconnects, transparent flexible printed circuit (FPC) electrodes, and elastomer encapsulation to ensure high mechanical stability during finger motion. A deep neural network trained on these stable signals achieved 94.3% accuracy in classifying a 30-word vocabulary, demonstrating robust linguistic discrimination. Furthermore, real-time drone control validated the practicality of this approach in noisy and privacy-sensitive environments where conventional voice recognition fails. This study highlights the potential of soft, wearable EMG systems as secure, intuitive human–machine interfaces.
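To make the word-level classification step concrete, the sketch below shows one plausible form such a pipeline could take: a small 1-D convolutional network that maps a windowed EMG recording to one of 30 word classes. This is a minimal illustration only; the channel count, window length, layer sizes, and the name EMGWordClassifier are assumptions for demonstration and do not reflect the authors' actual architecture.

```python
import torch
import torch.nn as nn

# Placeholder dimensions (assumed, not from the paper).
NUM_CHANNELS = 1      # e.g., a single fingertip electrode channel
WINDOW_SAMPLES = 512  # EMG samples per utterance window
NUM_WORDS = 30        # vocabulary size stated in the abstract


class EMGWordClassifier(nn.Module):
    """Illustrative word-level classifier over windowed EMG signals."""

    def __init__(self):
        super().__init__()
        # 1-D convolutions extract local muscle-activation patterns over time.
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(32, NUM_WORDS)

    def forward(self, x):
        # x: (batch, channels, samples)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)


model = EMGWordClassifier()
dummy_windows = torch.randn(8, NUM_CHANNELS, WINDOW_SAMPLES)  # a batch of 8 EMG windows
logits = model(dummy_windows)
predicted_words = logits.argmax(dim=1)  # indices into the 30-word vocabulary
print(predicted_words.shape)  # torch.Size([8])
```

In a downstream application such as the drone control demonstration, each predicted word index would be looked up in a command table and forwarded to the vehicle's control interface; the specific command set is not described in the abstract.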