Facial Expression-Driven Rehabilitation Robotics: A Machine Learning Approach with Stretchable Sensors

Abstract

Wearable sensor technology has shown significant promise in advancing personalized healthcare and rehabilitation. This study presents the implementation of stretchable sensors for facial expression recognition (FER) to control an upper limb rehabilitation robot. Traditional rehabilitation approaches, which rely heavily on therapist-administered exercises, face limitations such as variability in patient engagement and environmental constraints. To address these issues, we developed a low-cost, flexible sensor system capable of detecting subtle facial muscle deformations and translating them into control commands via a Random Forest machine learning algorithm. Our system classifies four emotional states (Neutral, Happy, Sad, and Disgust) with a classification accuracy of 92.4%. Real-time performance tests on five subjects controlling an elbow rehabilitation robot demonstrated an average FER accuracy of 75%. Motor speed adjustments, driven by the recognized expressions, dynamically adapt to patient comfort levels, thereby personalizing the rehabilitation process. The stretchable sensor system offers advantages over traditional methods such as EMG and computer vision, including greater reliability, low computational demands, and reduced susceptibility to noise. This study highlights the potential of emotion-driven adaptive rehabilitation robots in improving patient outcomes and engagement. Future work will explore refining sensor placement, enhancing machine learning models, and extending the system to other rehabilitation contexts. Integrating such intelligent wearable systems marks a significant step toward more accessible and empathetic tele-rehabilitation solutions.
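
To make the described pipeline concrete, the sketch below shows one way a Random Forest classifier could map stretchable-sensor readings to expression labels and then to a motor-speed adjustment. It is illustrative only: the feature layout, synthetic data, `EXPRESSIONS` labels, and `SPEED_DELTA` mapping are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical feature layout and speed mapping,
# not the authors' released code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

EXPRESSIONS = ["Neutral", "Happy", "Sad", "Disgust"]

# Placeholder data: rows are hypothetical stretchable-sensor feature vectors
# (e.g., normalized strain readings from several facial sensor channels).
rng = np.random.default_rng(0)
n_samples, n_channels = 400, 6
X = rng.normal(size=(n_samples, n_channels))
y = rng.integers(0, len(EXPRESSIONS), size=n_samples)  # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Hypothetical mapping from a recognized expression to a motor-speed change,
# mirroring the paper's idea of adapting speed to patient comfort.
SPEED_DELTA = {"Neutral": 0.0, "Happy": +0.1, "Sad": -0.1, "Disgust": -0.2}

def adjust_speed(current_speed: float, features: np.ndarray) -> float:
    """Classify one feature vector and return an updated motor speed in [0, 1]."""
    label = EXPRESSIONS[clf.predict(features.reshape(1, -1))[0]]
    return float(np.clip(current_speed + SPEED_DELTA[label], 0.0, 1.0))

print("new speed:", adjust_speed(0.5, X_test[0]))
```

In a real deployment, the synthetic arrays would be replaced by labeled sensor recordings, and the speed update would feed the rehabilitation robot's motor controller rather than a print statement.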
