Research Project: Sign Language Translator Aurora

Abstract

This study presents the design and implementation of an embedded system for real-time translation of Peruvian Sign Language into both text and synthesized speech. The system employs a microcontroller equipped with a camera module to capture sign language gestures. The captured gestures are classified by convolutional neural networks trained with the open-source frameworks TensorFlow and Keras, enabling accurate recognition and translation into words. The resulting text is then transferred to a second embedded unit, which renders it as audible speech through a speaker module. Additionally, a secondary subsystem consisting of a microcontroller and a proximity sensor enhances user interaction: when a person is detected nearby, an indicator light is activated to signal system readiness, improving usability in public and academic settings. The project aims to promote the social inclusion of individuals with hearing impairments by offering a low-cost, accessible solution to communication barriers. By combining computer vision, machine learning, and embedded systems, the proposed system contributes to assistive technology that can enhance autonomy, communication, and participation for people who use sign language. It is particularly suitable for classrooms, meetings, and other environments where inclusive communication tools are essential.
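
The abstract names TensorFlow and Keras as the training stack but does not disclose the network architecture. The following is a minimal sketch of a small convolutional classifier for fixed-size gesture frames; the input shape, number of sign classes, and layer widths are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a gesture-classification CNN in TensorFlow/Keras.
# INPUT_SHAPE and NUM_CLASSES are hypothetical; the paper does not
# specify the actual frame size or vocabulary of recognized signs.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 24           # hypothetical number of recognized signs
INPUT_SHAPE = (64, 64, 3)  # hypothetical camera frame size after resizing

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        # Two convolution/pooling stages extract spatial features
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        # One output unit per sign; softmax yields class probabilities
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model().summary()
```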
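
The abstract does not name the speech-synthesis component running on the second embedded unit. As one plausible sketch, an offline text-to-speech library such as pyttsx3 could render each recognized word through the speaker module; the library choice and speaking rate here are assumptions.

```python
# Hedged sketch: the paper does not state which TTS engine is used.
# pyttsx3 is one offline option that runs on small Linux boards.
import pyttsx3

def speak(text: str) -> None:
    """Render recognized sign text as audible speech."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # illustrative speaking rate
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    speak("hola")  # example recognized word
```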
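
The readiness indicator is described only at the behavioral level: proximity detected, light on. A minimal sketch follows, assuming a Raspberry Pi-class board, a digital-output proximity/PIR sensor, and an LED driven via RPi.GPIO; the pin assignments and polling interval are illustrative, since the paper does not identify the board or sensor.

```python
# Hedged sketch of the presence-indicator subsystem. Assumes a
# Raspberry Pi-class device with a digital-output proximity sensor
# on SENSOR_PIN and an indicator LED on LED_PIN (both hypothetical).
import time
import RPi.GPIO as GPIO

SENSOR_PIN = 17  # illustrative GPIO pin for the sensor output
LED_PIN = 27     # illustrative GPIO pin for the readiness indicator

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    while True:
        # Light the indicator while a person is detected nearby
        GPIO.output(LED_PIN, GPIO.input(SENSOR_PIN))
        time.sleep(0.1)
except KeyboardInterrupt:
    GPIO.cleanup()
```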
