Edge-Optimized AI-Powered Translator for Indian Sign Language (ISL)


Abstract

Sign language translation bridges the communication gap between the Deaf and hearing communities. According to the World Health Organization, over 430 million people worldwide experience hearing loss, including nearly 18 million Deaf individuals in India who rely on Indian Sign Language (ISL). However, continuous sign language translation presents significant challenges due to complex spatio-temporal dependencies, signer variations, and contextual ambiguity. This work proposes an efficient, edge-based deep learning framework for continuous ISL translation using landmark-based motion representations. By extracting structured hand, pose, and facial keypoints instead of processing raw RGB frames, the system reduces computational complexity while preserving linguistic information. An ensemble-based classifier with probabilistic modeling ensures robust recognition, while a lightweight student model enables mobile deployment. Experimental evaluation on the dataset demonstrates reliable recognition performance with low-latency inference, validating the suitability of the approach for real-time, resource-constrained environments.
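To make the landmark-based idea concrete, the sketch below illustrates two points from the abstract under stated assumptions: (1) why a structured keypoint vector is far smaller than a raw RGB frame, and (2) how an ensemble can combine probabilistic outputs via soft voting. The keypoint counts (two 21-point hands, a 33-point pose, a 468-point face) follow a MediaPipe-Holistic-style layout and are an assumption, not a detail confirmed by the paper; the ensemble is a generic soft-voting average, not necessarily the authors' exact classifier.

```python
import numpy as np

# Assumed landmark layout (MediaPipe-Holistic-style; not confirmed by the paper):
HAND, POSE, FACE = 21, 33, 468   # keypoints per hand, body pose, face mesh
COORDS = 3                       # (x, y, z) per keypoint

# Structured feature vector vs. a raw RGB frame at a typical 224x224 resolution
landmark_dim = (2 * HAND + POSE + FACE) * COORDS   # 1629 values per frame
rgb_dim = 224 * 224 * 3                            # 150528 values per frame
reduction = rgb_dim / landmark_dim                 # ~92x fewer input values

def soft_vote(prob_list):
    """Soft voting: average the class-probability vectors of ensemble members
    and return the index of the most probable class plus the averaged vector."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return int(np.argmax(avg)), avg

# Toy example: three ensemble members scoring 4 hypothetical sign classes
p1 = np.array([0.6, 0.2, 0.1, 0.1])
p2 = np.array([0.3, 0.4, 0.2, 0.1])
p3 = np.array([0.5, 0.3, 0.1, 0.1])
label, avg = soft_vote([p1, p2, p3])
print(landmark_dim, round(reduction, 1), label)  # 1629 92.4 0
```

The roughly two-orders-of-magnitude reduction in input size is one plausible way such a pipeline achieves low-latency inference on edge devices; averaging member probabilities rather than hard labels preserves each model's uncertainty, which matches the abstract's mention of probabilistic modeling.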
