Arabic Sign Language (ARSL) Recognition and Translation into Text
Abstract
Millions of deaf and hard-of-hearing individuals across the Arab world rely on sign language, yet interpreter access remains limited, especially given the diversity among Arabic Sign Language (ArSL) dialects. Leveraging recent strides in deep learning, our study explores scalable, real-time recognition methods to bridge this communication gap. We trained and evaluated two models, MobileNetV2 with a GRU and an Inflated 3D ConvNet (I3D), on the publicly available "Arabic Sign Language Dataset". This dataset, sourced from Kaggle, includes over 8,400 labeled video clips spanning 20 isolated ArSL classes, contributed by 72 participants. We partitioned it into training (6,749 clips), validation (844), and test (844) sets in an 80/10/10 split. The MobileNetV2+GRU model outperformed I3D, reaching 96% validation accuracy and 97% test accuracy, alongside over 95% in both precision and recall. These results demonstrate that lightweight, mobile-friendly architectures can deliver near state-of-the-art performance, offering a promising step toward making ArSL universally accessible.