Hybrid_ASL: Cross-Domain Transfer Learning for High-Accuracy American Sign Language Recognition

Abstract

Communication barriers for individuals with hearing impairments persist due to limited assistive resources. This paper introduces Hybrid_ASL, a deep learning model that uses cross-domain transfer learning to classify American Sign Language (ASL) hand gestures with high accuracy. Built on a transfer learning framework, Hybrid_ASL adapts knowledge from diverse visual domains and tailors its architecture to ASL recognition. Trained on a dataset of 87,000 ASL images, the model underwent iterative fine-tuning to balance accuracy and computational efficiency. Comparative experiments against state-of-the-art architectures, including convolutional neural networks and vision transformers, show that Hybrid_ASL achieves 99.98% accuracy, with matching precision, recall, and F1-score, while maintaining low architectural complexity. These results demonstrate the efficacy of transfer learning and model adaptation in building robust assistive technologies, improving accessibility and quality of life for the hearing-impaired community.
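The abstract does not specify Hybrid_ASL's backbone or training recipe. As a minimal PyTorch sketch of the cross-domain transfer learning pattern it describes, the code below adapts an ImageNet-pretrained MobileNetV3 backbone to ASL gesture classes; the 29-class output head, the freeze-then-unfreeze schedule, and the hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumption: the 87,000-image ASL dataset uses 29 gesture classes
# (26 letters plus space/delete/nothing); adjust to the actual label set.
NUM_CLASSES = 29

# Start from a backbone pretrained on a different visual domain (ImageNet),
# the core idea behind cross-domain transfer learning.
model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)

# Stage 1: freeze the pretrained feature extractor so only the new head trains.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier layer with one sized for ASL gestures.
in_features = model.classifier[-1].in_features
model.classifier[-1] = nn.Linear(in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

def train_epoch(loader, device="cpu"):
    """One pass over (image, label) batches of normalized 224x224 tensors."""
    model.to(device).train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()

# Stage 2 (iterative fine-tuning): once the head converges, unfreeze the
# upper feature blocks, rebuild the optimizer with the newly trainable
# parameters, and continue at a lower learning rate, e.g.:
# for param in model.features[-3:].parameters():
#     param.requires_grad = True
```

Training only the new head first, then progressively unfreezing upper blocks, is one common way to realize the iterative fine-tuning and accuracy/efficiency trade-off the abstract mentions; a small backbone like MobileNetV3 is chosen here only to reflect the paper's emphasis on low architectural complexity.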