ASLAD-190K: Arabic Sign Language Alphabet Dataset consisting of 190,000 Images

Abstract

This paper presents ASLAD-190K, a comprehensive dataset comprising 190,000 annotated RGB images representing 32 distinct Arabic Sign Language letters and gestures. Designed to support machine learning and computer vision research, ASLAD-190K offers images collected under varying lighting conditions, distances, and backgrounds to simulate real-world complexity. Random Under-Sampling was applied to address class imbalance, yielding a balanced subset of 96,000 images (3,000 per class). Additionally, key hand landmarks and 26 geometric angles were extracted to enhance gesture recognition. The dataset is publicly available at https://data.mendeley.com/datasets/2fgpn5dwgc/2, aiming to accelerate the development of assistive technologies for the Arabic-speaking deaf and hard-of-hearing community.
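The balancing step described above (Random Under-Sampling across the 32 classes, reducing 190,000 images to 96,000, i.e., 3,000 per class) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the synthetic `labels` array and the fixed per-class target are assumptions standing in for the real dataset's class index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class labels standing in for the dataset's 32 letter classes
# (illustrative only; the real dataset has unequal per-class counts).
labels = rng.integers(0, 32, size=190_000)

# Random Under-Sampling: keep at most a fixed number of samples per class.
# 96,000 images / 32 classes = 3,000 per class.
per_class = 3_000
kept = []
for c in np.unique(labels):
    idx = np.flatnonzero(labels == c)
    kept.append(rng.choice(idx, size=min(per_class, idx.size), replace=False))
balanced = np.concatenate(kept)

print(len(balanced))  # 96000 when every class has at least 3,000 samples
```

Sampling without replacement (`replace=False`) ensures no image is duplicated in the balanced subset; classes smaller than the target would simply be kept whole via the `min(...)` guard.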
