Word-level Afan Oromo Sign Language Recognition Using Deep Learning Approach

Abstract

Sign language is a primary communication mode for the hearing-impaired community, yet barriers persist due to limited sign language proficiency among the hearing population and a scarcity of effective translation tools. This work addresses the critical need for improved communication accessibility by developing a real-time Afan Oromo sign language recognition system. A primary challenge lies in the absence of comprehensive research on Afan Oromo sign language recognition and translation. To bridge this gap, this study proposes a novel approach utilizing the YOLOv10 model, enhanced for sign language recognition and translation. Leveraging a diverse dataset of 70 common sign language words, we apply data pre-processing steps such as frame extraction, resizing, cropping, flipping, normalization, and data splitting to optimize model performance. The core contribution of this research is a robust sign language recognition model capable of accurately translating Afan Oromo signs into text. The YOLOv10 model achieved a total average precision of 94.12%, a recall of 95.01%, and an mAP@50 of 90.03%. These results would enable accessible translation tools for the Afan Oromo sign language community, contributing to improved communication and inclusivity for those who use signing as their mode of expression.
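The pre-processing steps named in the abstract (resizing, cropping, flipping, normalization, and data splitting) could be sketched as follows. This is a minimal illustration using NumPy only; the function names and the nearest-neighbour resize are assumptions for demonstration, not the paper's actual implementation, which would typically use a library such as OpenCV on frames extracted from video.

```python
import numpy as np

def resize_nearest(frame, size):
    """Nearest-neighbour resize to (height, width); a stand-in for cv2.resize."""
    h, w = frame.shape[:2]
    th, tw = size
    rows = np.arange(th) * h // th          # source row for each target row
    cols = np.arange(tw) * w // tw          # source column for each target column
    return frame[rows[:, None], cols]

def center_crop(frame, size):
    """Crop a (height, width) window from the centre of the frame."""
    h, w = frame.shape[:2]
    th, tw = size
    top, left = (h - th) // 2, (w - tw) // 2
    return frame[top:top + th, left:left + tw]

def hflip(frame):
    """Horizontal flip, a common augmentation for sign datasets."""
    return frame[:, ::-1]

def normalize(frame):
    """Scale 8-bit pixel values into [0, 1] as float32."""
    return frame.astype(np.float32) / 255.0

def split_indices(n, train=0.8, val=0.1, seed=0):
    """Shuffle n sample indices and split into train/val/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_val = int(n * train), int(n * val)
    return idx[:n_tr], idx[n_tr:n_tr + n_val], idx[n_tr + n_val:]
```

In practice each extracted video frame would pass through resize, crop, optional flip, and normalization before the dataset is split and fed to the YOLOv10 detector.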
