Agentic Sign Language: Balanced Evaluation and Adaptive Monitoring for Inclusive Multimodal Communication


Abstract

Sign languages are rich visual languages used by tens of millions of people worldwide, yet there is a persistent shortage of trained human interpreters. Recent work on small-vocabulary interpreters shows that lightweight convolutional neural networks can recognise static finger-spelling with high accuracy [1]. However, these prototypes are limited to isolated signs, depend on homogeneous training data and omit the complex grammar, facial expressions and body movements that convey meaning in continuous sign language. This paper proposes a comprehensive architecture that leverages recent advances in agentic artificial intelligence (AI), large language models (LLMs) and generative AI to deliver end-to-end sign language communication. Our design integrates multimodal data acquisition, spatio-temporal sign recognition, LLM-based translation, generative sign synthesis and an agentic orchestration layer. We outline data collection strategies, model architectures, training protocols, ethical considerations and a roadmap toward inclusive, real-time sign language translation and generation.
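The five stages named in the abstract form a pipeline coordinated by the agentic layer. A minimal sketch of that control flow is below; every function and class name is a hypothetical placeholder standing in for the real acquisition, recognition, translation and synthesis components, not an implementation of them.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """One multimodal observation: hand shape, facial expression, body pose."""
    hands: str
    face: str
    body: str

def acquire() -> List[Frame]:
    # Placeholder for multimodal data acquisition (camera/depth/pose streams).
    return [Frame("H1", "neutral", "lean-forward"),
            Frame("H2", "raised-brows", "still")]

def recognise(frames: List[Frame]) -> List[str]:
    # Placeholder for the spatio-temporal recogniser, emitting sign glosses.
    return [f.hands for f in frames]

def translate(glosses: List[str]) -> str:
    # Placeholder for LLM-based gloss-to-text translation.
    return " ".join(glosses).lower()

def synthesise(text: str) -> List[Frame]:
    # Placeholder for generative sign synthesis (text back to signing frames).
    return [Frame(tok.upper(), "neutral", "still") for tok in text.split()]

def orchestrate() -> str:
    # Agentic orchestration layer: sequences the stages end to end.
    frames = acquire()
    glosses = recognise(frames)
    text = translate(glosses)
    _avatar = synthesise(text)  # generation path for sign output
    return text
```

In a real system the orchestration layer would also handle retries, clarification requests and modality switching; the linear call chain here only illustrates the stage ordering the abstract describes.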
