AURA: An AI-Powered Multimodal Prototype for Adaptive Apraxia of Speech Therapy and Communication Support

Abstract

Apraxia of Speech (AOS) is a motor speech disorder that significantly limits communication and requires intensive, long-term therapy. Access to consistent treatment is often constrained by shortages of speech-language pathologists, high costs, and limited opportunities for continuous monitoring outside clinical settings. Recent advances in Artificial Intelligence (AI) provide new opportunities to support scalable and personalized speech therapy. This paper presents AURA (Adaptive Understanding and Relearning Assistant for Apraxia), a multimodal AI framework designed to support speech therapy, progress monitoring, and communication for individuals with AOS. The system integrates speech analysis, machine learning–based error detection, reinforcement learning for adaptive therapy, and multimodal Augmentative and Alternative Communication (AAC) support. An initial research prototype has been developed to demonstrate the feasibility of the proposed architecture and workflow. We describe the system architecture, its alignment with evidence-based motor learning principles, and a phased evaluation plan involving expert review, simulated testing, and pilot studies. The proposed framework aims to improve therapy accessibility, engagement, and data-driven intervention for individuals with AOS.
