Title: AI, Autism, and the Architecture of Voice: From Engineered Exclusion to Designed Dignity

Abstract

This paper conceptualizes engineered exclusion: the predictable sidelining of disabled users that results from choices about data provenance, model objectives, and evaluation practices within AI systems. We frame this phenomenon through the lived experiences of minimally speaking and nonspeaking autistics, whose multimodal communication profiles challenge the speech-centered defaults of current AI pipelines. “Nonspeaking” is not a single condition or an absence of language but a spectrum encompassing users of augmentative and alternative communication (AAC), spanning text, gesture, rhythmic movement, and partial vocalizations. Communicative profiles vary over time with fatigue, anxiety, sensory load, and motor-planning demands, yet common design abstractions erase this variability. By tracing exclusionary mechanisms across speech recognition, text-to-speech, plain-language generation, and interface design, we identify how inequities are structurally produced and propose measurable metrics of designed dignity for evaluation and governance. We argue that accessibility must be treated as a core dimension of AI ethics, on par with fairness, privacy, and safety. Re-engineering AI for designed dignity requires systems that recognize embodied, multimodal, and state-dependent forms of communication, expanding what counts as valid signal and as responsible innovation.
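To make the abstract's call for "measurable metrics of designed dignity" concrete, the sketch below shows one plausible shape such a metric could take; it is an illustrative assumption, not the paper's actual proposal. It computes word error rate (WER), a standard speech-recognition evaluation measure, separately per speaker group and reports the worst-to-best gap, which would surface the speech-centered bias the authors describe. The function names (`wer`, `group_wer_gap`), the group labels, and the sample data are all hypothetical.

```python
from collections import defaultdict

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / max(len(ref), 1)

def group_wer_gap(samples):
    """samples: iterable of (group_label, reference, hypothesis) triples.
    Returns per-group mean WER and the gap between worst and best group."""
    by_group = defaultdict(list)
    for group, ref, hyp in samples:
        by_group[group].append(wer(ref, hyp))
    means = {g: sum(scores) / len(scores) for g, scores in by_group.items()}
    return means, max(means.values()) - min(means.values())

# Hypothetical evaluation data; group labels are illustrative only.
samples = [
    ("typical_speech", "turn the lights on", "turn the lights on"),
    ("typical_speech", "play some music", "play some music"),
    ("aac_partial_vocalization", "turn the lights on", "turn light on"),
    ("aac_partial_vocalization", "play some music", "play music"),
]
means, gap = group_wer_gap(samples)
print(means, gap)
```

On this toy data, the gap makes visible how a recognizer tuned to typical speech degrades for AAC-adjacent input; any real evaluation in the spirit of the paper would require consented, representative recordings and metrics covering the other pipelines it names (text-to-speech, plain-language generation, interface design).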
