Generative AI for Tactile Accessibility: A Systematic Literature Review of Emerging Methods and Gaps
Abstract
Tactile graphics, tactile maps, and vibrotactile cues are essential for giving blind and low-vision (BLV) users access to visual information, yet current production workflows remain slow, manual, and highly specialized. Recent advances in generative artificial intelligence (GenAI) offer new possibilities for automating or augmenting these workflows, but the landscape of existing methods is fragmented and difficult to navigate. Research spans several model families, including generative adversarial networks, diffusion models such as Stable Diffusion, and multimodal language-vision models. These systems vary widely in how they are designed, applied, and evaluated. This systematic review examines GenAI approaches for tactile accessibility published between 2014 and 2025. We include only methods that produce tactile-relevant outputs, such as embossable graphics or vibrotactile signals, or that contribute a generative step within a tactile pipeline. The review maps how these models are instantiated, the stages of the tactile workflow they target, and the evaluation practices they employ. The analysis identifies consistent challenges, including oversmoothing, clutter that reduces haptic legibility, limited generalization, high computational demands, and scarce BLV-centered evaluation. The review concludes by outlining opportunities for tactile-first metrics and practical, low-resource generative pipelines, and provides a curated, publicly available resource that consolidates papers and practitioner tools.