Exploring Spatial Inequities and Livability: A mixed-methods study using artificial intelligence and community insights
Abstract
Rapid urbanization has intensified health and social inequities through the uneven distribution of infrastructure, services, and public space. In St. Louis, Missouri (USA), these disparities remain geographically concentrated and rooted in long-standing patterns of racial segregation and disinvestment. This study examines how spatial inequities shape urban livability by integrating scalable artificial intelligence (AI) methods with community-based insights. We applied a vision–language model (VLM), supported by a large language model (LLM), to classify micro-scale built-environment features from 7,848 Google Street View segments, and conducted semi-structured interviews with residents and stakeholders. The AI analysis revealed widespread baseline infrastructure, such as street lighting (85%) and continuous pavements (77%), but very low coverage of universal-design features: curb ramps (9.7%), pedestrian crossings (7.1%), and walk signals (4.3%). These deficits were significantly associated with lower median incomes and higher proportions of non-Hispanic Black residents. Qualitative findings confirmed and contextualized these patterns, linking them to historical redlining, fragmented governance, and everyday experiences of constrained mobility and safety. By combining scalable AI tools with community narratives, this study demonstrates that spatial inequities are both visible in the physical streetscape and embodied in daily life. Methodologically, it advances the use of VLM- and LLM-enabled pipelines for urban audits; conceptually, it reframes livability metrics through the lens of spatial equity; and practically, it provides a replicable framework for equity-centered planning.