Mapping the Landscape of ASD-AI: Multimodal Gains, XAI Adoption, and Fairness Gaps - A Systematic Review


Abstract

Artificial intelligence (AI) is increasingly used to automate the detection of autism spectrum disorder (ASD), yet most research still focuses on maximizing classification accuracy. This narrow emphasis leaves a critical gap in the systematic synthesis of model explainability and algorithmic fairness, both essential for clinical adoption. Following PRISMA guidelines, this systematic review examines AI-based ASD detection from three perspectives: (i) the use of explainability techniques, (ii) performance gains from multimodal fusion, and (iii) equity across demographic subgroups. A comprehensive search of major databases (including PubMed, IEEE Xplore, Scopus, and the ACM Digital Library) yielded 10,117 records; after screening, 45 primary studies met the inclusion criteria. Three findings emerge: (1) when data are reasonably harmonized, multimodal fusion (e.g., combining neuroimaging with phenotypic data) delivers accuracy gains of roughly 10-25% over unimodal approaches, but multi-site heterogeneity often diminishes these benefits; (2) explainable AI (XAI) appears in 71.1% of studies, yet methods such as SHAP and Integrated Gradients are used largely to validate group-level biomarkers rather than to provide the actionable, patient-level explanations required for clinical decision-making; and (3) despite growing awareness of algorithmic bias, only 40% of studies conduct formal fairness audits, leaving underrepresented groups, especially women and racially diverse populations, at continued risk of healthcare disparities. Overall, despite accuracy improvements, the integration of explainability and fairness remains nascent. We synthesize these insights into a framework that advocates a shift toward human-centered AI, jointly addressing accuracy, explainability, and fairness to enable clinically trustworthy and equitable translation.
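To illustrate the kind of formal fairness audit the abstract reports as absent from most studies, the sketch below computes per-subgroup accuracy and a demographic-parity gap for a binary ASD classifier. This is a minimal illustration, not a method from the reviewed studies; all function names and data are hypothetical placeholders.

```python
# Minimal sketch of a subgroup fairness audit for a binary classifier.
# All data below are synthetic placeholders, not drawn from any study.

def subgroup_rates(y_true, y_pred, groups):
    """Return per-group accuracy and positive-prediction rate."""
    stats = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        positives = sum(y_pred[i] for i in idx)
        stats[g] = {
            "accuracy": correct / len(idx),
            "positive_rate": positives / len(idx),
        }
    return stats

def demographic_parity_gap(stats):
    """Largest difference in positive-prediction rate between groups."""
    rates = [s["positive_rate"] for s in stats.values()]
    return max(rates) - min(rates)

# Synthetic example: predictions split by a sex attribute ("M"/"F").
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]

stats = subgroup_rates(y_true, y_pred, groups)
gap = demographic_parity_gap(stats)  # a large gap flags potential bias
```

Audits in practice would also report equalized-odds differences and subgroup sensitivity/specificity, which matter clinically when one subgroup (e.g., women) is systematically under-diagnosed.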
