Clinical AI Adoption Patterns Among U.S. Hospitals: Comparing Hospital Adoption and Implementation Using AHA Survey Data

Abstract

Background: Hospitals are investing in clinical artificial intelligence (AI), yet most efforts to classify the types and extent of AI uptake rely on ad hoc counts of tools or binary adoption indicators. A scalable, psychometrically grounded measure of "clinical AI implementation maturity" would strengthen benchmarking and improve research on the determinants and consequences of adoption.

Objective: To create a hospital-level measure of clinical AI use and adoption using staged implementation items from the American Hospital Association's (AHA's) Annual Survey.

Methods: Ordered response items describing implementation stage across multiple clinical AI functions were modeled using an item response theory (IRT) framework. We evaluated the suitability of a unidimensional maturity construct by examining model fit indicators and the coherence of item behavior. Item discrimination parameters and ordered category thresholds were used to determine how well each AI function differentiated hospitals along the maturity continuum and whether response categories reflected an interpretable progression.

Results: The fitted model supported a stable maturity gradient with meaningful between-hospital variation. Item discrimination estimates indicated that certain AI functions were substantially more informative for distinguishing hospitals at different maturity levels than others, while some functions contributed limited differentiation. Category thresholds were generally ordered, supporting the interpretation of staged implementation as a progression. Threshold patterns suggested that adoption is not random across functions; some capabilities tend to appear earlier in maturity trajectories, whereas others cluster at later stages and better discriminate among higher-maturity hospitals. The resulting scores can be used as continuous measures with uncertainty intervals for comparisons and stratified analyses.
Conclusions: Clinical AI implementation maturity can be measured as a latent construct using routine survey data and IRT methods. A psychometrically validated maturity score provides a foundation for benchmarking, risk adjustment in comparative studies, and policy-relevant monitoring of clinical AI diffusion across healthcare delivery systems.
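To make the measurement model concrete, the sketch below simulates data from a graded response model, the standard IRT model for ordered categories of the kind the abstract describes: each hospital has a latent maturity score, and each AI function (item) has a discrimination parameter and ordered stage thresholds. All parameter values and the item count here are hypothetical illustrations, not the authors' estimates; the point is only to show how discrimination and thresholds govern staged responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def grm_probs(theta, a, b):
    """Category probabilities under a graded response model.

    theta: latent maturity for one hospital (scalar)
    a: item discrimination
    b: ordered category thresholds (length K-1 for K stages)
    """
    # Cumulative probabilities P(Y >= k) via the logistic function;
    # ordered thresholds guarantee these are decreasing in k.
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    upper = np.concatenate(([1.0], cum))   # P(Y >= 0) = 1
    lower = np.concatenate((cum, [0.0]))   # P(Y >= K) = 0
    return upper - lower                   # per-category probabilities

def simulate(n_hospitals, a, b):
    """Draw latent maturity scores and staged item responses."""
    theta = rng.standard_normal(n_hospitals)
    responses = np.empty((n_hospitals, len(a)), dtype=int)
    for i, th in enumerate(theta):
        for j in range(len(a)):
            p = grm_probs(th, a[j], b[j])
            responses[i, j] = rng.choice(len(p), p=p)
    return theta, responses

# Hypothetical items: 4 AI functions, implementation stages coded 0-3.
# Higher `a` = more informative item; lower thresholds = earlier adoption.
a = [2.5, 1.8, 0.6, 1.2]
b = [[-1.0, 0.0, 1.0],
     [-0.5, 0.5, 1.5],
     [-2.0, -0.5, 1.0],   # tends to appear early in maturity trajectories
     [0.0, 1.0, 2.0]]     # clusters at later stages

theta, Y = simulate(2000, a, b)
sum_score = Y.sum(axis=1)
print("corr(theta, sum score):", np.corrcoef(theta, sum_score)[0, 1])
```

In a real analysis the step that matters is the reverse direction: estimating `a`, `b`, and the latent scores from observed survey responses (e.g., with an IRT package), which yields the maturity scores and uncertainty intervals the Results describe.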
