Predictors and determinants of public trust in AI and software as a medical device for healthcare in the UK: Evidence from the RADIANT Voices Study
Abstract
Objective To identify sociodemographic, experiential and attitudinal determinants of public trust in artificial intelligence (AI) and software as a medical device (SaMD) in UK healthcare, focusing on trust in AI-assisted clinical decision-making under human oversight.

Design Cross-sectional online survey with prespecified outcomes.

Setting United Kingdom; data collected online 3 October-7 November 2025.

Participants A community sample of 1,468 adults (mean age 44.8 years; 49.8% female). Prior AI use was common (79.4%). eHealth literacy was measured with the eight-item eHEALS scale (α = 0.91).

Main outcome measures The primary outcome was trust in AI-assisted clinical decision-making (5-point scale). Secondary outcomes included the within-person trust gap between AI-assisted and AI-only decisions, governance preferences, accountability for AI-related harm, and trust in regulators and developers.

Results Trust was strongly contingent on human oversight. High trust was reported by 62.7% of participants for AI-assisted decisions (mean 3.63, SD 0.95) but by only 10.5% for AI-only decisions (mean 2.15, SD 1.01), yielding a mean within-person trust gap of 1.47 points. Almost all participants (95.2%) preferred a health and care professional as the final decision-maker. Support for governance was high: 92.2% wanted disclosure whenever AI is used, 83.4% opposed unsupervised AI advice, and 79.4% supported stronger regulation. Higher trust in AI-assisted decisions was independently associated with prior AI use, higher eHealth literacy and older age.

Conclusions Public confidence in healthcare AI is conditional. Trust is higher when clinicians retain accountability, AI use is transparently disclosed, and performance claims are evidence-based, underscoring the need to embed disclosure, human oversight and robust governance in AI/SaMD deployment.