Measuring perceived risk in intelligent system interaction: Development and validation of a multidimensional scale across three safety-critical domains

Abstract

Intelligent systems—from autonomous vehicles to decision support tools—are reshaping everyday activities, yet their adoption is often tempered by users' perceived risk. Perceived risk significantly influences users' decisions to adopt or reject such systems, highlighting the critical need to understand and mitigate it. However, existing measurement tools for assessing perceived risk remain inadequate, limiting effective intervention and research efforts. To address this gap, we developed and validated a multidimensional scale of perceived risk in intelligent system interaction, focusing on three representative safety-critical domains: autonomous vehicles, medical decision support systems, and financial decision support systems. Step 1 involved item generation based on literature analysis. Step 2 employed exploratory factor analysis to refine the scale structure (N = 644). In Step 3, confirmatory factor analyses and additional validation procedures (N = 639) confirmed a robust six-factor model: Third-Party Risk, Operational Cost Risk, Social Appraisal Risk, Probability and Severity Risk, Affective Risk, and Performance Integrity Risk. The final scale demonstrated strong psychometric properties, including reliability, discriminant validity, and predictive validity for user trust, satisfaction, and intention to use. This scale offers researchers and practitioners a rigorously validated instrument for diagnosing perceived risk in safety-critical intelligent systems and guiding targeted mitigation strategies that can enhance acceptance and usability.