Development and validation of the Trust in AI Scale (TAIS)
Abstract
In everyday life, users increasingly interact and communicate with AI systems. Despite the importance of trust in AI as an influencing factor for this interaction, there is a shortage of validated scales to reliably measure users’ trust. In this paper, we present a theory-driven development and validation of the Trust in AI Scale (TAIS), which consists of the subdimensions ability, integrity, transparency, unbiasedness, vigilance, and global trust. To validate the scale, we conducted two studies. In study 1 (N = 883 participants), we derived 57 items from theory and existing scales; an exploratory factor analysis reduced these to a 30-item scale. In study 2 (N = 1204 participants), we tested the psychometric quality of the scale through confirmatory factor analysis for ordinal data. Employing a bifactor model with global trust as the higher-order factor, our results confirm the six-factor structure. Correlations with context variables and related scales support the convergent validity of the scale. Results show that existing scales correlate primarily with the global trust factor and less with the specific factors (especially vigilance), indicating that the TAIS uncovers new facets of trust and thereby goes beyond what existing, less thoroughly validated scales can provide.
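To make the bifactor structure concrete, the sketch below shows how such a model could be specified in Python with the semopy package: every item loads on a global trust factor, while the specific factors are constrained to be orthogonal to the global factor and to each other. This is a minimal illustration, not the authors' analysis code; the item names t1–t12, the three specific factors shown, and the file name tais_responses.csv are hypothetical stand-ins for the 30 TAIS items and six subdimensions, and the default maximum-likelihood fit simplifies the ordinal-data estimation used in the paper.

```python
# Minimal sketch (not the authors' code) of a bifactor CFA in Python/semopy.
# Item names t1..t12 and the factors shown are hypothetical placeholders
# for the 30 TAIS items and six subdimensions.
import pandas as pd
from semopy import Model

# Bifactor specification in lavaan-style syntax: all items load on Global;
# covariances among Global and the specific factors are fixed to zero.
desc = """
Global =~ t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9 + t10 + t11 + t12
Ability =~ t1 + t2 + t3 + t4
Integrity =~ t5 + t6 + t7 + t8
Vigilance =~ t9 + t10 + t11 + t12
Global ~~ 0*Ability
Global ~~ 0*Integrity
Global ~~ 0*Vigilance
Ability ~~ 0*Integrity
Ability ~~ 0*Vigilance
Integrity ~~ 0*Vigilance
"""

data = pd.read_csv("tais_responses.csv")  # hypothetical item-level responses

model = Model(desc)
model.fit(data)          # default ML fit; the paper uses an ordinal-data estimator
print(model.inspect())   # parameter estimates: loadings and (co)variances
```

In such a specification, an item's variance is split between the global factor and its subdimension, which is what lets subscale-specific signal (e.g., vigilance) emerge separately from overall trust.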