Developing and Validating the Attitudes Toward Large Language Models Scale

Abstract

With large language models (LLMs) now embedded in a wide range of everyday technologies, there is increasing interest in accurately measuring public attitudes toward these systems. General instruments for assessing attitudes toward artificial intelligence (AI) have contributed to the field but may not fully capture attitudes specific to LLMs. To address this gap, we adapted the 5-item Attitudes Toward Artificial Intelligence (ATAI) scale, a dual-component scale measuring AI Fear and AI Acceptance, and developed two new measures: the Attitudes Toward General LLMs (AT-GLLM) and Attitudes Toward Primary LLM (AT-PLLM) scales, which assess general attitudes toward LLMs and attitudes toward the LLM each respondent uses most frequently, respectively. Both instruments were designed specifically for the LLM context and validated in a UK sample of 526 adults (ages 18–45 years). Psychometric analyses supported a two-factor structure, strong measurement invariance across gender, and good internal reliability (Cronbach’s α = 0.776–0.793). Both scales demonstrated convergent and discriminant validity. These measures offer more precise tools for assessing attitudes toward LLMs than general AI attitude scales and can be used in a variety of research and applied settings. Measuring attitudes can also enhance evaluations of user experience and support the monitoring of public engagement with this evolving technology.