Measuring trust in Artificial Intelligence with the N2pc component
Abstract
Efficient allocation of attentional resources is critical when humans collaborate with artificial intelligence (AI): they must focus on their own task while monitoring the AI so they can intervene if it fails. Inefficient allocation—such as excessive monitoring or overreliance—can impair performance and cause critical errors. Whether humans appropriately offload attentional effort to an AI depends on factors such as the AI’s competency, the user’s expertise, and their propensity to trust. Yet trust in AI is a latent variable that is difficult to measure. Here, we introduce an EEG-based approach to directly track how attentional resources are shared between a human and an AI. Participants performed a visual search task either alone or with an AI whose competency was varied. The N2pc component—an established neural marker of selective visual attention—was used to index attentional deployment. Results showed that the N2pc amplitude varied with the AI’s competency: smaller amplitudes in the high- versus low-competency condition indicated greater offloading and trust. The findings demonstrate that neurophysiological markers such as the N2pc can serve as implicit, non-disruptive measures of trust that inform our understanding of the cognitive mechanisms underlying trust calibration. The study thus establishes the N2pc as a promising marker for quantifying attention allocation in collaborative human-AI search tasks and extends its relevance from visual attention research to the study of trust in automation.
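The N2pc quantification the abstract relies on is conventionally computed as the contralateral-minus-ipsilateral difference at posterior electrodes, averaged over a post-stimulus time window. The sketch below illustrates that convention only; it is not the authors' analysis pipeline, and the electrode pair (PO7/PO8), the 200–300 ms window, and the function name are illustrative assumptions.

```python
import numpy as np

def n2pc_amplitude(erp_po7, erp_po8, times, target_side, window=(0.2, 0.3)):
    """Mean contralateral-minus-ipsilateral amplitude (same units as the ERPs).

    Illustrative sketch, not the authors' method:
    erp_po7, erp_po8 : 1-D arrays, trial-averaged ERPs at the left (PO7)
        and right (PO8) posterior electrodes for targets on `target_side`.
    times : 1-D array of sample times in seconds, same length as the ERPs.
    target_side : 'left' or 'right' (visual hemifield of the search target).
    window : analysis window in seconds (a typical N2pc range is assumed).
    """
    # The contralateral electrode is the one opposite the target's hemifield.
    contra, ipsi = (erp_po8, erp_po7) if target_side == 'left' else (erp_po7, erp_po8)
    mask = (times >= window[0]) & (times <= window[1])
    # A more negative value indicates a larger N2pc, i.e. stronger
    # attentional deployment toward the target.
    return float(np.mean(contra[mask] - ipsi[mask]))
```

Under this convention, a smaller (less negative) value in the high-competency AI condition would correspond to the reduced attentional deployment, and hence greater offloading, reported in the abstract.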