Assessing healthy distrust in human-AI interaction: Interpreting changes in visual attention.
Abstract
When humans interact with artificial intelligence (AI), one desideratum is appropriate trust. Typically, appropriate trust means that humans trust AI except when they either explicitly notice AI errors or suspect that errors may be present. So far, appropriate trust and related notions have mainly been investigated by assessing trust and reliance. In this contribution, we argue that these assessments are not sufficient to measure the complex aim of appropriate trust and the related notion of healthy distrust. We introduce and test the perspective of visual attention as an additional indicator of appropriate trust and draw conceptual connections to the notion of healthy distrust. To test the validity of our conceptualization, we formalize visual attention using the theory of visual attention (TVA) and measure those of its properties that are potentially relevant to appropriate trust and healthy distrust in an image classification task. Specifically, we investigate participants’ attentional capacity and attentional weights towards correct and incorrect classifications. We observe that misclassifications reduce attentional capacity compared to correct classifications. However, our results do not indicate that this reduction is beneficial for subsequently judging the classifications. Attentional weighting is not affected by the correctness of the classifications but by the difficulty of categorizing the stimuli themselves. These results, their implications, and the limited potential of visual attention as an indicator of appropriate trust and healthy distrust are discussed.
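For orientation, the two TVA parameters referred to above are commonly defined via the model's rate equation; the following is a standard formulation from the TVA literature, given here as a sketch rather than reproduced from the abstract:

v_x = C \cdot \frac{w_x}{\sum_{z \in S} w_z}

where v_x is the processing rate of object x, C is the overall attentional (processing) capacity, w_x is the attentional weight assigned to x, and S is the set of objects currently in the visual field. Under this formulation, a lower C corresponds to slower overall encoding, while the relative weights w_x determine how capacity is distributed across objects such as correct and incorrect classifications.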