AI-augmented decision-making in face matching: Comparing concurrent and non-concurrent advice presentation
Abstract
A primary aim of human-AI teaming is to achieve better collaborative performance than either human or AI can achieve alone. Despite considerable efforts in this direction, issues such as users' overreliance on decision aids remain a challenge. In this study, we evaluated the potential of non-concurrent advice presentation as a strategy to reduce overreliance in a face matching task. We conducted three pre-registered experiments examining (a) on-demand binary advice, (b) on-demand similarity ratings, and (c) conditional advice (i.e., advice presented only if a participant's initial unaided decision differed from the AI prediction), each compared to concurrent advice. Across all experiments, we found no significant differences in participants' overall performance between the concurrent and experimental conditions. However, participants followed AI advice more when they requested it, whereas they followed similarity ratings less when they requested them. Thus, on-demand similarity ratings reduced overreliance on the AI compared to concurrent presentation of similarity ratings. Overall, however, similarity ratings were no more helpful than basic binary advice. We also found that participants were less likely to follow AI advice presented after their initial unaided decision contradicted the AI prediction; they were more confident when rejecting incorrect advice, but not as confident when accepting correct advice. Overall, non-concurrent paradigms have the potential to reduce overreliance, but at the cost of underreliance on correct advice.