Using artificial neural networks to reveal the human confidence computation

Abstract

Humans can evaluate the accuracy of their decisions by providing confidence judgements. Traditional cognitive models have tried to capture the mechanisms underlying this process but remain mostly limited to two-choice tasks with simple stimuli. How confidence is computed for naturalistic stimuli with multiple choice alternatives is not well understood. We recently developed a convolutional neural network (CNN) model – RTNet – that exhibits several important signatures of human decision making and outperforms existing alternatives in predicting human responses. Here, we use RTNet’s image computability to test four different confidence strategies on a digit-discrimination task with eight alternatives. Specifically, we tested confidence strategies that consider the entire evidence distribution across choices (Softmax and Entropy), the difference between the winning choice and the second-best choice (Top2Diff), or only the evidence for the winning choice (Positive Evidence). Across 60 subjects, the Top2Diff model provided the best quantitative and qualitative fits to the data and the best predictions of humans’ image-by-image confidence ratings. These results support the notion that human confidence is based on the difference in evidence between the top two choices and demonstrate that Softmax – currently the standard way of deriving confidence from CNNs – is inadequate for modelling human confidence computations.
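To make the four confidence strategies concrete, here is a minimal sketch of how each readout could be computed from a vector of per-choice evidence (e.g. a network's pre-softmax activations). The function name, temperature parameter, and exact formulas are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def confidence_strategies(evidence, temperature=1.0):
    """Illustrative sketch (not the article's code): compute four candidate
    confidence readouts from a vector of per-choice evidence values."""
    # Softmax probabilities over all choice alternatives
    probs = np.exp(evidence / temperature)
    probs /= probs.sum()
    # Best and second-best evidence values
    top2 = np.sort(evidence)[-2:]
    return {
        # Softmax: probability assigned to the winning choice
        "softmax": float(probs.max()),
        # Entropy: negative Shannon entropy of the full distribution
        # (higher value = lower entropy = more confident)
        "entropy": float(np.sum(probs * np.log(probs + 1e-12))),
        # Top2Diff: winning evidence minus runner-up evidence
        "top2diff": float(top2[1] - top2[0]),
        # Positive Evidence: evidence for the winning choice alone
        "positive_evidence": float(evidence.max()),
    }

# Hypothetical 8-alternative trial (one evidence value per digit class)
ratings = confidence_strategies(
    np.array([0.2, 1.5, 0.3, 3.1, 0.1, 0.4, 2.8, 0.2])
)
```

Note that Softmax and Entropy use the whole evidence distribution, Top2Diff uses only the two leading alternatives, and Positive Evidence discards everything but the winner — which is what makes the four strategies empirically distinguishable.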