Inferring sources of suboptimality in perceptual decision making using a causal inference task
Curation statements for this article:
Curated by eLife
eLife assessment
This study of human perceptual decision-making provides important insights into the sources of suboptimality in human inference. The authors provide solid evidence by combining psychophysics in an audiovisual causal inference task with detailed modeling of the observed behavior. Additional control analyses should be carried out to validate the identifiability of distinct suboptimalities using the authors' modeling framework, and the generalizability of their findings in other conditions should be tested or discussed more explicitly.
Abstract
Perceptual decision-making has been extensively modeled using the ideal observer framework. However, a range of deviations from optimality demands an extension of this framework to characterize the different sources of suboptimality. Prior work has mostly formalized these sources by adding biases and variability in the context of specific process models, but such formalizations are hard to generalize to more complex tasks. Here, we formalize suboptimalities as part of the brain’s probabilistic model of the task. Data from a traditional binary discrimination task cannot distinguish between different kinds of biases, or between sensory noise and approximate computations. We showed that this separation is possible using a recently developed causal inference task in which observers discriminated auditory cues in the presence of choice-uninformative visual cues. An extension of the task with different stimulus durations provided evidence for an increase in the precision of the computations with stimulus duration, separate from a decrease in observation noise.
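The abstract contrasts two sources of choice variability: noise in the sensory observations and imprecision in the inference computations themselves. The following minimal toy sketch assumes a Gaussian-noise observer with a crude sample-based read-out; the prior width, the majority-vote rule, and all parameter names are illustrative assumptions, not the authors' actual model. It shows how the two sources enter as separate parameters of a simulated observer, and why both flatten a simple psychometric curve in the same way, which is the sense in which a plain binary task cannot tell them apart.

```python
# Toy sketch (illustrative assumptions, not the authors' model) of an observer
# whose choice variability has two separate sources: Gaussian observation noise
# (sigma_obs) and approximate, sample-based inference (n_samples).
import numpy as np

rng = np.random.default_rng(0)

def simulate_choices(stimulus, sigma_obs, n_samples, sigma_prior=10.0, n_trials=10_000):
    """Simulate left/right choices about the sign of a cue location.

    stimulus   : true cue location (e.g. degrees of azimuth)
    sigma_obs  : standard deviation of the sensory observation noise
    n_samples  : number of posterior samples used by the approximate read-out
                 (larger = closer to exact posterior-based decisions)
    """
    # Noisy observation of the stimulus on each trial.
    x = stimulus + sigma_obs * rng.standard_normal(n_trials)

    # Gaussian posterior over the latent source location given x,
    # assuming a zero-mean Gaussian prior with width sigma_prior.
    post_var = 1.0 / (1.0 / sigma_obs**2 + 1.0 / sigma_prior**2)
    post_mean = post_var * x / sigma_obs**2

    # Approximate inference: draw a few samples from the posterior and report
    # the majority sign, instead of reporting sign(post_mean) exactly.
    samples = post_mean[:, None] + np.sqrt(post_var) * rng.standard_normal((n_trials, n_samples))
    choices = np.sign(np.mean(np.sign(samples), axis=1))
    return (choices > 0).mean()  # proportion of "rightward" choices

# Both more observation noise and fewer samples lower accuracy at a fixed
# stimulus level, i.e. both flatten the psychometric curve.
print(simulate_choices(stimulus=2.0, sigma_obs=1.0, n_samples=100))  # near-exact inference
print(simulate_choices(stimulus=2.0, sigma_obs=1.0, n_samples=1))    # coarse approximation
print(simulate_choices(stimulus=2.0, sigma_obs=4.0, n_samples=100))  # noisier observations
```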
Article activity feed
Reviewer #1 (Public Review):
In this manuscript, the authors present a computational framework based on Bayesian observer models that parameterises several different sources of noise and bias in perceptual decision-making. The authors show that these sources of suboptimality cannot be dissociated in many typical decision-making tasks. They present an analysis of two previously published sets of experimental data, where the experimental design should allow them to dissociate suboptimalities parameterised in the model. They fit various versions of the model including different suboptimalities and in general show that including the suboptimalities improves the fit, depending on whether the data is aggregated across participants or not.
The major strengths of the methods and results include:
1. The clear theoretical delineation of different forms of suboptimality, which may help to guide research into understanding them at the behavioural and neural levels.
2. The attention to scientific rigor in model fitting, including the use of power analysis and corrections for the number of model parameters.
3. Clear figures that are helpful in understanding the model.
The major weaknesses of the methods and results include:
1. The lack of a model/parameter recovery analysis showing the extent to which the model can separate sources of suboptimality against some ground truth (a generic sketch of such an analysis follows this review).
2. The lack of generalisability, in that the model parameters can only be dissociated using specific experimental manipulations and a large number of trials.
3. Uncertainty about the extent to which the assumptions of the model (and its parameterisation) limit the realisability of the proposed computational framework.
The authors achieve their aim of outlining a computational framework that accounts for various sources of suboptimality and show some evidence that this model may be useful for making inferences about these suboptimalities given careful experimental manipulation.
The work adds to the movement toward delineating the specific sources of suboptimality, as opposed to capturing 'noise' and 'bias' as overarching variables, and the model may prove useful for other researchers. However, given that the model requires special experimental tasks to dissociate the parameters, it is unclear how this model improves upon the traditional approach of designing experiments to dissociate sources of suboptimality directly.
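The reviewers point to the absence of a model/parameter recovery analysis. As a generic sketch of what such an analysis involves (the cumulative-Gaussian observer and parameter names below are stand-ins, not the authors' models), one simulates synthetic datasets from known "ground truth" parameters, refits the model, and compares recovered to true values:

```python
# Generic parameter-recovery sketch (a stand-in model, not the authors' code):
# simulate data from known parameters, refit by maximum likelihood, and check
# how well the fitted values match the truth.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
stimuli = np.repeat(np.linspace(-4, 4, 9), 200)  # 9 stimulus levels, 200 trials each

def p_right(stimuli, bias, sigma):
    # Cumulative-Gaussian psychometric function with a bias and a noise width.
    return norm.cdf((stimuli - bias) / sigma)

def neg_log_lik(params, stimuli, choices):
    bias, log_sigma = params
    p = np.clip(p_right(stimuli, bias, np.exp(log_sigma)), 1e-9, 1 - 1e-9)
    return -np.sum(choices * np.log(p) + (1 - choices) * np.log(1 - p))

true_params = {"bias": 0.7, "sigma": 1.5}
for rep in range(5):  # several synthetic datasets per ground-truth setting
    choices = rng.binomial(1, p_right(stimuli, true_params["bias"], true_params["sigma"]))
    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(stimuli, choices))
    bias_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(f"rep {rep}: recovered bias={bias_hat:.2f} (true {true_params['bias']}), "
          f"sigma={sigma_hat:.2f} (true {true_params['sigma']})")
```

Systematic deviations or wide scatter of recovered versus true values, across a range of ground-truth settings, would indicate parameters that the experimental design cannot identify.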
Reviewer #2 (Public Review):
The study makes a useful contribution by showing that the classical binary discrimination task cannot distinguish different sources of suboptimality (perceptual vs. categorical bias; observation noise vs. approximate inference), in contrast to another, more complex task (a cue combination task). The paper provides a computational framework to define and quantify those sources of suboptimality and reports the results of a task in which those different sources are indeed disentangled, both in model fitting and in qualitative features of the data.
Strengths:
- A very timely question: How to characterize the sources of suboptimality in (human) perceptual decisions?
- The text is very clear and although the content is technical, the main ideas are conveyed in simple terms and figures, and the detail of mathematical derivations is restricted to the methods section.
- The design of the cue-combination task is very interesting because the posterior distributions over categories predict no difference between the central and matched conditions in the case of perfect inference, but a difference whenever not too many samples are used in approximate inference, making it possible to disentangle different sources of suboptimality in the task (a toy illustration of this sampling effect is sketched after this review).
- The results from the first experiment are followed up by another experiment that includes manipulation of the stimulus duration, which should change the accuracy of approximate inference (and perceptual noise). The results are compatible with those predictions.
- Effects are characterized by model fitting and model comparison, but different models also make qualitatively different predictions, making it possible to adjudicate between models simply by looking at the data (the shape of the psychometric curves in different conditions).
Weaknesses:
- There is no parameter recovery analysis based on the generative model in the multi-modal task.
- Several results are not conclusive in most subjects. They are clearly visible only in a few participants and the aggregated data. It is not clear whether this is specific to this dataset (and task design) or whether it is a general conclusion.
- The dataset is reused from a previous study and includes 20 participants. A replication of the result in an independent group of participants would make the result much more robust.
- A replication attempt could use a different task (the current results are based on multi-modal sound localization), which would make the conclusion even more convincing.
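The strengths listed above hinge on the prediction that conditions indistinguishable under exact inference diverge when only a few posterior samples drive the decision. The toy calculation below is a construction for illustration only, not the authors' task: the two posterior probabilities and the majority-vote read-out are assumptions. It shows how a predicted difference between two conditions shrinks as the number of samples grows.

```python
# Toy illustration (assumed posterior probabilities, not the authors' task):
# two conditions that an exact maximum-a-posteriori observer treats identically
# can yield different choice rates when decisions rest on a few posterior samples.
from math import comb

def p_choose_correct(p_posterior, n_samples):
    """Probability of a correct choice when the observer draws n_samples
    category samples from its posterior and reports the majority
    (ties broken at random)."""
    total = 0.0
    for k in range(n_samples + 1):
        p_k = comb(n_samples, k) * p_posterior**k * (1 - p_posterior)**(n_samples - k)
        if 2 * k > n_samples:
            total += p_k
        elif 2 * k == n_samples:
            total += 0.5 * p_k
    return total

for n_samples in (1, 3, 100):
    a = p_choose_correct(0.9, n_samples)   # condition with a sharper posterior
    b = p_choose_correct(0.7, n_samples)   # condition with a broader posterior
    print(f"n_samples={n_samples:3d}: condition A {a:.2f}, condition B {b:.2f}, diff {a - b:+.2f}")
# With many samples both conditions approach the exact read-out and the
# difference vanishes; with few samples their choice rates diverge.
```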