Multiphasic value biases in fast-paced decisions

Curation statements for this article:
  • Curated by eLife


    Evaluation Summary:

    Corbett and colleagues developed a novel experimental framework to account for value biases in fast-paced decisions. For this purpose, they developed detailed computational models of how value biases can alter the decision-making process and used EEG data to constrain the estimation of model parameters and their comparison. In contrast to existing accounts which describe value biases using a single bias mechanism, they found that a more complex and dynamic pattern of mechanisms best explains the EEG and behavioral data.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)


Abstract

Perceptual decisions are biased toward higher-value options when overall gains can be improved. When stimuli demand immediate reactions, the neurophysiological decision process dynamically evolves through distinct phases of growing anticipation, detection, and discrimination, but how value biases are exerted through these phases remains unknown. Here, by parsing motor preparation dynamics in human electrophysiology, we uncovered a multiphasic pattern of countervailing biases operating in speeded decisions. Anticipatory preparation of higher-value actions began earlier, conferring a ‘starting point’ advantage at stimulus onset, but the delayed preparation of lower-value actions was steeper, conferring a value-opposed buildup-rate bias. This, in turn, was countered by a transient deflection toward the higher-value action evoked by stimulus detection. A neurally-constrained process model featuring anticipatory urgency, biased detection, and accumulation of growing stimulus-discriminating evidence, successfully captured both behavior and motor preparation dynamics. Thus, an intricate interplay of distinct biasing mechanisms serves to prioritise time-constrained perceptual decisions.

Article activity feed

  1. Reviewer #2 (Public Review):

In this work, Corbett and colleagues investigate how value influences speeded decisions. In a random dot motion task under speed pressure, a cue presented shortly before motion onset indicates which of the two choices carries the higher value if answered correctly. EEG recordings show a buildup of motor beta in response to the cue (earlier for high-value choices, steeper for low-value choices) and a dip in LRPs for low-value choices in response to stimulus onset. A computational model constructed on the basis of these findings provides a good account of the data. The EEG-informed modeling is impressive and deserves merit. The paper is well written, but rather dense.

    • I am struggling with the idea that cue-evoked motor beta reflects urgency. As it currently reads, this is more taken as a given than actually demonstrated. Could this claim be corroborated by e.g. showing that response deadlines modulate this signal? Related to that, how can we be sure that the pre-stimulus patterns seen in motor beta feed into the decision making process itself? It is not hard to imagine why left and right pointing arrows directly trigger motor activity (i.e. simple priming), but does that also imply that such activity leaks into the decision process?

• I had a hard time understanding the choice of this specific design. As the authors write, they "primarily focused on the value biasing dynamics in common across these challenging regimes", so I wonder whether conditions with different value differences could have been more instructive (e.g., according to the authors' hypothesis, different levels of value should parametrically affect motor beta, whereas if this reflects a simple priming process, value itself should not matter). Alternatively, it should be better explained why these conditions were crucial for the current findings.

• One of the main selling points of the paper is that we currently lack a model that can explain fast value-based decisions, mostly because the constant drift rate assumption in evidence accumulation models seems invalid. This conjecture is very similar to the literature on response conflict, where performance in conflict tasks (such as Stroop, Flanker, etc.) is best modelled using a time-varying drift rate. I wonder to what extent the current data reflect the same process, i.e., the value cue "primes" a response, which then has to be suppressed in favor of the correct response. A clear difference is that the value remains relevant here, but could, e.g., the motor beta effect just reflect priming?

• If I understand correctly, the model was fit to all data, effectively ignoring between-participant differences. It is unclear why this was done (rather than fitting the data separately per participant or using a hierarchical model), because it induces substantial variance in the fits caused by between-participant differences.

  2. Reviewer #1 (Public Review):

The manuscript has several merits. Most remarkably, Corbett and colleagues developed an alternative to describing biases in decision making by shifting the starting point of evidence accumulation. Instead, they included a linearly increasing urgency buildup rate that was biased by a value cue presented before stimulus onset. Hence, the subsequent evidence accumulation process (labeled the "cumulative bias plus evidence function", p. 5) was affected by this bias in addition to gradually accumulated stimulus evidence. To allow the estimation of these new model parameters, starting points and urgency buildup rates were constrained to equal the amplitude and temporal slope of the corresponding beta signal captured in simultaneous EEG recordings.

They tested a set of alternative model implementations and found that the bias in stimulus-evidence accumulation was best represented by a concentrated burst of value-biased activity that mirrored voltage changes in the LRP. In comparison, a model with sustained value-biased activity provided an inferior account of the data. Moreover, the authors found that a model with gradually increasing evidence and noise provided a better account of the data than a stationary evidence accumulation function. This systematic comparison of alternative model implementations is a highlight of the paper, because it makes it possible to narrow down the neurocognitive processes underlying biased decision making.
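For readers unfamiliar with this class of model, the biasing scheme described above can be illustrated with a toy race simulation. Everything below — the parameter values, the Gaussian burst, the linear urgency slopes — is an illustrative assumption for exposition, not the authors' fitted model:

```python
# Toy race-model sketch of the multiphasic biasing scheme (illustrative
# parameters only, not the authors' fitted model): two motor-preparation
# signals race to a bound, with (1) a starting-point advantage for the
# high-value action, (2) a steeper delayed urgency buildup for the
# low-value action, (3) a transient detection-evoked burst toward the
# high-value action, and (4) noisy stimulus evidence whose strength
# grows after stimulus onset.
import math
import random

def simulate_trial(correct_is_high, rng, dt=0.001, bound=1.0, max_t=1.5):
    x_hi, x_lo = 0.15, 0.0   # earlier high-value preparation -> head start
    u_hi, u_lo = 0.8, 1.1    # countervailing urgency: low-value slope steeper
    t = 0.0
    while t < max_t:
        t += dt
        # transient value-biased deflection around stimulus detection (~150 ms)
        burst = 0.6 * math.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))
        # stimulus-discriminating evidence, ramping up after detection
        drift = 2.0 * max(0.0, t - 0.15)
        ev = drift * dt + 0.1 * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        sign = 1.0 if correct_is_high else -1.0
        x_hi += u_hi * dt + burst * dt + sign * ev
        x_lo += u_lo * dt - burst * dt - sign * ev
        if x_hi >= bound:
            return "high", t
        if x_lo >= bound:
            return "low", t
    return "none", t
```

Running many trials per condition reproduces the qualitative signature under these assumed parameters: responses are somewhat faster when the correct answer is the high-value option, even though the low-value accumulator builds up more steeply once it starts.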

What limits the generalizability of the authors' results is the sample size and composition. With only 18 participants (one of whom was a co-author of this manuscript), the robustness of the authors' modeling results remains an open question. Although 18 participants may provide sufficient power to test a simple main effect in a within-subject design, this does not speak to the issue of the reliability and generalizability of modeling results. Moreover, it is important to note that a sample of 18 participants gives only a power of about 50% to detect a medium-sized effect with α = .05. Nevertheless, I believe that the generalizability of the modeling results is a larger issue than the statistical power. It would have been interesting to assess whether the best-fitting model identified in Table 2 provides the best account of the data for all participants or only for a certain percentage of the sample.
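The reviewer's power figure can be checked with a short calculation. The sketch below uses a normal approximation, which runs a few points high; the exact noncentral-t computation for a one-sample or paired t-test with d = 0.5, n = 18, α = .05 gives roughly .5, consistent with "about 50%":

```python
# Approximate power of a two-sided one-sample / paired t-test via the
# normal approximation: power ~ Phi(d*sqrt(n) - z_{alpha/2}), plus the
# (usually negligible) opposite tail. Normal approximation only; the
# exact noncentral-t value for d=0.5, n=18 is ~.52.
from math import sqrt
from statistics import NormalDist

def approx_power(d, n, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    nc = d * sqrt(n)                         # noncentrality parameter
    # probability mass beyond either critical value under the alternative
    return (1 - NormalDist().cdf(z - nc)) + NormalDist().cdf(-z - nc)

power = approx_power(d=0.5, n=18)            # ~0.56 (approximation runs high)
```

For comparison, the same approximation puts n ≈ 34 near the conventional 80% power for a medium effect, which is why samples of this size are routinely flagged as underpowered.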
