Strategy-dependent effects of working-memory limitations on human perceptual decision-making

Curation statements for this article:
  • Curated by eLife



Abstract

Deliberative decisions based on an accumulation of evidence over time depend on working memory, and working memory has limitations, but how these limitations affect deliberative decision-making is not understood. We used human psychophysics to assess the impact of working-memory limitations on the fidelity of a continuous decision variable. Participants decided the average location of multiple visual targets. This computed, continuous decision variable degraded with time and capacity in a manner that depended critically on the strategy used to form the decision variable. This dependence reflected whether the decision variable was computed either: 1) immediately upon observing the evidence, and thus stored as a single value in memory; or 2) at the time of the report, and thus stored as multiple values in memory. These results provide important constraints on how the brain computes and maintains temporally dynamic decision variables.

Article activity feed

  1. Evaluation Summary:

    This paper employs sophisticated modeling of human behavior in well-controlled tasks to study how limitations of working memory constrain decision-making. Because both are key cognitive processes that have so far largely been studied in isolation, the paper should be of broad interest to neuroscientists and psychologists. The observed working memory limitations support and extend previous findings, but some of the most interesting claims need additional support.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)

  2. Reviewer #1 (Public Review):

    This paper by Schapiro and colleagues interrogates psychophysical data from human participants performing different variants of a task that requires judgments to be made about the spatial locations of visual stimuli. Some task variants necessitate the maintenance of one or more stimulus locations over a delay period ('Perceived' task blocks), thus requiring working memory; other variants necessitate computing the average location of several stimuli that in some cases are separated in time ('Computed' blocks), thus additionally requiring the integration of multiple pieces of stimulus information that is generally characteristic of deliberative decision-making. Through manipulation of different task factors (stimulus set size; delay duration) and application of sophisticated computational models that treat memory and decision representations as diffusing particles driven by multiple, sometimes shared, sources of random noise, the authors illuminate how well-known capacity and temporal limitations of working memory can place strong constraints on the accuracy of decision-making. Moreover, they demonstrate how the nature of this constraint depends on the specific strategy that is employed to solve the decision-making problem, which varies across individuals and task variants.
    The paper breaks promising new ground in explicitly connecting computations for working memory and decision-making - two essential cognitive functions that have each been intensively studied, but often independently from one another. As such, the paper should be of broad appeal. One of the main strengths of the paper is that it establishes a principled framework within which shared and unique sources of variability in working memory and decision-making behaviour can be modelled and decomposed, which will surely be of benefit to researchers in these areas. The insight that individuals appear to employ different strategies ("average-then-diffuse" [AtD] vs. "diffuse-then-average" [DtA]) on the decision-making task variants, potentially in an adaptive, context-dependent fashion, is also very interesting and warrants further investigation.
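    To make the AtD/DtA distinction concrete, here is a minimal simulation sketch. It is not the authors' model: it assumes independent Brownian diffusion of remembered locations with a single shared diffusion constant D, and it ignores the load-dependent scaling (the A parameter) and non-time-dependent noise discussed below.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def report_variance(strategy, n_items=5, D=1.0, T=1.0, n_trials=100_000):
        """Toy Monte Carlo: variance of the reported average location.

        AtD ("average-then-diffuse"): the average is computed at stimulus
        offset, so a single value diffuses over the delay T.
        DtA ("diffuse-then-average"): all n_items locations diffuse
        independently and are only averaged at report time.
        """
        if strategy == "AtD":
            reports = rng.normal(0.0, np.sqrt(D * T), size=n_trials)
        else:  # DtA
            items = rng.normal(0.0, np.sqrt(D * T), size=(n_trials, n_items))
            reports = items.mean(axis=1)
        return reports.var()

    print(report_variance("AtD"))  # ~ D*T   = 1.0
    print(report_variance("DtA"))  # ~ D*T/5 = 0.2
    ```

    Under these deliberately simple assumptions, averaging N independently diffusing items divides the time-dependent error variance by N, which is what makes the two strategies distinguishable in principle.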
    These highly positive points notwithstanding, I also feel that the paper in its current form has some limitations. These centre around clarity of presentation; appropriateness of some modelling choices; and possible missed opportunities to clarify both the consequences of strategy choice, and the apparent differences between working memory and decision computations:

    Clarity of presentation:
    I regrettably found the Results section to be a difficult read, and several key aspects of the approach and findings were only clarified for me following a careful reading of the Methods section. I mention this as the first point of concern in my review because, as described above, I feel there is much of interest in the paper, but I am concerned that the at times impenetrable nature of the Results might turn off readers and limit the paper's impact on the field. Most prominently, there is insufficient explanation of key model predictions that may be counterintuitive for many readers; a lack of clarity around what individual model parameters capture; and confusing elements in how the model fits are presented.

    Appropriateness of modelling choices:
    If I understand correctly, the A parameter (governing the relationship between the diffusion constant for a single point and the constants for multiple points) is estimated differently in the AtD and DtA models: in AtD, it's estimated using only data from Perceived blocks with set size > 1, and it plays no role in the AtD process (only, instead, in the memory maintenance process during the delay period of Perceived trials); whereas in DtA, it's estimated using data from both the same Perceived blocks *and* the Computed blocks at equivalent set sizes. This raises two concerns. The first is that, because the A parameters in each model are effectively fit to different data, any comparison of the parameter estimates (which is invited by placing them in the same table and by some of the discussion in the text [p.9]) needs to be carefully qualified in the associated text. The second concern is to my mind more serious: there is an implicit assumption in the fitting of the DtA model that the A parameter is fixed across Perceived and Computed blocks. There is to my mind a strong argument against making this assumption: Perceived trials with set size > 1 require working-memory maintenance of a *conjunction* of stimulus features (location and colour), whereas Computed trials require maintenance (assuming the DtA process is employed) of only a single feature per stimulus (location); thus, it can reasonably be expected that the effect of load may be more severe in Perceived than in Computed blocks. From what I can make out, this possibility is not allowed for in the presented model fits.
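    To illustrate the relaxation the reviewer proposes, here is a hypothetical sketch in which the load-scaling parameter is fitted separately per block type. The power-law form var = D1 * N**A * T and all numbers are assumptions for illustration, not the paper's parameterization:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_like(params, set_sizes, errors, T=1.0):
        """Gaussian NLL with report-error variance D1 * N**A * T (illustrative form)."""
        D1, A = params
        if D1 <= 0:
            return np.inf
        var = D1 * set_sizes ** A * T
        return 0.5 * np.sum(np.log(2 * np.pi * var) + errors ** 2 / var)

    rng = np.random.default_rng(1)
    set_sizes = np.repeat([1.0, 2.0, 3.0, 5.0], 250)
    # Synthetic "Perceived" block with a steeper load cost than a "Computed" one,
    # mimicking a conjunction (location + colour) vs. single-feature load.
    err_perc = rng.normal(0, np.sqrt(0.5 * set_sizes ** 1.5))
    err_comp = rng.normal(0, np.sqrt(0.5 * set_sizes ** 1.0))

    for name, err in [("Perceived", err_perc), ("Computed", err_comp)]:
        fit = minimize(neg_log_like, x0=[1.0, 1.0], args=(set_sizes, err),
                       method="Nelder-Mead")
        print(name, fit.x)  # separate (D1, A) estimates per block type
    ```

    Comparing fits with a shared vs. block-specific A (e.g., via a likelihood-ratio test) would directly test whether the load cost differs between the conjunction and single-feature conditions.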

    Implications of model fits:
    I feel that more could be done to clarify the nature and implications of two important features of the presented results. The first is the strategy used by different individuals in different task contexts. All else being equal, the DtA strategy should presumably be preferred to the AtD strategy, because it incurs less of a time-dependent increase in error. There is no investigation, though, of whether the participants who adopt DtA are indeed overall better performers on Computed blocks, or alternatively whether there may be some tradeoff evident in the data. One can imagine, for example, that DtA is the more demanding and energetically costly strategy to adopt (since it requires active maintenance of N points rather than a single point over time), and may only be employed to counteract other sources of error, for example by participants for whom non-time-dependent noise sources are especially large. The manuscript offers minimal speculation on, and from what I can tell no comprehensive effort to address, such questions.
    The second feature of the results that I feel is somewhat neglected is an exploration of the *differences* between the working-memory and decision-making components of the behaviour on Computed trials. My impression is that in the modelling framework, decision-specific computation is captured entirely by additional non-time-dependent noise sources (eta_MN), but these are given little attention in the manuscript.

  3. Reviewer #2 (Public Review):

    This paper uses human psychophysics to study the limitations of memorizing several objects at different spatial locations (working memory) and computing their average location. The authors convincingly show that both perceived and computed variables are subject to working-memory constraints (limited capacity and decreasing precision with time). The observed working-memory limitations are consistent with classical findings. Mathematical models based on diffusive bump-attractor dynamics are presented that make it possible, in principle, to distinguish different task strategies for computing the average location. The paper is very well written, and the different model predictions are exposed clearly. However, the experimental evidence for the two different strategies is rather limited, and additional analyses are needed to confirm that there are indeed two strategies. Moreover, the paper does not convincingly address the question of temporal evidence integration, a hallmark and, in my view, the main feature of perceptual decision-making.

    Strengths

    The investigation of how a latent variable (the computed average) degrades over time is a novel and interesting experimental paradigm for linking findings from working-memory studies (which usually do not require a computation or evidence accumulation) and perceptual decision-making experiments (which often probe categorical choices).

    The mathematical models are well-grounded in mechanistic models of working memory and make interesting predictions.

    Weaknesses

    The paper studies the integration of evidence over time only in a very limited setting (either two stimuli presented sequentially, or four stimuli presented simultaneously followed by one presented sequentially). As the authors point out, evidence integration means updating a "decision variable" over time (line 287: "evidence accumulation over time (i.e., in which a new piece of evidence is used to update a computed quantity"). However, for almost all subjects the DtA model, in which the decision variable is computed at the end of the trial, fits the data as well as AtD, the model that actually relies on updating a computed quantity (Fig. 10, delta_LL in the range of -3 to +3). Based on this, I don't think it is clear whether the sequential-condition experiment tests evidence integration per se, or rather the degradation of working memory with sequentially presented cues.
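    Whether delta_LL values of +/-3 are decisive depends on how large a difference the design could produce if the strategies genuinely differed. A minimal model-recovery sketch, reusing the toy diffusion assumptions from the first code block (all numbers are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T = 1.0

    def gauss_ll(errors, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + errors ** 2 / var)

    def fitted_ll(errors, set_sizes, model):
        """Maximum-likelihood fit of the single free diffusion constant D.
        Toy predictions: AtD -> var = D*T at any set size; DtA -> var = D*T/N."""
        scale = np.ones_like(set_sizes) if model == "AtD" else 1.0 / set_sizes
        D_hat = np.mean(errors ** 2 / (scale * T))  # closed-form Gaussian MLE
        return gauss_ll(errors, D_hat * scale * T)

    delta_ll = []
    for _ in range(200):                        # 200 simulated "subjects"
        N = np.repeat([1.0, 5.0], 100)          # two set sizes, 100 trials each
        errors = rng.normal(0, np.sqrt(T / N))  # ground truth: DtA
        delta_ll.append(fitted_ll(errors, N, "DtA") - fitted_ll(errors, N, "AtD"))
    print(np.median(delta_ll))  # typical advantage when a true difference exists
    ```

    If a recovery analysis like this, run on the actual designs and trial counts, yields typical |delta_LL| well above 3 when the strategies differ, then the observed near-zero values would argue for indistinguishability rather than for a mix of strategies.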

    The authors claim that some subjects follow the AtD strategy and others the DtA strategy, but the experimental evidence for this claim seems very weak. I take Fig. 10 as an example (Fig. 6 is similar). The authors conclude from the data in Fig. 10 that at the population level there is no significant difference between the models. At the individual-subject level, the delta_LL values are small (for most subjects |delta_LL| < 3), which I would interpret as both models fitting the data equally well. I think that in order to claim that there are indeed two different strategies in place, it needs to be shown that the data can only be explained by heterogeneous strategies (for example, following a methodology as in Stephan et al., NeuroImage 2009 and Rigoux et al., NeuroImage 2014). The alternative explanation is that the data do not allow the two strategies to be distinguished. As written, it seems ambiguous what exactly the finding is (compare line 244, "(...) participants had roughly equal tendencies to use either of the two strategies", implying that we can distinguish which strategy individual subjects are following, vs. line 257, "(...) neither of which was more likely than the other for a given participant", which implies the opposite).
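    For reference, the random-effects approach cited above can be sketched in a few lines. This follows the variational scheme of Stephan et al. (2009), using per-subject log likelihoods as a rough stand-in for log model evidences (a simplification; proper evidences should also penalize model complexity):

    ```python
    import numpy as np
    from scipy.special import digamma

    def random_effects_bms(log_ev, n_iter=200):
        """Variational random-effects model selection (Stephan et al., 2009).

        log_ev: (n_subjects, n_models) per-subject log model evidences.
        Returns Dirichlet parameters over model frequencies and per-subject
        posterior model-assignment probabilities.
        """
        alpha0 = np.ones(log_ev.shape[1])
        alpha = alpha0.copy()
        for _ in range(n_iter):
            log_u = log_ev + digamma(alpha) - digamma(alpha.sum())
            g = np.exp(log_u - log_u.max(axis=1, keepdims=True))
            g /= g.sum(axis=1, keepdims=True)  # P(subject n used model k)
            alpha = alpha0 + g.sum(axis=0)     # Dirichlet update
        return alpha, g

    rng = np.random.default_rng(4)
    log_ev = rng.normal(0, 2, size=(20, 2))    # fake evidences on a delta_LL scale
    alpha, g = random_effects_bms(log_ev)
    samples = rng.dirichlet(alpha, size=100_000)
    print((samples[:, 0] > samples[:, 1]).mean())  # exceedance probability
    ```

    Rigoux et al. (2014) add the protected exceedance probability, which explicitly accounts for the possibility that the models are indistinguishable; given the small delta_LL values here, that is arguably the relevant quantity.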