Non-decision time-informed collapsing threshold diffusion model: A joint modeling framework with identifiable time-dependent parameters

Curation statements for this article:
  • Curated by eLife


Abstract

Over the past sixty years, evidence accumulation models have emerged as a dominant framework for explaining the neural and behavioral aspects of the process underlying decision making. These models have also been widely used as measurement instruments to assess individual differences in latent cognitive constructs underlying decision making. A central assumption of most of these models is that decision makers accumulate noisy evidence until a fixed decision threshold is reached. However, both behavioral and neuroscientific findings, along with theoretical considerations related to optimality, suggest that the decision threshold varies over time. Although time-dependent threshold models often provide a better account of empirical data, a major challenge associated with these models is the unreliable estimation of their parameters. This limitation has led researchers to emphasize model-fit comparisons rather than interpreting parameter values or accounting for individual differences in the dynamics of the decision threshold. In this work, we address the unreliable parameter estimation in time-dependent threshold diffusion models by proposing a joint modeling approach that links non-decision time to external observations. Parameter recovery simulations demonstrate that informing the diffusion model with trial-level noisy measurements of non-decision time substantially improves the reliability of parameter estimation for time-dependent threshold diffusion models. Additionally, we reanalyzed experimental data from two perceptual decision-making tasks to illustrate the feasibility of the proposed modeling approach. Non-decision time measurements were extracted from electroencephalography (EEG) recordings using the hidden multivariate pattern (HMP) method. The cognitive modeling results revealed that, in addition to yielding more reliable parameter estimates, constraining non-decision time improves the fit to behavioral data.

Article activity feed

  1. eLife Assessment

    This study provides a valuable advance in understanding how decision boundaries may change over time during simple choices by introducing a method that uses information about non-decision components to improve parameter estimates. The evidence supporting the main claims is convincing, with clear demonstrations on simulated and real data, although additional model comparison work would further strengthen confidence. The findings will be of interest to researchers studying human decision processes and the methods used to analyse them.

  2. Reviewer #1 (Public review):

    Summary:

    This paper proposes a non-decision time (NDT)-informed approach to estimating time-varying decision thresholds in diffusion models of decision making. The manuscript motivates the method well, outlines the identifiability issues it is intended to address, and evaluates it using simulations and two empirical datasets. The aim is clear, the scope is deliberately focused, and the manuscript is well written. The core idea is interesting, technically grounded, and a meaningful contribution to ongoing work on collapsing thresholds.

    Strengths:

    The manuscript is logically structured and easy to follow. The emphasis on parameter recovery is appropriate and appreciated. The finding that the exponential NDT-informed function produces substantially better recovery than the hyperbolic form is useful, given the importance placed on identifiability earlier in the paper. The threshold visualisations are also helpful for interpreting what the models are doing. Overall, the work offers a well-defined, methodologically oriented contribution that will interest researchers working on time-varying thresholds.

    Weaknesses / Areas for Clarification:

    A few points would benefit from clarification, additional analysis, or revised presentation:

    (1) It would help readers to see a concrete demonstration of the trade-off between NDT and collapsing thresholds, to give a sense of the scale of the identifiability problem motivating the work.

    (2) Before moving to the empirical datasets, the manuscript really needs a simulation-based model-recovery comparison, since all major conclusions of the empirical applications rely on model comparison. One approach might be to simulate from (a) a fixed-threshold (FT) model with across-trial drift variability and (b) one of the collapsing-threshold (CT) models, then fit both models to each of the simulated datasets. This would address a longstanding issue: CT models are sometimes preferred even when the estimated collapse in the thresholds is close to zero. A recovery study would confirm that model selection behaves sensibly in the new framework.
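
    For concreteness, the bookkeeping for such a recovery study might look like the sketch below; `simulate` and `fit_bic` are hypothetical placeholders standing in for the paper's actual simulation and fitting routines:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical placeholders for the paper's routines: the simulator should
    # draw a (choice, RT) dataset from random parameters of the named model,
    # and fit_bic should fit `model` by maximum likelihood and return its BIC.
    def simulate(model, rng):
        return rng.normal(size=500)      # placeholder data

    def fit_bic(data, model):
        return float(rng.normal())       # placeholder BIC

    models = ["FT_with_drift_var", "CT_collapsing"]
    n_reps = 100

    # Recovery matrix: rows = generating model, cols = BIC-preferred model;
    # off-diagonal mass quantifies model-selection confusion.
    recovery = np.zeros((len(models), len(models)), dtype=int)
    for i, gen in enumerate(models):
        for _ in range(n_reps):
            data = simulate(gen, rng)
            bics = [fit_bic(data, m) for m in models]
            recovery[i, int(np.argmin(bics))] += 1

    print(recovery)
    ```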

    (3) An additional subtle point is that BIC is defined in terms of the maximised log-likelihood of the model for the data being modelled. In the joint model, the parameter estimates maximise the combined likelihood of the behavioural and non-decision-time data, which means the behavioural log-likelihood evaluated at the joint MLEs is not the maximised behavioural log-likelihood. If BIC is computed for the behavioural data only, this breaks the assumptions underlying BIC; the only valid BIC here is one defined for the joint model using the joint likelihood.
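
    In symbols (a sketch, with $y$ the behavioural data, $m$ the NDT measurements, $k$ the number of free parameters, and $n$ the number of observations), the only BIC consistent with the joint fit is:

    ```latex
    \hat{\theta}_{\text{joint}}
      = \arg\max_{\theta}\Bigl[\log L_{\text{behav}}(y \mid \theta)
                             + \log L_{\text{NDT}}(m \mid \theta)\Bigr],
    \qquad
    \mathrm{BIC}_{\text{joint}}
      = k \ln n
        - 2\Bigl[\log L_{\text{behav}}(y \mid \hat{\theta}_{\text{joint}})
               + \log L_{\text{NDT}}(m \mid \hat{\theta}_{\text{joint}})\Bigr]
    ```

    whereas a behavioural-only BIC evaluated at $\hat{\theta}_{\text{joint}}$ uses $\log L_{\text{behav}}(y \mid \hat{\theta}_{\text{joint}})$, which is below the behavioural maximum by construction.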

    (4) Table 1 sets up the Study 1 comparisons, but there is no row for the FT model. Similarly, Figures 10 and 13 would be more informative if they included FT predictions. This matters because, in Study 1, the FT model appears to fit aggregate accuracy better than the BIC-preferred collapsing model, a result currently shown only in Appendix 5. Some discussion of why would strengthen the argument.

    (5) In Figure 7, the degree of decay underestimation is obscured by the use of a density plot rather than a scatterplot like the other panels of the same figure; presenting it the same way would make the mis-recovery more transparent. The accompanying text may also need clarification: when data are generated from an FT model with across-trial drift variability, the NDT-informed model seems to infer essentially flat (FT) boundaries. If that is correct, the model must be misfitting the simulated data. This is actually a useful result, as it suggests that across-trial drift variability in FT models is discriminable from collapsing-threshold models, and it would be good to make this explicit.

    (6) Given the large recovery advantage of the exponential NDT-informed function over the hyperbolic one, the authors may want to consider whether the results favour adopting the former more generally; on these findings, I would recommend the exponential NDT-informed model for future use.

    (7) In Study 2 (Figure 13), all models qualitatively miss an interesting empirical pattern: under speed emphasis, errors are faster than correct responses, while under accuracy emphasis, errors become slower. The error RT distribution in the speed condition is especially poorly captured. It would be helpful for the authors to comment, as this suggests that something theoretically relevant is missing from all the models tested.

    (8) The threshold visualisations extend to 3 seconds, yet both datasets show decisions mostly finishing by ~1.5 seconds. Shortening the x-axis would better reflect the empirical RT distributions and avoid unintentionally overstating the timescale of the decision process.

  3. Reviewer #2 (Public review):

    Summary:

    The authors use simulations and empirical data fitting to demonstrate that informing a decision model with single-trial estimates of non-decision time can guide it to more reliable parameter estimates, especially when the model has collapsing bounds.

    Strengths:

    The paper is well written and motivated, with clear depth of knowledge in the areas of neurophysiology of decision-making, sequential sampling models, and, in particular, the phenomenon of collapsing decision bounds.

    Two large-scale simulations are run to test parameter recovery, and two empirical datasets are fit and assessed. The fitting procedures themselves are state-of-the-art, and the study makes use of a very new and well-designed ERP decomposition algorithm that provides single-trial estimates of the duration of diffusion. The results provide inferences about the operation of decision-bound collapse. All of this is impressive.

    Weaknesses:

    This is an interesting and promising idea, but a very important issue is not clear: it is an intuitive principle that information from an external empirical source can enhance the reliability of parameter estimates for a given model, but how can the overall BIC improve, unless it is in fact a different model? Unfortunately, it is not clear whether and how the model structure itself differs between the NDT-informed and non-NDT-informed cases. Ideally, they are the same actual model, but with one getting extra guidance on where to place the tau and/or sigma parameters from external measurements. The absence of sigma (non-decision time variance) estimates for the non-NDT-informed model, however, suggests it is different in structure, not just in its lack of constraints. If they were the same model, whether they do or do not possess non-decision time variability (which is not currently clear), the only possible reason that the NDT-informed model could achieve better BIC is because the non-NDT-informed model gets lost in the fitting procedure and fails to find the global optimum. If they are in fact different models - for example, if the NDT-informed model is endowed with NDT variability, while the non-NDT-informed model is not - then the fit superiority doesn't necessarily say anything about an NDT-informed reliability boost, but rather just that a model with NDT variability fits better than one without.
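
    To make my reading concrete: if the two variants share the same behavioural likelihood and differ only by an added measurement term linking the trial-level EEG-derived NDT measurements $m_i$ to the parameters $(\tau, \sigma)$, they are the same model under different constraints. In symbols (a sketch of my assumption, for the authors to confirm or correct):

    ```latex
    \hat{\theta}_{\text{uninformed}}
      = \arg\max_{\theta}\; \log L_{\text{behav}}(y \mid \theta)
    \qquad\text{vs.}\qquad
    \hat{\theta}_{\text{informed}}
      = \arg\max_{\theta}\Bigl[\log L_{\text{behav}}(y \mid \theta)
        + \textstyle\sum_{i}\log p(m_i \mid \tau, \sigma)\Bigr]
    ```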

    One reason this is unclear is that Footnote 4 says that this study did not allow trial-to-trial variability in nondecision time, but the entire premise of using variable external single-trial estimates of nondecision times (illustrated in Figure 2) assumes there is nondecision time variability and that we have access to its distribution.

    It is good that there is an Intro section explaining how the tradeoff between NDT and collapsing-bound parameters makes them difficult to identify simultaneously, but I think it needs more work to be clear. First, it is not impossible to identify both; this is not like, say, pre- and post-decisional nondecision time components, which genuinely cannot be resolved from behaviour alone. The intro had already explained that collapsing bounds affect RT distribution shapes in specific ways, and mean (or invariant) NDT obviously cannot do that: it can only translate the whole distribution earlier or later on the time axis. This is at odds with the phrasing "one CANNOT estimate these three parameters simultaneously," so it should first be clarified that the tradeoff is not absolute. Second, many readers will wonder whether it is simply a matter of characterising the bound-collapse time course as beginning at accumulation onset instead of stimulus onset; does that not sidestep the issue? Third, assuming the above can be explained and there is a reason to keep the collapse function aligned to stimulus onset, the tradeoff could be illustrated by picking two distinct sets of parameter values for non-decision time, starting threshold, and decay rate that produce almost identical bound dynamics as a function of RT (see the sketch below). Simply giving the formula on line 211 and saying "there is a tradeoff" is not going to work for most readers; most will need more hand-holding.
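
    To illustrate the kind of demonstration I have in mind, assume (purely for illustration; the paper's parameterisation on line 211 may differ) an exponential collapse b(t) = b0 * exp(-(t - tau)/lambda) beginning at accumulation onset tau. Then any two (tau, b0) pairs with b0 * exp(tau/lambda) held constant trace exactly the same bound as a function of RT:

    ```python
    import numpy as np

    lam = 0.5                       # decay time constant (s), shared by both sets
    tau_a, b0_a = 0.30, 1.00        # set A: non-decision time, starting threshold
    tau_b = 0.15                    # set B: shorter non-decision time ...
    b0_b = b0_a * np.exp((tau_a - tau_b) / lam)   # ... offset by a higher threshold

    # Bound height at the moment of response, RT = decision time + NDT, under
    # an illustrative exponential collapse starting at accumulation onset.
    def bound_at_rt(rt, b0, tau):
        return b0 * np.exp(-(rt - tau) / lam)

    rt = np.linspace(0.4, 2.0, 9)   # response times (s)
    print(np.allclose(bound_at_rt(rt, b0_a, tau_a),
                      bound_at_rt(rt, b0_b, tau_b)))   # True: the curves coincide
    ```

    A linear collapse is absorbed the same way, since b0 - c(RT - tau) = (b0 + c tau) - c RT, which also bears on my question about the linear form below.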

    A lognormal distribution is used as line 231 says it "must" produce a right-skew. Why? It is unusual for non-decision time distribution to be asymmetric in diffusion modeling, so this "must" statement must be fully explained and justified. Would I be right in saying that if either fixed or symmetrically distributed nondecision times were assumed, as in the majority of diffusion models, then the non-identifiability problem goes away? If the issue is one faced only by a special class of DDMs with lognormal NDT, this should be stated upfront.

    In the simulation study methods, is the only difference between NDT-informed and non-informed models that the non-NDT-informed must also estimate tau and sigma, whereas the NDT-informed model "knows" these two parameters and so only has the other three to estimate? And is it the exact same data that the two models are fit to, in each of the simulation runs? Why is sigma missing from the uninformed part of Figure 4? If it is nondecision time variability, shouldn't the model at least be aware of the existence of sigma and try to estimate it, in order for this to be a meaningful comparison?

    I am curious whether a linear bound collapse suffers from the same identifiability issues with NDT, or whether it was not considered here because it is so suboptimal next to the hyperbolic/exponential forms.

    The approach using HMP rests on the assumption that accumulation onset is marked by the peak of a certain neural event, but even if it is highly predictive of accumulation onset, depending on what it reflects, it could come systematically earlier or later than the actual accumulation onset. Could the authors comment on what implications this might have for the approach?

    Figure 7: for this simulation, it would be helpful to know the degree to which one can get away with not equipping the model to capture drift-rate variability when that variability actually produces appreciably slow errors. The approach here is to sample uniformly from ranges of the parameters, but how many of these produce data that would be recognised as similar to human behaviour on typical perceptual decision tasks? The authors point out that only 5% of fits estimate an appreciable bound collapse, but suppose only 10% of the parameter vectors produce data in a typical RT range with typical error rates, half of those produce an appreciable downturn in accuracy for slower RTs, and all of the latter account for that 5%; that would be quite a different story. An easy fix would be to plot estimated decay as a scatterplot against the rate of decline in accuracy from the median RT to the slowest RT, to visualise the degree to which slow errors can be absorbed by the no-drift-variability model without falsely estimating steep bound collapse (see the sketch below). In general, I am not so sure of the value of this section since, in principle, there is no getting around the fact that if what is in truth a drift-variability source of slow errors is fit with a model that can only capture it with a collapsing bound, the model will either estimate a collapsing bound or simply fail to capture those slow errors.
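
    Concretely, the diagnostic could look like the sketch below; the data and decay estimates here are random placeholders standing in for one simulated dataset and one fitted decay per parameter vector:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)

    def accuracy_decline(rt, correct):
        """Accuracy in the fast half of trials (RT <= median) minus accuracy
        in the slow half: a simple index of how pronounced slow errors are."""
        fast = rt <= np.median(rt)
        return correct[fast].mean() - correct[~fast].mean()

    # Placeholders: one (rt, correct) dataset and one estimated decay per
    # simulated parameter vector; real values would come from the Figure 7 fits.
    declines, decays = [], []
    for _ in range(200):
        rt = rng.lognormal(mean=-0.5, sigma=0.4, size=500)   # fake RTs (s)
        correct = (rng.random(500) < 0.85).astype(float)     # fake accuracies
        declines.append(accuracy_decline(rt, correct))
        decays.append(abs(rng.normal(0.0, 0.3)))             # fake decay estimate

    plt.scatter(declines, decays, s=8)
    plt.xlabel("accuracy decline (fast half minus slow half)")
    plt.ylabel("estimated bound decay")
    plt.show()
    ```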

  4. Reviewer #3 (Public review):

    The current paper addresses an important issue in evidence accumulation models: many modelers implement flat decision boundaries because the collapsing alternatives are hard to estimate reliably. Here, using simulations, the authors demonstrate that parameter recovery can be drastically improved by providing the model with additional data (specifically, an EEG-informed estimate of non-decision time). Moreover, in two empirical datasets, those EEG-informed models are shown to provide a better fit to the data. The method seems sound and promising and might inform future work on the debate over flat versus collapsing choice boundaries. As an evidence-accumulation enthusiast, I am quite excited about this work, although for a broader audience the immediate applicability of the approach seems limited because it requires EEG data, which limits widespread use of the method and, for example, its application to questions about individual differences that require a very large N.