Absence of Systematic Effects of Internalizing Psychopathology on Learning Under Uncertainty
Curation statements for this article:
Curated by eLife
eLife Assessment
This study provides important results bearing on the ongoing debate about the relationship between internalizing psychopathology and learning under uncertainty. The methods and analyses are solid, and the results are backed by a large sample size, yet the study could still benefit from a more detailed discussion of how its experimental design and analysis differ from previous studies. If these concerns are addressed, the study will be of interest to researchers in clinical and computational psychiatry interested in behavioral markers of psychopathological symptoms.
This article has been reviewed by the following groups:
- Evaluated articles (eLife)
Abstract
Difficulties in adapting learning to meet the challenges of uncertain and changing environments are widely thought to play a central role in internalizing psychopathology, including anxiety and depression. This view stems from findings linking trait anxiety and transdiagnostic internalizing symptoms to learning impairments in laboratory tasks often used as proxies for real-world behavioral flexibility. These tasks typically require learners to adjust learning rates dynamically in response to uncertainty, for instance, increasing learning from prediction errors in volatile environments. However, prior studies have produced inconsistent and sometimes contradictory findings regarding the nature and extent of learning impairments in populations with internalizing disorders. To address this, we conducted eight experiments (N = 820) using predictive inference and reversal learning tasks, and applied a bifactor analysis to capture internalizing symptom variance shared across and differentiated between anxiety and depression. While we observed robust evidence for adaptive learning-rate modulation across participants, we found no convincing evidence of a systematic relationship between internalizing symptoms and either learning rates or task performance. These findings challenge prominent claims that learning difficulties are a hallmark feature of internalizing psychopathology and suggest that the relationship between these traits and adaptive behavior under uncertainty may be more subtle than previously thought.
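For readers unfamiliar with these paradigms, the following is a minimal sketch, not the authors' model, of why learning-rate adjustment matters: a delta-rule learner tracks a hidden mean that occasionally jumps, and weighting prediction errors more heavily pays off when jumps are frequent. The hazard rate, outcome SD, outcome range, and the two candidate learning rates below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def delta_rule_predictions(outcomes, learning_rate):
    """Delta-rule learner: each prediction moves toward the previous
    outcome by a fixed fraction (the learning rate) of the prediction error."""
    predictions = np.zeros(len(outcomes))
    for t in range(1, len(outcomes)):
        prediction_error = outcomes[t - 1] - predictions[t - 1]
        predictions[t] = predictions[t - 1] + learning_rate * prediction_error
    return predictions

rng = np.random.default_rng(0)
n_trials, hazard, outcome_sd = 400, 0.16, 20.0  # illustrative values only

# Generative process: a hidden mean jumps to a new value with
# probability `hazard` on each trial; outcomes are noisy observations.
means = np.empty(n_trials)
means[0] = rng.uniform(0, 300)
for t in range(1, n_trials):
    means[t] = rng.uniform(0, 300) if rng.random() < hazard else means[t - 1]
outcomes = rng.normal(means, outcome_sd)

# In a volatile environment like this one, a higher learning rate
# tracks the hidden mean more accurately despite amplifying noise.
for lr in (0.3, 0.7):
    err = np.mean(np.abs(delta_rule_predictions(outcomes, lr) - means))
    print(f"learning rate {lr}: mean tracking error {err:.1f}")
```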
Article activity feed
Reviewer #1 (Public review):
The authors conducted a series of experiments using two established decision-making tasks to clarify the relationship between internalizing psychopathology (anxiety and depression) and adaptive learning in uncertain and volatile environments. While prior literature has reported links between internalizing symptoms - particularly trait anxiety - and maladaptive increases in learning rates or impaired adjustment of learning rates, findings have been inconsistent. To address this, the authors designed a comprehensive set of eight experiments that systematically varied task conditions. They also employed a bifactor analysis approach to more precisely capture the variance associated with internalizing symptoms across anxiety and depression. Across these experiments, they found no consistent relationship between internalizing symptoms and learning rates or task performance, concluding that this purported hallmark feature may be more subtle than previously assumed.
Strengths:
(1) A major strength of the paper lies in its impressive collection of eight experiments, which systematically manipulated task conditions such as outcome type, variability, volatility, and training. These were conducted both online and in laboratory settings. Given that trial conditions can drive or obscure observed effects, this careful, systematic approach enables a robust assessment of behavior. The consistency of findings across online and lab samples further strengthens the conclusions.
(2) The analyses are impressively thorough, combining model-agnostic measures, extensive computational modeling (e.g., Bayesian, Rescorla-Wagner, Volatile Kalman Filter; a minimal sketch of this model family's core delta-rule update follows this list), and assessments of reliability. This rigor contributes meaningfully to broader methodological discussions in computational psychiatry, particularly concerning measurement reliability.
(3) The study also employed two well-established, validated computational tasks: a game-based predictive inference task and a binary probabilistic reversal learning task. This choice ensures comparability with prior work and provides a valuable cross-paradigm perspective for examining learning processes.
(4) I also appreciate the open availability of the analysis code, which will be of substantial value to others in the field using similar tasks.
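To make point (2) concrete, here is a minimal sketch of the Rescorla-Wagner model, the simplest of the model families named above, applied to binary choice data. The softmax choice rule and the parameter names are one common instantiation, offered as an illustration rather than the authors' exact specification.

```python
import numpy as np

def rescorla_wagner_nll(choices, rewards, alpha, beta):
    """Negative log-likelihood of binary choices under a Rescorla-Wagner
    learner with a softmax choice rule.
    choices: option chosen per trial (0 or 1); rewards: outcome (0 or 1);
    alpha: learning rate; beta: inverse temperature."""
    q = np.zeros(2)  # value estimates for the two options
    nll = 0.0
    for c, r in zip(choices, rewards):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # P(choose option 1)
        nll -= np.log((p1 if c == 1 else 1.0 - p1) + 1e-12)
        q[c] += alpha * (r - q[c])  # delta-rule update on the chosen option
    return nll
```

Fitting then amounts to minimizing this quantity over alpha and beta per participant; richer models such as the Volatile Kalman Filter replace the fixed alpha with a trial-by-trial, uncertainty-dependent learning rate.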
Weaknesses:
(1) While the overall sample size (N = 820 across eight experiments) is commendable, the number of participants per experiment is relatively modest, especially in light of the inherent variability in online testing and the typically small effect sizes of correlations with mental health traits (e.g., r = 0.1-0.2; a back-of-the-envelope sample-size calculation for such effects follows this review). The authors briefly acknowledge that any true effects are likely small; however, the rationale behind the sample sizes selected for each experiment is unclear. This is especially important given that previous studies using the predictive inference task (e.g., Seow & Gillan, 2020, N > 400; Loosen et al., 2024, N > 200) have reported non-significant associations between trait anxiety symptoms and learning rates.
(2) The motivation for focusing on the predictive inference task is also somewhat puzzling, given that no cited study has reported associations between trait anxiety and parameters of this task. While this is mitigated by the inclusion of a probabilistic reversal learning task, which has a stronger track record in detecting such effects, the study misses an opportunity to examine whether individual differences in learning-related measures correlate across the two tasks, which could clarify whether they tap into shared constructs.
(3) The parameterization of the tasks, particularly the use of high standard deviations (SDs) of 20 and 30 for outcome distributions and hazard rates of 0.1 and 0.16, warrants further justification; a sketch of how these two parameters jointly determine the evidence available to the learner follows this list. Are these hazard rates sufficiently distinct? Might the wide SDs reduce sensitivity to volatility changes? Prior studies of the circle version of this predictive inference task (e.g., Vaghi et al., 2019; Seow & Gillan, 2020; Marzuki et al., 2022; Loosen et al., 2024; Hoven et al., 2024) typically used SDs around 12. Indeed, the Supplementary Materials suggest that the variability manipulations did not substantially affect learning rates (Figure S5), calling into question whether the task manipulations achieved their intended cognitive effects.
(4) Relatedly, while the predictive inference task showed good reliability, the reversal learning task exhibited only "poor-to-moderate" reliability in its learning-rate estimates. Given that previous findings linking anxiety to learning rates have often relied on this task, these reliability issues raise concerns about the robustness and generalizability of conclusions drawn from it.
(5) As the authors note, the study relies on a subclinical sample. This limits the generalizability of the findings to individuals with diagnosed disorders. A growing body of research suggests that relationships between cognition and symptomatology can differ meaningfully between general population samples and clinical groups. For example, Hoven et al. (2024) found differing results in the predictive inference task when comparing OCD patients, healthy controls, and high- vs. low-symptom subgroups.
(6) Finally, the operationalization of internalizing symptoms in this study appears to focus on anxiety and depression. However, obsessive-compulsive disorder is also generally considered an internalizing disorder, which presents a gap in the literature cited in the paper, particularly given that numerous studies using the predictive inference task have focused on OCD/compulsivity (e.g., Vaghi et al., 2019; Seow & Gillan, 2020; Marzuki et al., 2022; Loosen et al., 2024; Hoven et al., 2024) rather than on trait anxiety per se.
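To illustrate the concern in point (3), consider a reduced Bayesian observer in the spirit of Nassar and colleagues' change-point models: the probability that a given prediction error signals a change-point depends jointly on the hazard rate and the outcome noise. The uniform distribution over new means, the 0-300 outcome range, and the example error of 40 below are illustrative assumptions, not the paper's specification.

```python
from scipy.stats import norm

def changepoint_probability(error, sd, hazard, outcome_range=300.0):
    """Posterior probability that a change-point occurred, given a prediction
    error, assuming new means are drawn uniformly over the outcome range."""
    like_change = hazard * (1.0 / outcome_range)       # outcome could land anywhere
    like_stay = (1 - hazard) * norm.pdf(error, 0, sd)  # outcome should land near prediction
    return like_change / (like_change + like_stay)

# The same prediction error is far weaker evidence for a change-point
# under wide outcome noise, which could mute learning-rate adaptation:
for sd in (12.0, 20.0, 30.0):
    print(f"SD {sd:>4}: P(change | error = 40) = "
          f"{changepoint_probability(40.0, sd, hazard=0.1):.2f}")
```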
Overall:
Despite the named limitations, the authors have done very impressive work in rigorously examining the relationship between anxiety/internalizing symptoms and learning rates in commonly used decision-making tasks under uncertainty. Their conclusion is well supported by the consistency of their null findings across diverse task conditions, though its generalizability may be limited by some features of the task design and sample. This study provides strong evidence that will guide future research, whether by shifting the focus toward dysfunctions with larger effect sizes or by extending investigations to clinical populations.
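To quantify the sample-size concern in point (1), a standard Fisher-z approximation (not a calculation the authors report) gives the N needed to detect a correlation at 80% power with a two-sided alpha of 0.05:

```python
import numpy as np
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect correlation r (two-sided test),
    via the Fisher z transformation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(np.ceil(((z_alpha + z_power) / np.arctanh(r)) ** 2 + 3))

for r in (0.10, 0.15, 0.20):
    print(f"r = {r:.2f}: n ~ {n_for_correlation(r)}")  # ~783, ~347, ~194
```

By this approximation, detecting r = 0.15 requires roughly 350 participants, so individual experiments with on the order of a hundred participants each (820 spread across eight experiments) would be powered only for considerably larger effects.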
Reviewer #2 (Public review):
Summary:
In this work, the authors recruited a large sample of participants to complete two well-established paradigms: the predictive inference task and the volatile reversal learning task. With this dataset, they not only replicated several classical findings on uncertainty-based learning from previous research but also demonstrated that individual differences in learning behavior are not systematically associated with internalizing psychopathology. These results provide valuable large-scale evidence for this line of research.
Strengths:
(1) Use of two different tasks.
(2) Recruitment of a large sample of participants.
(3) Inclusion of multiple experiments with different conditions, demonstrating strong scientific rigor.
Weaknesses:
Below are questions rather than 'weaknesses':
(1) This study uses a large human sample, which is a clear strength. However, was the study preregistered? It would also be useful to report a power analysis to justify the sample size.
(2) Previous studies have tested two core hypotheses: (a) that internalizing psychopathology is associated with overall higher learning rates, and (b) that it is associated with learning rate adaptation. In the first experiment, the findings seem to disconfirm only the first hypothesis. I found it unclear how, in the predator task, participants were expected to adjust their learning rate to adapt to volatility. Could the authors clarify this point?
(3) According to the Supplementary Information, Model 13 showed the best fit, yet the authors selected Model 12 due to the larger parameter variance in Model 13. What would the results of Model 13 look like? Furthermore, do Models 12 and 13 correspond to the optimal models identified by Gagne et al. (2020)? Please clarify.
(4) In the Discussion, the authors addressed both task reliability and parameter reliability. However, the term reliability seems to be used differently in these two contexts. For example, good parameter recovery indicates strong reliability in one sense, but can we then directly equate this with parameter reliability? It would be helpful to define more precisely what is meant by reliability in each case; a sketch contrasting the two senses follows this list.
(5) The Discussion also raises the possibility that limited reliability may represent a broader challenge facing the interdisciplinary field of computational psychiatry. What, in the authors' view, are the key future directions for the field to mitigate this issue?
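A minimal sketch of the distinction raised in point (4), using a hypothetical Rescorla-Wagner learner with a fixed inverse temperature: parameter recovery correlates fitted parameters with the known generating parameters within one simulated dataset, whereas test-retest reliability correlates parameters fitted independently to two "sessions" from the same individuals. All task settings below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
BETA = 5.0  # inverse temperature, held fixed for simplicity

def simulate(alpha, n_trials=200, p_reward=(0.2, 0.8)):
    """Simulate choices and rewards from a Rescorla-Wagner learner."""
    q = np.zeros(2)
    choices, rewards = np.zeros(n_trials, int), np.zeros(n_trials)
    for t in range(n_trials):
        p1 = 1.0 / (1.0 + np.exp(-BETA * (q[1] - q[0])))
        c = int(rng.random() < p1)
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])
        choices[t], rewards[t] = c, r
    return choices, rewards

def fit_alpha(choices, rewards):
    """Maximum-likelihood learning rate for one dataset."""
    def nll(alpha):
        q, total = np.zeros(2), 0.0
        for c, r in zip(choices, rewards):
            p1 = 1.0 / (1.0 + np.exp(-BETA * (q[1] - q[0])))
            total -= np.log((p1 if c == 1 else 1.0 - p1) + 1e-12)
            q[c] += alpha * (r - q[c])
        return total
    return minimize_scalar(nll, bounds=(0.01, 0.99), method="bounded").x

true_alphas = rng.uniform(0.1, 0.9, 30)
fits_1 = [fit_alpha(*simulate(a)) for a in true_alphas]  # "session 1"
fits_2 = [fit_alpha(*simulate(a)) for a in true_alphas]  # "session 2"

print(f"parameter recovery r = {pearsonr(true_alphas, fits_1)[0]:.2f}")
print(f"test-retest r        = {pearsonr(fits_1, fits_2)[0]:.2f}")
```

Even here, with a perfectly stable underlying parameter, the test-retest correlation tends to fall below the recovery correlation because estimation noise enters twice; good recovery therefore does not guarantee equally good test-retest reliability, and the two senses should be kept distinct.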