Overcoming distortion in multidimensional predictive representation
Curation statements for this article:
Curated by eLife
eLife Assessment
This manuscript makes a valuable contribution to understanding learning in multidimensional environments with spurious associations, which is critical for understanding learning in the real world. The evidence is based on model simulations and a preregistered human behavioral study, but remains incomplete because of inconclusive empirical results and insufficiencies in the modeling. Moreover, there are open questions about the nature and extent to which the behavioral task induced semantic congruency.
Listed in: Evaluated articles (eLife)
Abstract
Predicting how our actions will affect future events is essential for effective behavior. However, learning predictive relationships is not trivial in a multidimensional world where numerous causes bring any one event about. Here we examine (1) how these multidimensional dynamics may distort predictive learning, and (2) how inductive biases may mitigate these harmful effects. We developed a theoretical framework for studying this problem using a computational successor features model. Model simulations demonstrate how spurious observations arise in such contexts to compound noise in memory and limit the generalizability of learning. We then provide behavioral evidence in human participants for a semantic inductive bias that constrains these predictive learning dynamics based on the semantic relatedness of causes and outcomes. Together, these results show that prior knowledge can shape multidimensional predictive learning, potentially minimizing severe memory distortions that may arise from complex everyday observations.
Article activity feed
-
Reviewer #1 (Public review):
Summary:
This paper reports model simulations and a human behavioral experiment studying predictive learning in a multidimensional environment. The authors claim that semantic biases help people resolve ambiguity about predictive relationships due to spurious correlations.
Strengths:
(1) The general question addressed by the paper is important.
(2) The paper is clearly written.
(3) Experiments and analyses are rigorously executed.
Weaknesses:
(1) Showing that people can be misled by spurious correlations, and that they can overcome this to some extent by using semantic structure, is not especially surprising to me. Related literature already exists on illusory correlation, illusory causation, superstitious behavior, and inductive biases in causal structure learning. None of this work features in the paper, which is narrowly focused on a particular class of predictive representations that, in fact, may not be particularly relevant for this experiment. I also feel that the paper is rather long and complex for what is ultimately a simple point based on a single experiment.
(2) Putting myself in the shoes of an experimental subject, I struggled to understand the nature of semantic congruency. I don't understand why the assumption that the builder and terminal robots should have similar features is considered a natural semantic inductive bias. Humans build things all the time that look different from them, and we build machines that construct artifacts that look different from the machines. I think the fact that the manipulation worked attests to the ability of human subjects to pick up on patterns, rather than supporting the idea that this reflects an inductive bias they brought to the experiment.
(3) As the authors note, because the experiment uses only a single transition, it's not clear that it can really test the distinctive aspects of the SR/SF framework, which come into play over longer horizons. So I'm not really sure to what extent this paper is fundamentally about SFs, as it's currently advertised.
(4) One issue with the inductive bias as defined in Equation 15 is that I don't think it will converge to the correct SR matrix. Thus, the bias is not just affecting the learning dynamics, but also the asymptotic value (if there even is one; that's not clear either). As an empirical model, this isn't necessarily wrong, but it does mess with the interpretation of the estimator. We're now talking about a different object from the SR.
(5) Some aspects of the empirical and model-based results only provide weak support for the proposed model. The following null effects don't agree with the predictions of the model:
(a) No effect of condition on reward.
(b) No effect of condition on composition spurious predictiveness.
(c) No effect of condition on the fitted bias parameter. The authors present some additional exploratory analyses that they use to support their claims, but this should be considered weaker support than the results of preregistered analyses.
(6) I appreciate that the authors were transparent about which predictions weren't confirmed. I don't think they're necessarily deal-breakers for the paper's claims. However, these caveats don't show up anywhere in the Discussion.
(7) I also worry that the study might have been underpowered to detect some of these effects. The preregistration doesn't describe any pilot data that could be used to estimate effect sizes, and it doesn't present any power analysis to support the chosen sample sizes, which I think are on the small side for this kind of study.
-
Reviewer #2 (Public review):
Summary:
This work by Prentis and Bakkour examines how predictive memory can become distorted in multidimensional environments and how inductive biases may mitigate these distortions. Using both computational simulations and an original robot-building task in humans with manipulated semantic congruency, the authors show that spurious observations can amplify noise throughout memory. They hypothesize, and preliminarily support, the idea that humans deploy inductive biases to suppress such spurious information.
Strengths:
(1) The manuscript addresses an interesting and understudied question: specifically, how learning is distorted by spurious observations in high-dimensional settings.
(2) The theoretical modeling and feature-based successor representation analyses are methodologically sound, and simulations illustrate expected memory distortions due to multidimensional transitions.
(3) The behavioral experiment introduces a creative robot-building paradigm and manipulates transitions to test the effect of semantic congruency (more precisely, category/part congruency, as explained below).
Weaknesses:
(1) The semantic manipulation may be more about category congruence (e.g., body part function) than semantic meaning. The robot-building task seems to hinge on categorical/functional relationships rather than semantic abstraction. Strong evidence for semantic learning would require richer, more genuinely semantic manipulations.
(2) The experimental design remains limited in dimensionality and depth. Simulated higher-dimensional or deeper tasks (or empirical follow-up) would strengthen the interpretation and relevance for real-world memory distortion.
(3) The identification of idiosyncratic biases appears to reflect individual variation in categorical mapping rather than semantic processing. The lack of conjunctive learning may simply reflect variability in assumed builder-target mappings, not a principled semantic effect.
Additional Comments:
(1) It is unclear whether this task primarily probes memory or reinforcement learning, since the graded reward feedback in the current design closely aligns with typical reinforcement learning paradigms.
(2) It may be unsurprising that the feature-based successor model fits best given task structure, so broader model comparisons are encouraged.
(3) Simulation-only work on higher dimensionality (lines 514-515) falls short; an empirical follow-up would greatly enhance the claims.
-
Reviewer #3 (Public review):
The article's main question is how humans handle spurious transitions between object features when learning a predictive model for decision-making. The authors conjecture that humans use semantic knowledge about plausible causal relations as an inductive bias to distinguish true from spurious links.
The authors simulate a successor feature (SF) model, demonstrating its susceptibility to suboptimal learning in the presence of spurious transitions caused by co-occurring but independent causal factors. This effect worsens with an increasing number of planning steps and higher co-occurrence rates. In a preregistered study (N=100), they show that humans are also affected by spurious transitions, but perform somewhat better when true transitions occur between features within the same semantic category. However, no evidence for the benefits of semantic congruency was found in test trials involving novel configurations, and attempts to model these biases within an SF framework remained inconclusive.
Strengths:
(1) The authors tackle an important question.
(2) Their simulations employ a simple yet powerful SF modeling framework, offering computational insights into the problem.
(3) The empirical study is preregistered, and the authors transparently report both positive and null findings.
(4) The behavioral benefit during learning in the congruent vs. incongruent condition is interesting.
Weaknesses:
(1) A major issue is that approximately one quarter of participants failed to learn, while another quarter appeared to use conjunctive or configural learning strategies. This raises questions about the appropriateness of the proposed feature-based learning framework for this task. Extensive prior research suggests that learning about multi-attribute objects is unlikely to involve independent feature learners (see, e.g., the classic discussion of configural vs. elemental learning in conditioning: Bush & Mosteller, 1951; Estes, 1950).
(2) A second concern is the lack of explicit acknowledgment and specification of the essential role of the co-occurrence of causal factors. With sufficient training, SF models can develop much stronger representations of reliable vs. spurious transitions, and simple mechanisms like forgetting or decay of weaker transitions would amplify this effect. This should be clarified from the outset, and the occurrence rates used in all tasks and simulations need to be clearly stated.
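The mechanism described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not the authors' code, and all parameter values are illustrative assumptions: a true cause A precedes outcome X on every trial, a spurious cause B co-occurs with A on only a fraction of trials, each feature learns its one-step successor weight independently, and a uniform per-trial decay preferentially erodes the spurious weight, which is refreshed less often.

```python
import numpy as np

# Toy illustration (assumed parameters, not the authors' model):
# feature A truly precedes outcome X on every trial; feature B
# co-occurs with A on co_rate of trials, creating a spurious B -> X
# association. Each feature's successor weight is learned with an
# independent delta rule, and a uniform decay is applied every trial.
rng = np.random.default_rng(0)
alpha, decay, co_rate = 0.1, 0.05, 0.3
M = np.zeros(2)  # M[0]: true A -> X weight, M[1]: spurious B -> X weight

for _ in range(2000):
    # A is always present; B is present on co_rate of trials.
    phi = np.array([1.0, float(rng.random() < co_rate)])
    # Per-feature update toward the observed outcome X (target = 1).
    M += alpha * phi * (1.0 - M)
    # Uniform forgetting: hits the less frequently refreshed B -> X
    # weight harder, since it is not replenished on B-absent trials.
    M *= 1.0 - decay

print(M)  # the true weight M[0] stabilizes well above the spurious M[1]
```

Without the decay term, both weights converge to the same asymptote, since X follows B whenever B is present; the decay is what separates the reliable from the spurious transition, consistent with the point above.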
(3) Another problem is that the modeling approach did not adequately capture participant behavior. While the authors demonstrate that the b parameter influences model behavior in anticipated ways, it remains unclear how a model could account for the observed congruency advantage during learning but not at test.
(4) Finally, the conceptualization of semantic biases is somewhat unclear. As I understand it, participants could rely on knowledge such as "the shape of a building robot's head determines the kind of head it will build," while the type of robot arm would not affect the head shape. However, this assumption seems counterintuitive: isn't it plausible that a versatile arm is needed to build certain types of robot heads?
-
Author response:
We would like to thank the reviewers for their valuable feedback on this research.
Based on the limitations identified across the reviews, we will make four major revisions to this work. We will: (1) run a multi-step experiment to better test the successor representation framework and the predictions made by our model simulations; (2) include a task to explicitly gauge participants’ judgements about the relatedness of the robot features; (3) test additional computational models that may better capture participants’ behavior; and (4) clarify and expand the definition of the inductive bias studied in this work.
(1) The reviews raised the concern that while we frame our results as being about predictive learning within the successor representation framework, we investigated participants’ behavior on a one-step task that is not well suited to characterizing this form of predictive representation. Moreover, our simulations make predictions about how learning may differ in relatively more naturalistic environments, yet we do not test human participants in these more complex learning contexts. Finally, we found several null results for effects that were predicted by our simulations. This may be because the benefits of the bias are predicted to be more limited in simpler learning environments, and our experiment may not have been sufficiently powered to detect these smaller effects. To address these limitations, we will run a new experiment with a multi-step causal structure, allowing us to better test the SR framework while more comprehensively investigating the predictions of the simulations and improving our power to detect effects that were null in the one-step experiment.
(2) We argued that the causal-bias parameter may capture idiosyncratic differences in participants’ semantic memory that had an ensuing effect on their learning. However, the reviews identified that we did not explicitly measure participants’ judgements about the relatedness of the robot features to verify that existing conceptual knowledge drove these individual differences. In the new experiment, we will therefore include a task to quantify participants’ individual judgements about the relatedness of the robot features.
(3) The reviews questioned the suitability of the feature-based model for explaining behavior in the task given that only a subset of participants were best fit by the model, and not all of the model’s behavioral predictions were observed in the human subjects experiment. The reviews suggested alternative models could more validly capture behavior. In the revision, we will therefore consider alternative models (e.g., model-based planning, successor features with decay on weak associations).
(4) The reviews requested some clarity around our conceptualization of the inductive bias studied in this work, and questioned whether the task sufficiently captured the richness of semantic knowledge that may be required for a “semantic bias.” We acknowledge that the term semantic bias may not be an accurate descriptor of the inductive bias we measured. Instead, a more general “conceptual bias” term may better capture how any hierarchical conceptual knowledge – semantic or otherwise – may drive the studied bias. We will clarify our terminology in the revision.
In addition to these major revisions, we will address more minor critiques and suggestions raised by individual reviewers.