Breaking the chains of independence: A Bayesian uncertainty model of normative violations in human causal probabilistic reasoning
Abstract
Empirical research in causal and probabilistic reasoning has revealed systematic deviations from normative principles, often interpreted as biases. We present the Bayesian Uncertainty Model (BUM), a computational approach that explains these deviations through Bayesian Model Averaging (BMA). BUM posits that people do not reason with a single causal model but instead weigh multiple hypotheses, incorporating uncertainty into their probabilistic inferences. Our mathematical derivations demonstrate that this mixture process necessarily disrupts independence constraints, naturally generating well-known normative violations in causal reasoning, such as Markov violations with generative and independent causes and weak explaining-away effects. Additionally, BUM accounts for negative Markov violations and conservatism by assuming that individuals often consider an uninformative baseline model alongside the provided causal structure. This formulation provides an alternative to existing models of normative violations and challenges the notion that deviations from normative principles reflect irrationality or cognitive limitations. Instead, BUM suggests that human causal reasoning reflects an adaptive response to ambiguity, shaping causal probabilistic judgments in systematic and predictable ways.
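To illustrate the core claim that averaging over causal models disrupts independence constraints, the following is a minimal numeric sketch (not the paper's implementation; the model weights and causal strengths are illustrative). Two common-cause models, each individually satisfying the Markov condition E1 ⊥ E2 | C, are mixed by BMA; the resulting mixture makes E2 more probable once E1 is observed, producing a positive Markov violation.

```python
# Hypothetical sketch: Bayesian Model Averaging over two common-cause models
# (C -> E1, C -> E2). Each model is Markov-consistent on its own, but the
# mixture is not. Weights and strengths below are illustrative assumptions.

models = [
    {"weight": 0.5, "p_effect_given_c": 0.9},  # strong-cause hypothesis
    {"weight": 0.5, "p_effect_given_c": 0.3},  # weak-cause hypothesis
]

# Within each model k: P_k(E1, E2 | C) = P_k(E1 | C) * P_k(E2 | C)
p_e1_given_c = sum(m["weight"] * m["p_effect_given_c"] for m in models)
p_e1e2_given_c = sum(m["weight"] * m["p_effect_given_c"] ** 2 for m in models)

# The Markov condition would require P(E2 | C, E1) == P(E2 | C)
p_e2_given_c = p_e1_given_c                      # effects are symmetric here
p_e2_given_c_e1 = p_e1e2_given_c / p_e1_given_c  # additionally condition on E1

print(f"P(E2 | C)     = {p_e2_given_c:.3f}")     # 0.600
print(f"P(E2 | C, E1) = {p_e2_given_c_e1:.3f}")  # 0.750 -> Markov violation
```

The violation arises because the mixture of products differs from the product of mixtures whenever the hypotheses disagree about causal strength: observing E1 raises the posterior weight of the strong-cause hypothesis, which in turn raises the predicted probability of E2.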