A Comment on “Learning from Aggregated Opinion”
Abstract
Oktar, Lombrozo, and Griffiths (2024) conducted a series of three online experiments to investigate how people form and update their beliefs when informed by aggregated opinions. They compared the performance of three computational models—a Bayesian model and two heuristic models (UPCO: Updating on the Credences of Others; and Competence)—in predicting participants' observed belief updates. Their findings revealed that the Bayesian model was the best predictor of belief updates overall, though the behavior of a substantial percentage of participants was better captured by one of the two other models. In this work, I first assess the computational reproducibility of their study, finding that their reported results and figures reproduce almost perfectly. Their analyses closely follow the preregistration, with one exception in Experiment 1 that did not affect the results. Further testing employed non-linear mixed models to address analytical limitations of Study 3. Results revealed that the choice of performance measure influences which model appears superior: Bayesian models performed best in terms of RMSE and AIC, while UPCO models performed best in terms of MAE. Moreover, substantial variation is observed in how subjects weight their prior judgments. Finally, I discuss a Bayesian model that exponentially increases the weight of the prior judgment as it becomes more polarized and as the individual becomes more confident in it.
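The final point can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a log-odds formulation of the Bayesian update, a naive-Bayes pooling of others' credences, and a hypothetical weighting function `gamma` in which the prior's weight grows exponentially with its polarization (used here as a stand-in for confidence); the parameters `gamma0` and `k` are illustrative.

```python
import math

def logit(p):
    """Convert a credence in (0, 1) to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a credence."""
    return 1 / (1 + math.exp(-x))

def update(prior, others, gamma0=1.0, k=2.0):
    """Hypothetical weighted-Bayesian belief update in log-odds space.

    prior:  the subject's prior credence in (0, 1)
    others: list of peers' credences (the aggregated opinion)
    gamma0, k: illustrative parameters of the weighting function
    """
    # Exponential weighting: the prior counts more as it moves away from 0.5
    gamma = gamma0 * math.exp(k * abs(prior - 0.5))
    # Naive-Bayes pooling: sum the peers' log-odds as independent evidence
    evidence = sum(logit(p) for p in others)
    return sigmoid(gamma * logit(prior) + evidence)
```

Under this sketch, a subject with a near-neutral prior (e.g. 0.5) is moved almost entirely by the aggregated opinion, while a subject with a polarized prior discounts the same evidence, which is the qualitative pattern the discussed model is meant to capture.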