Humans neglect complexity in predictive model selection
Abstract
People often face competing predictive models, such as different weather forecasts or music recommendation systems. How do they evaluate which model is better? Past research suggests that people follow Occam's razor, balancing fit and simplicity, but little is known about whether the same principle describes how people select predictive models. In a series of experiments, we gave participants choices between predictive models, allowing them to see the underlying data used to fit the models. Participants systematically neglected model complexity relative to the statistically optimal benchmark, often preferring models that overfit the data. While they partially compensated for complexity neglect by changing their decision thresholds, this strategy failed to appropriately account for the fact that simpler, misspecified models frequently outperform more complex models under noise and limited data. These findings challenge the view that simplicity is a general cognitive preference. When it comes to prediction, people appear to prefer a good fit.
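The abstract's key statistical point — that simpler, misspecified models frequently out-predict more complex models under noise and limited data — can be demonstrated with a minimal simulation. The sketch below is illustrative only (it is not the paper's experimental setup): it fits a linear model and a degree-9 polynomial to a small noisy sample from a linear process, then compares held-out prediction error.

```python
# Illustrative sketch (not from the paper): a simple linear model can
# out-predict an overfit degree-9 polynomial on noisy, limited data.
import numpy as np

rng = np.random.default_rng(0)

def generate(n):
    # True data-generating process: linear trend plus Gaussian noise.
    x = rng.uniform(-1, 1, n)
    y = 2.0 * x + rng.normal(0, 0.5, n)
    return x, y

x_train, y_train = generate(10)    # limited training data
x_test, y_test = generate(1000)    # held-out data for prediction

simple_fit = np.polyfit(x_train, y_train, 1)    # linear fit
complex_fit = np.polyfit(x_train, y_train, 9)   # overfits the 10 noisy points

simple_mse = np.mean((np.polyval(simple_fit, x_test) - y_test) ** 2)
complex_mse = np.mean((np.polyval(complex_fit, x_test) - y_test) ** 2)

print(f"simple test MSE:  {simple_mse:.3f}")
print(f"complex test MSE: {complex_mse:.3f}")
```

The complex model fits the training points almost perfectly yet generalizes worse, which is exactly the trap a "prefer the better fit" strategy walks into.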