Collective problem decomposition improves the wisdom of deliberative crowds
Abstract
Understanding when and why social interaction improves human judgment is a central question in the behavioural sciences. We examine whether collective accuracy improves when groups break complex estimation problems into simpler components and generate approximate intermediate estimates, a reasoning strategy we call Collective Fermi Estimation. Across three experiments analysing more than 1,000 online group deliberations in text chatrooms, we study the causal effects and linguistic signatures of this strategy. In Study 1 (N = 500), spontaneous use of problem decomposition, as rated by human annotators, predicts lower collective estimation error. Study 2 (N = 240) provides causal evidence: groups instructed to apply problem decomposition outperform groups instructed to combine their initial guesses. Study 3 (N = 160) shows that the benefits of the method are larger when it is applied collectively than individually. We also introduce a fully automated approach, based on large language models (LLMs), for detecting Fermi-style reasoning in conversations. Using five state-of-the-art LLMs, we obtain ratings that correlate strongly with human annotations and predict collective accuracy across all studies. These findings identify problem decomposition as a key mechanism behind the wisdom of deliberative crowds and provide tools to detect and promote it.