When Crowds Fail: Predicting Failures in Collective Wisdom through Discourse Cues

Abstract

In today's increasingly interconnected world, accurate forecasting is critical. Although individual predictions are often distorted by cognitive biases, aggregating judgments—the wisdom of crowds—can improve accuracy. Group discussion may help through information sharing or hurt by introducing social pressures that reduce independence and diversity. Using data from a community forecasting site, we built an interpretable predictive model to examine how structural and linguistic-psychological features of discourse affect crowd accuracy. A model with 14 variables explained 28.6% of the variance in group prediction accuracy on a held-out test set. Higher comments-to-predictions ratios and more informal language (e.g., profanity, religious references, speech disfluencies) were associated with larger crowd errors. Emotional language also mattered: exclamation marks predicted increased error, whereas anxiety-related language and detachment showed modest associations with reduced error. Overall, discourse markers may provide early warnings of crowd prediction failures and inform interventions to calibrate collective intelligence and improve forecasting performance.
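The abstract's evaluation setup (an interpretable model over 14 discourse features, scored by variance explained on a held-out test set) can be sketched as an ordinary-least-squares fit. Everything below is illustrative: the synthetic data, feature count, and coefficients are assumptions standing in for the study's real discourse features and crowd-error outcomes.

```python
# Hedged sketch: fit an interpretable linear model on discourse features and
# report R^2 (variance explained) on a held-out test set. All data here are
# synthetic stand-ins, not the study's dataset.
import numpy as np

rng = np.random.default_rng(0)

# 14 hypothetical discourse features per forecasting thread, e.g.
# comments-to-predictions ratio, profanity rate, exclamation-mark count.
n_threads, n_features = 500, 14
X = rng.normal(size=(n_threads, n_features))
true_coefs = rng.normal(size=n_features)
# Target: crowd prediction error, partly driven by the features plus noise.
y = X @ true_coefs + rng.normal(scale=2.0, size=n_threads)

# Train/test split for held-out evaluation, as in the abstract.
split = int(0.8 * n_threads)
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = y[:split], y[split:]

# Ordinary least squares with an intercept column.
A_tr = np.column_stack([np.ones(len(X_tr)), X_tr])
coefs, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)

# Held-out R^2: the share of variance in crowd error the model explains.
pred = np.column_stack([np.ones(len(X_te)), X_te]) @ coefs
r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(round(r2, 3))
```

The fitted coefficients stay directly interpretable: each one estimates how a unit change in a discourse feature shifts crowd error, which is what makes a linear specification attractive for this kind of early-warning analysis.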