From inner diversity to outer crowds: How instructions reshape error structure before aggregation

Abstract

Diversity is widely regarded as a core condition for collective intelligence, yet its effects in estimation tasks are often conditional and mechanistically unclear. Here we offer a descriptive reanalysis and conceptual synthesis of inner-crowd datasets, most drawn from our earlier studies, to clarify how instruction-induced second estimates change crowd error before aggregation. Rather than presenting a new experimental discovery, the paper reinterprets existing data through a common lens. Across datasets, we compute item-wise crowd mean squared error (MSE) for Round 1 (Own) and Round 2 (instruction-induced alternative) estimates, and quantify the item-wise change in error (ΔMSE) to describe error redistribution across questions. We focus directly on crowd error, bias summaries, dispersion-related movement, and item-level redistribution; possible changes in dependence are discussed more cautiously because these datasets do not identify them cleanly. Estimating from the perspective of “people in general” improves collective accuracy in one 20-item set but not in a related 8-item set, showing strong item-pool dependence. A repeated-estimation baseline yields near-zero net improvement, indicating that second-try effects alone are insufficient. Dialectical bootstrapping worsens mean crowd accuracy, consistent with bias deterioration. A disagreeing-perspective instruction increases item-wise movement and dispersion but does not improve crowd accuracy on average, illustrating redistribution without net gain. Finally, in a limited pseudo-replication of cognitive-process diversity claims, mixed-instruction crowds in these datasets do not outperform the best single-instruction baseline across group sizes. The contribution is therefore a mechanistic re-interpretation of inner-crowd data: instruction-induced transformations often redistribute error across items, and collective gains depend on whether they add useful signal without worsening bias.
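The item-wise quantities described above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis code: the array shapes, the choice of the crowd mean as the aggregate, and the simulated data are all assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical data: rows = participants, columns = items.
rng = np.random.default_rng(0)
truth = np.array([10.0, 50.0, 100.0])             # assumed true answers per item
round1 = truth + rng.normal(0, 5, size=(20, 3))   # Round 1 ("Own") estimates
round2 = truth + rng.normal(2, 5, size=(20, 3))   # Round 2 (instruction-induced)

def crowd_mse(estimates, truth):
    """Item-wise squared error of the crowd mean (aggregation by averaging)."""
    crowd_mean = estimates.mean(axis=0)
    return (crowd_mean - truth) ** 2

mse1 = crowd_mse(round1, truth)
mse2 = crowd_mse(round2, truth)
delta_mse = mse2 - mse1  # ΔMSE > 0: Round 2 worsens crowd error on that item
```

Summing or averaging `delta_mse` over items gives the net change in crowd error, while its spread across items captures the redistribution pattern the abstract refers to.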
