Practice Structure Predicts Skill Growth in Online Chess: A Behavioral Modeling Approach
Abstract
Skill acquisition is central to developing expertise, yet the behavioral mechanisms that separate more successful learners from less successful ones remain poorly understood. Using a large naturalistic dataset of about one million online chess games played by approximately 820 individuals over three years (2013–2015), we built an interpretable machine learning model to classify learners based only on behavioral features. Learners were labeled as “fast learners” or “not fast learners” based on normalized monthly Elo progression, adjusted for both starting rating and the increasing difficulty of improving at higher levels. We engineered time-sensitive features across four behavioral dimensions: practice structure, challenge level, strategic exploration (measured via move-sequence entropy), and tactical efficiency (the number of rounds needed to reach a 70% win probability in games eventually won). A logistic regression model trained on the five strongest predictors (optimal challenge steady magnitude, optimal challenge late slope, entropy steady magnitude, optimal challenge mean, and tactical efficiency mean) achieved an F1 score of 0.68 and an AUC of 0.78. Coefficients showed that average tactical efficiency was a strong predictor of fast learning, whereas the role of challenge-level features was less clear. To explore this, we fitted a linear regression with average tactical efficiency (as a proxy for expertise) as the dependent variable. This model explained 53% of the variance (R² = 0.53, RMSE = 0.05) and revealed optimal challenge as the strongest predictor. These results suggest that well-calibrated challenge levels are key to differences in chess performance.
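The classification step described above can be illustrated with a minimal sketch: a logistic regression fitted on the five named behavioral features and evaluated with F1 and AUC. The column names, input file, and train/test split below are hypothetical assumptions for illustration only; they are not taken from the authors' code or data.

```python
# Illustrative sketch of the classification step: logistic regression on five
# engineered behavioral features, evaluated with F1 and AUC.
# Column names, input file, and split are hypothetical, not the authors' setup.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

FEATURES = [
    "optimal_challenge_steady_magnitude",
    "optimal_challenge_late_slope",
    "entropy_steady_magnitude",
    "optimal_challenge_mean",
    "tactical_efficiency_mean",
]

df = pd.read_csv("learner_features.csv")   # hypothetical file: one row per player
X, y = df[FEATURES], df["fast_learner"]    # y: 1 = fast learner, 0 = not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

pred = model.predict(X_test)               # class labels for F1
prob = model.predict_proba(X_test)[:, 1]   # class probabilities for AUC
print("F1: ", f1_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, prob))
```

The inspection of feature importance mentioned in the abstract would then correspond to examining `model.coef_`, keeping in mind that coefficients are only directly comparable if the features are standardized beforehand.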