Developmentally Aligned AI Modeling of Mathematics Learning Disability: Behavioral Validation of Neural Learning-Rate Constraints
Abstract
Developmentally aligned artificial intelligence (AI) emphasizes calibrating AI systems to the distinctive cognitive and neurodevelopmental constraints of children rather than importing assumptions derived from adult cognition. Biologically grounded "digital twin" models exemplify this approach. Personalized deep neural network simulations of mathematics learning disability (MLD) indicate that elevated neural gain (hyperexcitability) slows learning while preserving the potential to reach typical accuracy given sufficient training, requiring approximately 2.7 times more training iterations. This model predicts that behavioral dose–response relationships should be conditional: additional instructional hours should matter most for learners at risk for MLD and for outcomes aligned with the practiced skills. These predictions were tested by combining evidence from (a) a reanalysis of an intensive mathematics intervention database (k = 171 effect sizes, 24 studies), (b) meta-analytic criterion-validity evidence for mathematics curriculum-based measurement (k = 330), and (c) a randomized manipulation of intervention session frequency holding total minutes constant (N = 101). In Dataset A, dosage–effect size correlations were significant for at-risk samples (r = .38) but not for mixed samples (r = .05), and were strongest for at-risk samples with skill-aligned outcomes (r = .52; r = .40 excluding one extreme outlier). Experimental evidence converged: higher session frequency improved a proximal computation measure but not distal standardized outcomes. Together, these results support a developmentally aligned learning-rate account of MLD and illustrate how child-calibrated digital twins can generate precise, testable predictions for intervention science.
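The conditional dose–response test summarized for Dataset A amounts to a moderator-stratified correlation analysis: dosage–effect size correlations computed separately by risk status and by outcome alignment. The sketch below is a minimal illustration of that analysis structure, not the authors' code; the file name and column names (dosage_hours, effect_size, sample_type, outcome_alignment) are hypothetical assumptions about how such an effect-size database might be organized.

```python
# Minimal sketch (assumed schema, not the authors' analysis code) of a
# moderator-stratified dosage-effect correlation, as described for Dataset A.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file: one row per effect size, with dosage and moderator columns.
df = pd.read_csv("intervention_effect_sizes.csv")

def dosage_effect_r(subset):
    """Pearson correlation between instructional dosage and effect size."""
    return pearsonr(subset["dosage_hours"], subset["effect_size"])

# Conditional dose-response prediction: dosage should matter most for at-risk
# samples, and most of all when outcomes are aligned with the practiced skills.
strata = {
    "at-risk samples": df[df["sample_type"] == "at_risk"],
    "mixed samples": df[df["sample_type"] == "mixed"],
    "at-risk, skill-aligned outcomes": df[
        (df["sample_type"] == "at_risk") & (df["outcome_alignment"] == "aligned")
    ],
}

for label, subset in strata.items():
    r, p = dosage_effect_r(subset)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}, k = {len(subset)}")
```

Under this framing, the model-derived prediction is simply that the correlation should be reliably positive in the at-risk strata (especially with aligned outcomes) and near zero in mixed samples, which is the pattern the abstract reports.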