Bias in School-Based Risk Prediction: Challenges for Equitable Practice

Abstract

Artificial intelligence (AI)-based risk prediction tools, including early warning systems (EWS), have proliferated rapidly in PK-12 schools despite weak evidence of efficacy and mounting documentation of harm to students from marginalized communities. School psychologists are increasingly expected to manage, interpret, and act upon algorithmic outputs with little training in bias detection, and within institutional contexts that amplify the risk of bias. Drawing on confirmation bias, ecological systems theory, critical race theory, and intersectionality as theoretical frameworks, this paper examines the mechanisms through which bias enters and propagates within AI risk prediction systems: biased training data, black-box opacity, self-reinforcing feedback loops, and inadequate fairness validation. We further discuss the challenges these systems pose for school psychology practice, including gaps in algorithmic literacy, the profession’s lack of representation, and situational factors that heighten implicit bias in decision-making. We offer three evidence-informed recommendations: cultivating critical consciousness about algorithmic authority, identifying and interrupting vulnerable decision points in EWS use, and developing structural competency to reorient practitioners from individual risk profiles toward the systemic conditions that produce them. Together, these recommendations help position school psychologists as agents of equitable practice in an increasingly algorithmic educational environment.
