Epistemic Trust in Generative AI and the Reshaping of Professional Judgment:
Abstract
As generative artificial intelligence (GenAI) becomes deeply embedded in university language teaching, EFL teachers increasingly face a novel epistemic challenge: whether and how to trust AI-generated knowledge in their professional practice. This study examines the psychological and professional consequences of epistemic trust in GenAI (ETGAI) among 420 university EFL teachers in China, proposing a serial mediation model in which professional judgment reshaping (PJR) and moral stress (MS) sequentially transmit the effects of ETGAI on teacher professional development (TPD). Drawing on epistemic trust theory, Conservation of Resources (COR) theory, and professional agency frameworks, a covariance-based structural equation model (CB-SEM) was estimated in Mplus 8.3 with 5,000 bootstrap replications. Results supported all seven hypotheses: ETGAI positively predicted PJR (β = .476) and TPD (β = .346), PJR positively predicted MS (β = .490), and MS negatively predicted TPD (β = −.405). The serial indirect effect (ETGAI → PJR → MS → TPD) was significant (β = −.095, 95% BC CI [−.133, −.061]), revealing a moral stress pathway through which epistemic trust in GenAI, when mediated by judgment reshaping, ultimately undermines professional development. Theoretical and practical implications for AI-integrated EFL teacher education are discussed.
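For readers unfamiliar with serial mediation, the indirect effect reported above is the product of the three path coefficients in the chain X → M1 → M2 → Y, and its confidence interval is typically obtained by bootstrapping. The sketch below reproduces that logic using the standardized estimates from the abstract; the bootstrap portion runs on simulated data (not the study's data) and uses a simple percentile interval rather than the bias-corrected interval reported in the paper.

```python
import numpy as np

# Standardized path estimates reported in the abstract
a = 0.476   # ETGAI -> PJR
b = 0.490   # PJR -> MS
c = -0.405  # MS -> TPD

# The serial indirect effect is the product of the chained paths.
serial_indirect = a * b * c
print(round(serial_indirect, 3))  # close to the reported -.095 (differs slightly
                                  # because the published coefficients are rounded)

# Illustrative percentile bootstrap on simulated data (n = 420, as in the study).
rng = np.random.default_rng(0)
n = 420
x = rng.standard_normal(n)
m1 = a * x + rng.standard_normal(n) * np.sqrt(1 - a**2)
m2 = b * m1 + rng.standard_normal(n) * np.sqrt(1 - b**2)
y = c * m2 + rng.standard_normal(n) * np.sqrt(1 - c**2)

def indirect(idx):
    # Re-estimate each path by simple regression on the bootstrap resample
    # and multiply the slopes.
    a_hat = np.polyfit(x[idx], m1[idx], 1)[0]
    b_hat = np.polyfit(m1[idx], m2[idx], 1)[0]
    c_hat = np.polyfit(m2[idx], y[idx], 1)[0]
    return a_hat * b_hat * c_hat

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile CI: [{lo:.3f}, {hi:.3f}]")
```

An interval that excludes zero, as in the study's [−.133, −.061], indicates a significant indirect effect; the bias-corrected (BC) variant adjusts the percentile cutoffs for skew in the bootstrap distribution.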