Training and Implementation Effect of AI-Enabled Clinical Reasoning Cultivation in General Practice


Abstract

Background: General practice clinical reasoning is the core competency underpinning general practitioners' provision of comprehensive, continuous, and coordinated primary care. Traditional training models for this competency face critical limitations, including a scarcity of typical general practice cases, inadequate discipline-specific training modules, and an uneven distribution of qualified teaching faculty. Artificial intelligence (AI) has emerged as a transformative approach to address these gaps and drive the intelligent evolution of clinical reasoning education in general practice.

Objective: This study aimed to develop and validate a systematic AI-integrated training program tailored to the core characteristics of general practice, and to evaluate its implementation effectiveness and acceptance among clinical teachers and standardized training residents in general practice education.

Methods: A Problem-Design-Implementation-Evaluation-Reflection framework was adopted. First, a cross-sectional questionnaire survey of 122 general practice clinical teachers and 164 standardized training residents was conducted to diagnose the current status of AI application in general practice clinical reasoning training. Based on the survey results, an AI-integrated training program with distinct general practice features was designed, comprising four core modules: AI-enabled general practice case deduction, real-time feedback and quantitative assessment, personalized adaptive learning pathways, and faculty-led general practice discussion and guidance. A parallel-group randomized controlled trial (RCT) was then conducted: 90 eligible general practice standardized training residents were randomly assigned (1:1) to an experimental group (n = 45) and a control group (n = 45). The experimental group received a 12-week intervention with the AI-integrated training program, which included an AI-based clinical reasoning platform, general practice-specific cases, comorbidity management training, family assessment simulation, and community emergency response drills. The control group received conventional general practice clinical reasoning training, including didactic lectures, paper-based case discussions, and on-site community clerkships. Outcome measures included a general practice theoretical knowledge assessment, the General Practice Diagnostic Reasoning Scale (G-DRS), a chronic disease management reasoning score, a referral decision-making ability score, and a self-designed learning satisfaction questionnaire, all administered before and after the 12-week intervention.

Results: The baseline survey revealed high AI tool usage among participants (93.4% of teachers and 87.2% of residents) and a predominantly positive attitude toward AI-enabled clinical reasoning training in general practice. However, only 33.6% of respondents believed that existing AI tools were well aligned with the specific needs of general practice reasoning cultivation. Key concerns raised by both teachers and residents included the authenticity of general practice-specific cases (61.5%), the simulation of primary care referral scenarios (54.9%), the risk of rigid clinical reasoning due to over-reliance on AI (79.9%), and the lack of humanistic dimensions in AI-generated feedback (56.6%). After the 12-week intervention, the experimental group scored significantly higher than the control group on all primary and secondary outcome measures: G-DRS (27.9 ± 3.5 vs 22.5 ± 4.1, P < 0.001), general practice theoretical knowledge assessment (84.7 ± 9.8 vs 79.3 ± 12.5, P = 0.025), chronic disease management reasoning (86.2 ± 8.3 vs 80.1 ± 9.6, P = 0.002), and referral decision-making ability (85.7 ± 7.9 vs 79.3 ± 8.8, P < 0.001). The overall learning satisfaction rate in the experimental group (88.9%) was also significantly higher than in the control group (71.1%; χ² = 4.44, P = 0.035).

Conclusion: General practice educators and trainees hold an open and receptive attitude toward AI-enabled clinical reasoning cultivation, with key priorities placed on case authenticity, scenario simulation fidelity, and humanistic feedback integration. The AI-integrated training program developed in this study, incorporating AI-enabled case deduction, real-time feedback, personalized learning pathways, and faculty-led guidance, effectively enhances the clinical reasoning competencies of general practice standardized training residents. The program provides replicable, scalable empirical evidence for the intelligent development of general practice education in China and offers a practical pathway to address the limitations of traditional training models.
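The satisfaction comparison in the Results can be checked arithmetically. A minimal sketch, assuming the counts inferred from the reported percentages and group sizes (88.9% of 45 ≈ 40 satisfied in the experimental group, 71.1% of 45 ≈ 32 in the control group) and a Pearson chi-square test without continuity correction:

```python
# Sketch: reproduce the reported chi-square statistic for overall learning
# satisfaction. Counts (40/45 vs 32/45) are inferred from the reported
# percentages (88.9% vs 71.1%) and group sizes; this is an assumption,
# not data taken from the study itself.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: experimental, control; columns: satisfied, not satisfied
chi2 = chi_square_2x2(40, 5, 32, 13)
print(round(chi2, 2))  # ≈ 4.44, matching the reported statistic
```

Under these assumed counts the statistic comes out to 4.44, consistent with the reported χ² = 4.44 (P = 0.035 at 1 degree of freedom).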
