Strengths and Limitations of Using ChatGPT: A Preliminary Examination of Generative AI in Medical Education

Abstract

Introduction

Objective Structured Clinical Examinations (OSCEs) are critical tools in medical education, designed to evaluate clinical competence by engaging students in tasks such as patient history-taking and physical examinations across multiple stations. Examiners assess student performance using structured rubrics to ensure objective evaluation. The COVID-19 pandemic spurred the adoption of technology in OSCEs, introducing virtual formats to maintain social distancing. While these advancements highlighted the need for more interactive and realistic simulations, AI-driven tools like ChatGPT offer potential solutions. With its advanced conversational capabilities, ChatGPT can simulate patient interactions and provide real-time feedback, supporting active learning, experiential engagement, and cognitive load management. Rooted in established learning theories, ChatGPT represents a promising avenue for enhancing OSCEs by improving the evaluation of clinical competence and enriching training experiences.

Method

This pilot study engaged 20 faculty members who design and evaluate OSCE scenarios for medical students. Participants utilized ChatGPT in three simulated clinical cases that mirrored traditional OSCE formats. Following the simulations, feedback was collected via a survey to evaluate ChatGPT’s effectiveness and usability.

Results

The study found that participants appreciated AI as a standardized patient for its responsiveness, clarity, and ability to enhance clinical reasoning while reducing intimidation. However, key challenges included the lack of non-verbal communication, limited empathy, and an inability to perform physical examinations. Technical issues and inconsistent responses also affected the experience. Although only 20% of participants expressed interest in using AI in future OSCEs, participants recommended a hybrid model combining AI with real standardized patients. This approach would leverage AI's strengths while ensuring that essential communication, empathy, and practical skills are effectively developed in clinical education.

Conclusion

Training ChatGPT to simulate diverse patient scenarios within OSCEs represents a significant innovation in medical education. By offering realistic simulations and precise feedback, ChatGPT has the potential to enhance both assessment accuracy and student preparedness for real-world clinical settings.