Evaluation of Artificial Intelligence for the Anatomy Content in Medical College

Abstract

The incorporation of artificial intelligence into medical pedagogy necessitates thorough appraisal, especially within foundational disciplines such as anatomy, where accurate understanding is paramount. The efficacy of advanced AI systems, specifically large language models such as ChatGPT, in the acquisition and retention of specialized medical knowledge remains an active area of research and evaluation. This cross-sectional study was undertaken in August 2025 to evaluate the proficiency of ChatGPT in answering multiple-choice questions in the basic medical sciences, with particular emphasis on anatomy. A compilation of 124 carefully selected multiple-choice questions from the mid-term and final examinations administered to first-year medical students was used: Anatomy (28), Histology (23), Microbiology (21), Pathology (33), and Physiology (19). Strict criteria were applied to ensure that questions were unambiguously framed as single-best-answer items. The paper for each discipline was submitted to ChatGPT, and the initial response was considered definitive. Performance was scored on a binary scale and analyzed descriptively. Results revealed high accuracy: ChatGPT answered 96% of Anatomy questions correctly, 100% of Histology and Physiology questions, 97% of Pathology, and 95% of Microbiology, for an overall accuracy of 98%. These results indicate a substantial capacity for ChatGPT to serve as a valuable pedagogical resource for reinforcing knowledge and facilitating self-evaluation.