Textbook-Level Medical Knowledge in Large Language Models: A Comparative Evaluation Using the Japanese National Medical Examination
Abstract
Study aims and objectives
This study aimed to evaluate the performance of four reasoning-enhanced large language models (LLMs)—GPT-5, Grok-4, Claude Opus 4.1, and Gemini 2.5 Pro—on the Japanese National Medical Examination (JNME).
Methods
We evaluated LLM performance on the 2019 and 2025 administrations of the JNME (n = 793 questions). Questions were presented to each model with chain-of-thought prompting. Accuracy was assessed overall as well as by question type, content domain, and difficulty. Incorrect responses were qualitatively reviewed by a licensed physician and a medical student.
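As a rough illustration of this kind of evaluation pipeline, the sketch below tallies per-model accuracy overall and by question type. It is a minimal sketch only: the `query_model` helper, the `COT_SUFFIX` prompt wording, and the question record schema are hypothetical placeholders, not the authors' actual harness or prompts.

```python
from collections import defaultdict

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to one of the four models.

    Replace with the relevant vendor SDK or manual entry, as in the study.
    """
    raise NotImplementedError("plug in the appropriate API client here")

# Assumed chain-of-thought instruction appended to each question (illustrative).
COT_SUFFIX = "Think step by step, then state the final answer choice(s)."

def evaluate(model: str, questions: list[dict]) -> dict[str, float]:
    """questions: [{'text', 'answer', 'qtype'}, ...] (hypothetical schema)."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for q in questions:
        response = query_model(model, f"{q['text']}\n\n{COT_SUFFIX}")
        is_correct = q["answer"] in response  # simplistic answer extraction
        # Tally both the overall accuracy and the per-question-type breakdown.
        for key in ("overall", q["qtype"]):
            total[key] += 1
            correct[key] += is_correct
    return {k: correct[k] / total[k] for k in total}
```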
Results
From highest to lowest, the overall accuracies of the four LLMs were 97.2% for Gemini 2.5 Pro, 96.3% for GPT-5, 96.1% for Claude Opus 4.1, and 95.6% for Grok-4, with no statistically significant pairwise differences. All four LLMs exceeded the 95% threshold generally regarded as sufficient to serve as reliable medical knowledge sources. Question characteristics (e.g., image-based format, clinical orientation, and item difficulty) still influenced LLM performance, but the performance gaps were much smaller than in earlier generations of LLMs. Notably, Gemini 2.5 Pro consistently achieved the highest performance, including 96.1% on image-based questions and 97.0% on clinical questions. Common error patterns included selecting more answer options than the question requested and misinterpreting laterality when reading X-ray or computed tomography (CT) images.
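The abstract does not name the statistical test behind "no significant pairwise differences." One plausible reconstruction, shown below, is a set of pairwise two-proportion z-tests with a Bonferroni correction; the correct-answer counts are back-calculated from the reported accuracies and may differ from the authors' data by rounding, and the authors may instead have used a paired test such as McNemar's.

```python
from itertools import combinations

from statsmodels.stats.proportion import proportions_ztest

N = 793  # total questions across the 2019 and 2025 JNME
# Correct-answer counts back-calculated from the reported accuracies
# (approximations, not the authors' raw data).
correct = {
    "Gemini 2.5 Pro": round(0.972 * N),
    "GPT-5": round(0.963 * N),
    "Claude Opus 4.1": round(0.961 * N),
    "Grok-4": round(0.956 * N),
}

pairs = list(combinations(correct, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction over the 6 comparisons
for a, b in pairs:
    stat, p = proportions_ztest([correct[a], correct[b]], [N, N])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: z = {stat:.2f}, p = {p:.3f} ({verdict})")
```

Run as-is, every pairwise p-value is well above the corrected threshold, consistent with the reported finding of no significant differences among the four models.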
Conclusions
Advanced LLMs released in 2025 achieved textbook-level accuracy on the JNME, surpassing the 95% benchmark for reliable knowledge sources. Gemini 2.5 Pro achieved the highest accuracy across all question types and demonstrated the greatest stability, while Grok-4 showed more variability. These findings mark a milestone: LLMs now perform at a level sufficient to serve as educational resources and clinical decision-support tools.
Statements and Declarations
This work was supported by JSPS KAKENHI Grant Number 24KJ0830.