Benchmarking Large Language Models on Persian Surgical Subspecialty Board Examinations: A Comparative Study of ChatGPT-4o, ChatGPT-5, and Gemini 2.5 Flash
Abstract
This study evaluated the performance of three large language models, ChatGPT-4o, ChatGPT-5, and Gemini 2.5 Flash, on 532 Persian multiple-choice questions from the 2025 Iranian surgical subspecialty board examinations. Questions spanned five domains: Pediatric, Cardiovascular, Vascular and Endovascular, Thoracic, and Plastic & Reconstructive Surgery. Using standardized prompts, we assessed overall accuracy, variation across subspecialties and question types, and the effect of question length. ChatGPT-5 and Gemini 2.5 Flash achieved higher accuracy (73.3% and 73.9%, respectively) than ChatGPT-4o (68.2%). Agreement with the official answer key was substantial for Gemini 2.5 Flash (κ = 0.651) and ChatGPT-5 (κ = 0.642), and moderate to substantial for ChatGPT-4o (κ = 0.575). Model performance was stable across subspecialties, but all three models showed lower accuracy on surgical technique questions than on clinical scenarios or basic science items. Question length did not affect the accuracy of ChatGPT-5 or Gemini 2.5 Flash, whereas longer stems reduced ChatGPT-4o's performance. These findings indicate that newer LLMs provide measurable improvements in surgical question answering, though persistent limitations in procedural reasoning suggest the need for careful integration and further multimodal development.
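The abstract reports chance-corrected agreement with the official key as Cohen's kappa. As a minimal sketch of how such a statistic is computed from paired answer strings (the answer lists below are toy examples, not the study's data):

```python
from collections import Counter

def cohens_kappa(model_answers, answer_key):
    """Cohen's kappa: observed agreement between a model's answers
    and the official key, corrected for chance agreement derived
    from the two marginal answer distributions."""
    assert len(model_answers) == len(answer_key)
    n = len(answer_key)
    # Observed agreement: fraction of items answered identically.
    p_o = sum(m == k for m, k in zip(model_answers, answer_key)) / n
    # Expected chance agreement from the marginal option frequencies.
    model_counts = Counter(model_answers)
    key_counts = Counter(answer_key)
    options = set(model_counts) | set(key_counts)
    p_e = sum(model_counts[c] * key_counts[c] for c in options) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical A-D answer sequences for illustration only:
model = list("ABCDABCDAB")
key   = list("ABCDABCDCC")
print(round(cohens_kappa(model, key), 3))  # → 0.737
```

With 8 of 10 items matching, raw accuracy is 0.80 but kappa falls to about 0.74 once chance agreement on four options is discounted, which is why the paper's κ values sit below its accuracy percentages.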