Reasoning Over Pre-training: Evaluating LLM Performance and Augmentation in Women’s Health
Abstract
Recent advances in large language models (LLMs) show promise for clinical applications, but their performance in women's health remains underexamined [1]. We evaluated LLMs on 2,337 questions in obstetrics and gynaecology: 1,392 from the Royal College of Obstetricians and Gynaecologists Part 2 examination (MRCOG Part 2) [2], a UK-based test of advanced clinical decision-making, and 945 from MedQA [3], a dataset derived from the United States Medical Licensing Examination (USMLE). The best-performing model, OpenAI's o1-preview [4] enhanced with retrieval-augmented generation (RAG) [5,6], achieved 72.00% accuracy on MRCOG Part 2 and 92.30% on MedQA, exceeding prior benchmarks by 21.6% [1]. General-purpose reasoning models outperformed domain-specific fine-tuned models such as MED-LM [7]. We also analyse performance by clinical subdomain and find lower accuracy in areas such as fetal medicine and postpartum care. These findings highlight the importance of reasoning capability over domain-specific fine-tuning and demonstrate the value of augmentation methods such as RAG for improving accuracy and interpretability [8].
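To make the evaluation setup concrete, the sketch below shows one plausible shape for a RAG-augmented multiple-choice evaluation loop of the kind described above. It is a minimal illustration, not the authors' pipeline: the `retrieve_passages` and `query_llm` functions are hypothetical placeholders, since the abstract does not specify the retrieval corpus, prompt format, or model API used.

```python
# Minimal sketch of a RAG-augmented multiple-choice evaluation loop.
# NOTE: retrieve_passages() and query_llm() are hypothetical stand-ins;
# the actual retrieval index, prompts, and LLM interface are not
# described in this abstract.

from dataclasses import dataclass


@dataclass
class MCQ:
    stem: str                # question text
    options: dict[str, str]  # e.g. {"A": "...", "B": "..."}
    answer: str              # gold option key, e.g. "B"


def retrieve_passages(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the top-k reference snippets
    (e.g. clinical guideline passages) most relevant to the query."""
    raise NotImplementedError("plug in a retrieval index here")


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call: return the model's answer as text,
    expected to begin with a single option letter."""
    raise NotImplementedError("plug in an LLM API call here")


def evaluate(questions: list[MCQ]) -> float:
    """Score a question set: retrieved context is prepended to each
    question, and the model's first letter is compared to the gold key."""
    correct = 0
    for q in questions:
        context = "\n".join(retrieve_passages(q.stem))
        options = "\n".join(f"{key}. {text}" for key, text in sorted(q.options.items()))
        prompt = (
            f"Context:\n{context}\n\n"
            f"Question: {q.stem}\n{options}\n"
            "Answer with the single letter of the best option."
        )
        if query_llm(prompt).strip().upper().startswith(q.answer.upper()):
            correct += 1
    return correct / len(questions)
```

Under this framing, reported accuracies such as 72.00% on MRCOG Part 2 correspond to the fraction returned by `evaluate` over the respective question set; the abstract's comparison between RAG-augmented and unaugmented models amounts to running the same loop with and without the retrieved context.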