Comparative accuracy of ChatGPT-o1, DeepSeek R1, and Gemini 2.0 in answering general primary care questions

Abstract

Objectives

To evaluate and compare the accuracy and reliability of large language models (LLMs) ChatGPT-o1, DeepSeek R1, and Gemini 2.0 in answering general primary care medical questions, assessing their reasoning approaches and potential applications in medical education and clinical decision-making.

Design

A cross-sectional study using an automated evaluation process in which the three LLMs answered a standardized set of multiple-choice medical questions.

Setting

The models were presented with the test questions between February 1 and February 15, 2025. For each model, each question was posed in a new chat session.

Questions were presented in Italian, with no additional instructions. Responses were compared to official test solutions.

Participants

Three LLMs were evaluated: ChatGPT-o1 (OpenAI), DeepSeek R1 (DeepSeek), and Gemini 2.0 Flash Thinking Experimental (Google). No human subjects or patient data were used.

Intervention

Each model received the same 100 multiple-choice questions and provided a single response per question without follow-up interactions. Scoring awarded +1 for each correct answer and 0 for each incorrect answer.

Main Outcome Measures

Accuracy was measured as the percentage of correct responses. Inter-model agreement was assessed with Cohen’s Kappa, and the statistical significance of accuracy differences was evaluated with McNemar’s test.
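
For illustration, the following is a minimal sketch (not the authors' analysis code) of how accuracy, Cohen's Kappa, and McNemar's test could be computed in Python from per-question correctness vectors; the variable names and the randomly generated example data are assumptions, not study data.

    # Hypothetical per-question correctness (1 = correct, 0 = incorrect) for two models.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.contingency_tables import mcnemar

    rng = np.random.default_rng(0)
    model_a = rng.integers(0, 2, size=100)  # e.g., ChatGPT-o1 (illustrative values)
    model_b = rng.integers(0, 2, size=100)  # e.g., DeepSeek R1 (illustrative values)

    # Accuracy: percentage of correct responses.
    accuracy_a = model_a.mean() * 100

    # Inter-model agreement on per-question correctness.
    kappa = cohen_kappa_score(model_a, model_b)

    # McNemar's test on the 2x2 table of paired correct/incorrect outcomes.
    table = np.array([
        [np.sum((model_a == 1) & (model_b == 1)), np.sum((model_a == 1) & (model_b == 0))],
        [np.sum((model_a == 0) & (model_b == 1)), np.sum((model_a == 0) & (model_b == 0))],
    ])
    result = mcnemar(table, exact=True)
    print(f"Accuracy A: {accuracy_a:.1f}%  Kappa: {kappa:.3f}  McNemar p: {result.pvalue:.3f}")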

Results

ChatGPT-o1 achieved the highest accuracy (98%), followed by Gemini 2.0 (96%) and DeepSeek R1 (95%). McNemar’s test found no significant differences among the three models (p > 0.05). Cohen’s Kappa indicated low agreement (ChatGPT-o1 vs. DeepSeek R1 = 0.2647; ChatGPT-o1 vs. Gemini 2.0 = 0.315), suggesting differences in the models’ reasoning approaches.

Conclusion

The LLMs exhibited high accuracy in answering general primary care medical questions, highlighting their potential for medical education and clinical decision support in primary care. However, inconsistencies between models suggest that a multi-model or AI-assisted approach is preferable to relying on a single AI system. Future research should explore performance on real clinical cases and across different medical specialties.
