Physician Evaluations of Large Language Model-Generated Responses to Medical Questions by Region and Years in Practice: A Preliminary Study



Abstract

Background

Large language models (LLMs) have demonstrated the ability to generate clinically accurate responses to patient questions, in some cases outperforming physicians. However, little is known about how physician evaluations of such responses vary across geographic regions and by years in clinical practice.

Objective

This study builds on prior work by asking an international sample of physicians to compare physician-authored responses to patient questions with responses generated by two general-purpose LLMs. Participants were asked to rank the responses on accuracy and responsiveness.

Methods

We conducted a survey to assess physician preferences for AI- and human-generated responses to patient questions from the r/AskDocs subreddit. Participants reviewed anonymized answers from ChatGPT-4.0, Meta.AI, and a verified physician, ranking each from best (1) to worst (3). We summarized respondent characteristics descriptively. The primary outcome was the mean rank of each response type. Sensitivity analyses included pairwise win proportions and full rank distribution visualizations.
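To illustrate the summary measures described above, the following is a minimal sketch (not the authors' analysis code) of how the primary outcome (mean rank per response type) and a pairwise win proportion could be computed from long-format ranking data; the column names and example records are hypothetical.

```python
import pandas as pd

# One row per (respondent, response source); rank 1 = best, 3 = worst.
# Hypothetical example records for illustration only.
ranks = pd.DataFrame({
    "respondent": [1, 1, 1, 2, 2, 2],
    "source":     ["ChatGPT-4.0", "Meta.AI", "Physician"] * 2,
    "rank":       [1, 2, 3, 2, 1, 3],
})

# Primary outcome: mean rank per response type (lower is better).
mean_rank = ranks.groupby("source")["rank"].mean()
print(mean_rank)

# Sensitivity analysis: pairwise win proportion, e.g. how often ChatGPT-4.0
# was ranked above the physician response by the same respondent.
wide = ranks.pivot(index="respondent", columns="source", values="rank")
win_vs_physician = (wide["ChatGPT-4.0"] < wide["Physician"]).mean()
print(f"ChatGPT-4.0 ranked above physician in {win_vs_physician:.0%} of comparisons")
```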

Results

Fifty-two physicians completed the survey; most were male (78.8%), aged 25–34 years (53.8%), and based in North America (48.1%) or Africa (25.0%), and over half (53.8%) had less than 5 years of clinical experience. Across all regions, ChatGPT-4.0 and Meta.AI responses were preferred over physician-authored responses, with ChatGPT-4.0 ranked highest in Africa, Asia, Asia Pacific, and North America, and Meta.AI slightly favored in Europe and the Americas. By years in practice, AI-generated responses consistently outperformed physician responses, with ChatGPT-4.0 most preferred among those with less than 15 years of experience and showing the greatest advantage in the 10–15-year group.

Conclusions

In our global sample, most physicians preferred LLM-generated responses over those written by human contributors. However, preferences varied by geographic region and years in clinical practice, suggesting that both cultural and experiential factors shape physician attitudes toward artificial intelligence (AI). These preliminary findings highlight the need for larger, adequately powered studies to test for statistically significant differences and interactions across subgroups. Such research is essential to inform context-specific strategies for integrating AI into patient-facing communication.

Trial Registration

N/A
