ChatGPT’s Ability to Answer Cancer-Related Basic Questions in Urdu: A Comparative Study with English Responses


Abstract

Background: Chat Generative Pre-Trained Transformer (ChatGPT) has become a valuable tool since its launch in 2022, providing easily understandable conversational responses across various topics, including medical queries. This study evaluates the efficacy of ChatGPT-4 in responding to basic cancer-related questions in both Urdu and English, investigating linguistic discrepancies that may affect the reliability of AI-generated medical advice.

Methods: We compiled a set of 68 distinct cancer-related questions, translated them into both Urdu and English, and presented them to ChatGPT-4. Responses were independently evaluated by two physicians for accuracy and comprehensiveness, with discrepancies resolved by a third reviewer. The responses in the two languages were then compared for accuracy.

Results: ChatGPT-4 provided comprehensive responses to 79% of the Urdu queries and 97% of the English queries. Accuracy assessment showed that 72% of Urdu responses were at least as accurate as their English counterparts. The treatment-related category had the highest comprehensiveness among Urdu responses, at 92.3%.

Conclusion: While ChatGPT-4 performs proficiently in both Urdu and English, differences in response quality indicate that improvements are needed for Urdu. Enhancing the model's training on Urdu datasets and medical terminology could bridge this gap and ensure equitable quality of medical information across languages.