Are chatbots reliable sources of information regarding fluoride in pediatric dentistry?

Abstract

Aim: To evaluate the accuracy and consistency of responses generated by artificial intelligence (AI) chatbots in pediatric dentistry, specifically concerning fluoride usage.

Study Design: Descriptive cross-sectional study.

Methods: The study involved four AI chatbots (ChatGPT, Gemini, Claude, Copilot) and four groups of dental professionals (pediatric dentists, general dentists, pediatric dentistry PhD students, and fifth-year dental students). Participants answered 23 fluoride-related questions based on IAPD, AAPD, EAPD, and ADA guidelines. Each chatbot was tested 28 times per question under identical settings.

Statistics: Chi-square and Fisher's exact tests were used to assess differences in response accuracy between groups; a p-value < 0.05 was considered statistically significant.

Results: Chatbot response accuracy differed significantly across fluoride application categories. Claude and Gemini outperformed ChatGPT and Copilot, particularly on systemic fluoride and fluorosis-related topics. Among professionals, pediatric dentists consistently achieved the highest accuracy.

Conclusions: Claude and Gemini answered fluoride-related questions more reliably than ChatGPT and Copilot. However, expert oversight remains crucial in pediatric dental care.
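As a rough illustration of the statistical approach described above (not the authors' actual analysis code), the sketch below compares correct/incorrect answer counts for two chatbots using a chi-square test, falling back to Fisher's exact test when expected cell counts are small. The contingency table counts and the use of scipy are assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 contingency table: rows = two chatbots,
# columns = (correct, incorrect) answers over 28 runs of one question.
table = np.array([[24, 4],    # chatbot A -- made-up counts
                  [17, 11]])  # chatbot B -- made-up counts

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    # With small expected cell counts, Fisher's exact test is preferred.
    _, p = fisher_exact(table)

alpha = 0.05
print(f"p = {p:.4f}; "
      f"{'significant' if p < alpha else 'not significant'} at alpha = {alpha}")
```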
