The Presence and Nature of AI-Use Disclosure Statements in Medical Education Journals: A Bibliometric Study


Abstract

Background

As AI use becomes more common in research, disclosure policies have emerged to ensure transparency and appropriateness. However, database research in other fields suggests that disclosure may lag behind actual use. Medical education journal editors report that submitted manuscripts rarely include AI-use disclosures, and they perceive a lack of clarity regarding when and how AI use should be disclosed. Objective evidence regarding the incidence and nature of AI-use disclosure in medical education is still lacking.

Methods

Using bibliometric methods, we searched a database of 24 leading medical education journals for articles published between January and July 2025 (n=2,762 articles). Screening with Covidence software excluded 716 non-empirical and/or non-English-language articles. The remaining articles (n=2,046) were examined for the presence of AI-use disclosures, which were then content-analyzed.

Results

Of the empirical articles, 2.5% (n=51) had an AI disclosure statement. BMC Medical Education contained the most disclosures (24), followed by Medical Teacher (7) and Journal of Surgical Education (4). Forty-two articles were authored in countries where English is not a native language, and 69.4% of all first authors had begun publishing within the past decade. Disclosures averaged 43 words and described use superficially, most commonly as "editing" and "translation". Of 18 named tools, ChatGPT was the most common. Most disclosures explicitly attested to author responsibility for AI-produced material. Disclosures usually appeared in the acknowledgements; those located in the methods lacked a responsibility attestation. Negative disclosures, attesting that AI was not used, were also present.

Discussion

AI-use disclosures in medical education journals are rare and appear mostly in work from regions of the world where English is not a native language. A shared disclosure practice is evident: name the tool and affirm author responsibility, but describe the use superficially. This suggests a practice of "safe" disclosure that may be more performative than informative, thereby failing to satisfy the goal of ensuring transparent and ethical AI use in research.
