Patterns of ChatGPT Use and Attitudes Toward AI in Medical Education: Findings From a Cross-Sectional Survey

Abstract

Background: Large language models (LLMs) such as ChatGPT are increasingly used by medical students, yet empirical evidence on real-world adoption, perceived value, and institutional factors supporting responsible use remains limited.

Objective: To characterize awareness, frequency, and purposes of ChatGPT use among medical students; to examine associations with comfort, confidence, time saved, and training stage; and to identify student- and institution-level factors linked to use.

Methods: Cross-sectional, anonymous e-survey of medical students (N=612). Descriptive statistics summarized demographics, use cases, and attitudes. Bivariate tests (χ² with Cramér's V; Spearman's ρ with 95% CIs) assessed associations. Logistic regression (outcome: uses ChatGPT, yes/no) provided univariable odds ratios and multivariable adjusted odds ratios (aOR) controlling for age, gender, and years in university.

Results: Awareness of ChatGPT was near-universal (96.4%), and 59.0% reported use at least several times per week. The most common use cases were information gathering (64.4%) and clarifying complex concepts (58.0%); exam preparation (34.2%) and creating study aids (28.4%) were less frequent, and communication simulations (17.5%), academic writing (14.2%), and clinical documentation (12.9%) were least used. AI-use frequency differed by gender (p=0.015, V=0.12) and by academic year (p=0.008, V=0.13), peaking after 3 years of medical education; it did not differ by prior years of online study. Integration of ChatGPT into routine study correlated with comfort (ρ=0.469, p<0.001), perceived increase in confidence (ρ=0.437, p<0.001), and more time saved (ρ=0.226, p<0.001). In multivariable models, higher motivation (aOR=1.48 per point, 95% CI 1.25–1.75, p<0.001), awareness of institutional AI policies (aOR=2.53, 95% CI 1.41–4.53, p=0.002), and awareness of support/resources (aOR=2.28, 95% CI 1.28–4.09, p=0.005) were independently associated with being a ChatGPT user; disciplinary consequences, self-rated performance, and perceiving ethical issues were not.
Conclusions: Medical students commonly and pragmatically integrate ChatGPT as a study assistant, especially for information seeking and explanation, with greater comfort, confidence, and time efficiency among routine users. Institutional levers matter: clear policies and visible support are linked to adoption beyond individual motivation. The findings support enabling, guidance-oriented integration and targeted onboarding for earlier-year students.
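For readers unfamiliar with the effect-size measure used in the bivariate analyses, Cramér's V is derived from the χ² statistic of a contingency table. The following is a minimal sketch of that computation; the example table is hypothetical illustration data, not counts from this study.

```python
import numpy as np

def cramers_v(table):
    """Cramér's V effect size for an r x c contingency table.

    V = sqrt(chi2 / (n * (min(r, c) - 1))), ranging from 0 (no
    association) to 1 (perfect association).
    """
    observed = np.asarray(table, dtype=float)
    n = observed.sum()
    # Expected counts under independence of rows and columns
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    chi2 = ((observed - expected) ** 2 / expected).sum()
    k = min(observed.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Hypothetical 2x3 table: rows = gender, columns = use-frequency bands
example = [[120, 95, 60], [150, 110, 77]]
print(f"V = {cramers_v(example):.3f}")
```

Values around 0.12–0.13, as reported for gender and academic year, are conventionally read as small effects.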
