Large Language Models for Detecting CONSORT Guideline Compliance in Published Randomized Clinical Trials: A Cross-Sectional Evaluation Study
Abstract
Background
Peer review processes may inadequately assess compliance with established reporting guidelines such as the Consolidated Standards of Reporting Trials (CONSORT) criteria. Large language models (LLMs) demonstrate potential for systematic manuscript evaluation; however, their accuracy in detecting adherence to CONSORT guidelines in published clinical trials remains unexplored.
Methods
This cross-sectional study evaluated 20 randomized controlled trials published in immunology journals between 2015 and 2024, identified through PubMed, for compliance with the CONSORT 2010 guidelines. Three large language models (ChatGPT-4o, Gemini 2.5 Pro, and Claude Sonnet 4) independently assessed compliance across 37 CONSORT subpoints. The primary endpoint was the mean CONSORT compliance percentage. Secondary endpoints included the proportion of articles meeting a 90% compliance threshold and agreement between LLM assessments. Statistical analysis employed repeated-measures ANOVA with post-hoc pairwise comparisons (α = 0.05).
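To make the analytic approach concrete, the sketch below shows how a repeated-measures ANOVA with post-hoc paired comparisons could be run on per-article compliance percentages. It is an illustrative sketch only, using placeholder data and an assumed Bonferroni correction (the abstract does not specify the adjustment method); it is not the authors' code or data.

```python
# Illustrative sketch (not the study's actual analysis): repeated-measures ANOVA
# on per-article CONSORT compliance percentages for three LLMs, followed by
# post-hoc paired comparisons. All numbers below are placeholders.
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_articles = 20
models = ["ChatGPT-4o", "Claude Sonnet 4", "Gemini 2.5 Pro"]

# Placeholder compliance percentages (share of 37 CONSORT subpoints judged met).
long = pd.DataFrame({
    "article": np.repeat(np.arange(n_articles), len(models)),
    "model": np.tile(models, n_articles),
    "compliance": rng.uniform(40, 95, n_articles * len(models)),
})

# Repeated-measures ANOVA: model is the within-subject factor, article the subject.
aov = AnovaRM(long, depvar="compliance", subject="article", within=["model"]).fit()
print(aov.anova_table)

# Post-hoc pairwise paired t-tests (Bonferroni correction assumed here).
for m1, m2 in combinations(models, 2):
    a = long.loc[long["model"] == m1, "compliance"].to_numpy()
    b = long.loc[long["model"] == m2, "compliance"].to_numpy()
    t, p = stats.ttest_rel(a, b)
    print(f"{m1} vs {m2}: t = {t:.2f}, Bonferroni-adjusted p = {min(p * 3, 1.0):.4f}")
```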
Results
Mean CONSORT compliance rates were: ChatGPT-4o 81% (95% CI: 77-85%), Claude Sonnet 4 68% (95% CI: 61-75%), and Gemini 2.5 Pro 55% (95% CI: 48-62%). Overall compliance across all LLMs was 68% (95% CI: 64-72%). Using a 90% compliance threshold as a quality benchmark, ChatGPT-4o identified 25% of papers (5/20), Claude Sonnet 4 identified 5% (1/20), and Gemini 2.5 Pro identified none (0/20) as meeting this standard. Repeated-measures ANOVA demonstrated significant differences in LLM performance (F(2,38) = 40.79, p < 0.001, partial η² = 0.682). All pairwise comparisons between models were statistically significant (p ≤ 0.002).
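As a quick arithmetic check on the reported statistics, partial eta-squared can be recovered from the F value and its degrees of freedom; the short calculation below uses only the figures reported above and reproduces the 0.682 effect size and the 90%-threshold proportions.

```python
# Partial eta-squared from the reported F statistic and degrees of freedom:
# partial eta^2 = F * df_effect / (F * df_effect + df_error).
F, df_effect, df_error = 40.79, 2, 38
partial_eta_sq = (F * df_effect) / (F * df_effect + df_error)
print(round(partial_eta_sq, 3))  # 0.682, matching the reported value

# Proportions of articles meeting the 90% compliance benchmark, as reported.
for model, n_meeting in [("ChatGPT-4o", 5), ("Claude Sonnet 4", 1), ("Gemini 2.5 Pro", 0)]:
    print(model, f"{n_meeting}/20 = {n_meeting / 20:.0%}")
```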
Conclusions
Large language models detected CONSORT compliance deficiencies in published randomized trials, with overall estimates aligning with previously reported compliance rates of 60-70%, supporting their accuracy in identifying persistent reporting quality issues. The substantial variation between LLM assessments indicates the need for standardized evaluation protocols. These findings support the potential utility of LLM-assisted manuscript evaluation in improving adherence to established reporting guidelines.