Use of Artificial Intelligence in Scientific Publishing: Good Practice Guide for Authors and Institutions
Abstract
The emergence of generative AI in academic writing offers opportunities for clarity and efficiency, but it introduces risks related to accuracy, bias, authorship, confidentiality, and intellectual property that call for clear guidelines. This study reviews policies and guidelines published between 2023 and 2025 by Elsevier, Springer Nature, COPE, ICMJE, and WAME, along with recent literature, to synthesize the benefits, risks, and practices of using AI in scientific publishing and to propose a checklist with declaration templates. The policies agree that AI may be used provided it is supervised by the authors, explicitly declared, and serves a methodological function. Within this framework, language editing, translation, synthesis and organization of drafts, technical or coding support, and the generation of experimental stimuli are permitted. By contrast, attributing authorship to AI, citing chatbots as a primary source, incorporating generative images or videos unrelated to the method, uploading unpublished work to insecure services, and using AI without declaring it are prohibited or discouraged. The main risks include hallucinations, bias, plagiarism, data leaks, and copyright concerns; rigorous verification, transparency, and data protection are therefore recommended.