Misleading AI guidance in psychological publishing

Abstract

In response to generative AI, SAGE, the Association for Psychological Science (APS), and the American Psychological Association (APA) have introduced policies to regulate its use in their journals. These policies share sensible aims—protecting confidentiality, preventing fabrication and plagiarism, preserving human accountability—but they also embed strong claims about large language models (LLMs). Such claims—often oversimplified and tied to particular deployments—anchor distinctions between "assistive" and "generative" use, set disclosure and citation rules, and justify blanket bans on entering content into AI systems. Using SAGE, APS, and APA guidance as a case study, I show how these claims propagate into policy rules and create recurring mismatches between underlying assumptions and stated requirements across six domains: capability claims, categorization, prohibited uses, confidentiality, disclosure, and attribution. Policies also disagree on basic issues such as whether routine AI-assisted copyediting must be disclosed. I propose a mechanism- and risk-based policy matrix that organizes AI use by material contribution to the scientific record and confidentiality risk, yielding four editorial stances. Applying this framework to APS and APA policies produces technology-agnostic language that preserves editorial aims while reducing disclosure burden, improving cross-journal coherence, and aligning AI guidance with current research practice.