AI Disclosure Without Accountability: Paper Compliance and the Governance Limits of Transparency in Scientific Research
Abstract
This paper argues that the institutionalisation of artificial intelligence (AI) disclosure in scientific research has produced a form of compliance that privileges symbolic transparency over actual accountability. Although journals and publishers increasingly mandate that authors disclose their use of AI, these policies remain fragmented, non-standardised, and largely unverifiable. Based on an exploratory review of 80 recent academic articles, the paper demonstrates that explicit AI disclosure is limited and, where present, primarily symbolic or narrative rather than verifiable. It explains this pattern by introducing the concept of 'paper compliance' and by defining the AI Disclosure Integrity Gap (AIDG): the discrepancy between reported AI use and the genuine epistemic impact of AI on research results. The analysis shows that this gap is systematically generated by the mismatch between transparency-oriented governance frameworks and the iterative, opaque, and irreproducible character of AI-assisted knowledge generation. The study develops testable propositions and introduces the AI Use Traceability Framework (AUTF), a process-oriented approach to AI governance that emphasises traceability and auditability over transparency. Although institutional, technical, and incentive-based obstacles hinder full implementation, traceability offers a means to narrow the AIDG and strengthen accountability in AI-assisted research. The study advances AI governance and research integrity scholarship by treating disclosure as a limited mechanism rather than a complete solution, and by highlighting the risk that current practices create a false sense of transparency.