The AIR Framework for Research Transparency: A Critical Analysis of Stage-Specific AI Disclosure in the Context of Accessibility and Research Integrity

Abstract

The rapid integration of generative AI into scholarly workflows has exposed critical gaps in institutional accountability systems for research integrity. Existing guidance from bodies such as COPE and ICMJE remains principle-based, tool-specific, or stage-agnostic, leaving research integrity officers, supervisors and editors without an operational framework for evaluating AI disclosure claims. The AIR (AI in Research) framework, developed by Electv Training in 2026, addresses this accountability gap by offering a stage-specific matrix that categorises AI involvement across seven research phases and five engagement bands, from no use (A0) to substantial use (A4), providing the structured vocabulary required for consistent institutional oversight. This article provides a critical analysis of AIR’s theoretical foundations, empirical reliability and accountability limitations, with particular attention to equitable implementation for researchers with disabilities. Drawing on virtue epistemology (Zagzebski, 1996), I argue that transparency should be understood as a constitutive epistemic virtue rather than a procedural compliance requirement, a reframing with direct implications for how institutions design and enforce disclosure policy. I report findings from an inter-rater reliability pilot study (n=15 research integrity officers and doctoral supervisors, nine scenarios, Cohen’s κ=0.72) demonstrating that trained evaluators can apply AIR with substantial agreement, while also revealing systematic boundary ambiguities that undermine consistent institutional adjudication. Five accountability-critical limitations are identified: false precision in classification, inadequate protection for disability-related AI accommodation, stigmatisation of legitimate high-band practices, vulnerability to adversarial self-reporting, and insufficient guidance for edge cases arising in collaborative and multi-tool workflows. Five evidence-informed policy refinements are proposed, including a protected A1-Access sub-band aligned with the UK Equality Act 2010 and the Americans with Disabilities Act, a two-dimensional verification and context-sensitivity rating replacing traffic-light risk coding, and a structured spot-check validation programme for institutional auditing. Together, these refinements position AIR as viable infrastructure for accountable AI disclosure whilst protecting equitable participation in research.
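For readers unfamiliar with the agreement statistic reported above, the following is a minimal sketch of how inter-rater agreement on AIR band assignments could be quantified with Cohen's kappa. The rater labels, the hypothetical band assignments, and the use of scikit-learn's cohen_kappa_score are illustrative assumptions, not the pilot study's actual analysis code.

    # Minimal sketch (not the study's code): Cohen's kappa for two raters'
    # AIR band assignments across nine hypothetical scenarios.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical band assignments (A0-A4) by two evaluators.
    rater_1 = ["A0", "A1", "A2", "A2", "A3", "A4", "A1", "A3", "A2"]
    rater_2 = ["A0", "A1", "A2", "A3", "A3", "A4", "A1", "A2", "A2"]

    # Observed agreement corrected for agreement expected by chance:
    # kappa = (p_o - p_e) / (1 - p_e)
    kappa = cohen_kappa_score(rater_1, rater_2)
    print(f"Cohen's kappa: {kappa:.2f}")

    # With more than two raters (the pilot used 15), a common approach is to
    # average kappa over all rater pairs or to use Fleiss' kappa instead.

Values between 0.61 and 0.80 are conventionally interpreted as "substantial" agreement (Landis and Koch's widely used benchmarks), which is the sense in which the reported κ=0.72 is described above.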
