Promoting Academic Integrity through Generative AI-Use Disclosure: A Single-Case Study from an Argentine University

Abstract

This single-case study examines a course-level disclosure mechanism for generative AI assistance, the Declaration of Artificial Intelligence Uses (Declaración de Usos de Inteligencia Artificial, DUIA), in a Bachelor's capstone thesis course (trabajo final de licenciatura) at a private Argentine university during the second semester of 2024 and the first semester of 2025. The intervention requires students to declare which predefined AI-supported tasks were used in their final submission. Data come from administrative records, a post-course student survey (n = 38 of 46 completers), and an instructor survey (n = 3 of 3). We use descriptive statistics and cross-tabulations, plus inductive thematic analysis of open responses, integrating the strands abductively. Process indicators show high reach and fidelity with low burden; most students reported perceived improvements in work quality and acknowledged gains in transparency. Exploratory associations suggest that a broader, relevant mix of AI-supported tasks co-occurs with higher perceived quality and with interest in AI training. Instructors highlighted the declaration's value for dialogue on authorship and flagged the need for procedures governing AI-assisted feedback. We interpret the findings through acceptance/adoption, task–technology fit, and integrity/governance lenses and outline implications for course design and institutional policy.