Generative AI, Researcher Reflexivity, and the Epistemic Politics of Qualitative Inquiry: Toward a Framework of Critical Techno-Constructivism
Abstract
In this article, we examine the implications of generative artificial intelligence (AI), particularly large language models (LLMs), for qualitative inquiry, illustrating both the opportunities and constraints of integrating such tools into interpretive work. While LLMs can support transcription, literature review, and data organization, their reliance on statistical pattern recognition may lead researchers to miss the nuances that qualitative researchers explore and construct when analyzing stories and data. Our analysis foregrounds two interrelated concerns: how AI as a “ghost collaborator” complicates reflexivity, authorship, and interpretive authority; and how its use can reproduce epistemic bias within knowledge production by privileging hegemonic ideologies and voices. To address these challenges, we propose Critical Techno-Constructivism (CTC), a framework that positions AI as a methodological assistant rather than an autonomous analyst. CTC emphasizes reflexivity, transparency, and human-centered interpretation, and calls for rigorous audit trails, ethical safeguards around data privacy, and explicit disclosure of AI use. We operationalize the framework through the design of QualCopilot, a human-centered tool that augments rather than replaces researcher judgment and positioning. By advancing CTC, we contribute to ongoing debates about epistemic politics, methodological rigor, and ethical responsibility.