Should generative AI be used in reflexive qualitative research?

Abstract

Debates about generative AI have recently produced strong claims that such tools should be excluded from qualitative, and especially reflexive qualitative, research. This research note challenges that conclusion. We argue that “generative AI” is not a single, well-defined method but a heterogeneous set of transformer-based models, many of which are open, local, and configurable for scholarly use. Situating large language models within a longer lineage of computational text analysis—from topic models to word embeddings to transformer-based models—we suggest that decoder-only models offer distinctive affordances that can align productively with qualitative epistemologies, including attention to context and holism. Drawing on emerging empirical work, we outline how generative models can support rigorous reflexive qualitative analysis—for example, by helping to surface alternative readings, identify negative cases, and probe the boundaries of researchers’ interpretations—without replacing human judgment. We then articulate a framework of “technological reflexivity” for guiding the responsible use of generative AI in reflexive qualitative research, emphasizing documentation of prompts, holdout validation, critical attention to model bias, interpretive control, and ethical safeguards around data privacy and provenance. We conclude that generative AI should not be categorically rejected; rather, qualitative researchers should play a central role in defining how these tools are used in ways consistent with interpretive rigor and the longstanding commitments of the field.