Assisting, Not Automating: Large Language Models in Qualitative and Mixed-Methods Research
Abstract
Qualitative and mixed-methods research has long balanced interpretive depth against empirical scope. The rapid expansion of digital and textual data has intensified this tension, creating methodological pressures that challenge established qualitative workflows. Large language models (LLMs) have recently been proposed as a means of addressing these pressures by enabling large-scale textual processing and synthesis. However, their apparent fluency risks conflating linguistic coherence with interpretation, thereby obscuring judgment, accountability, and scholarly responsibility.

This paper argues that the methodological significance of LLMs for qualitative and mixed-methods research lies not in automation but in carefully governed computational assistance. Drawing on qualitative traditions centered on meaning, interpretation, and theory-building, the paper situates LLMs as infrastructural tools whose use must be governed by explicit research design, validation, and reflexive oversight. Particular attention is given to forms of failure that exceed factual error, including bibliographic distortion and loss of structural integrity in scholarly representation.

The analysis develops a framework organized around research design patterns, human judgment under computational assistance, and three interrelated dimensions of validity: interpretive, integrative, and bibliographic. Ethical implications concerning consent, representation, bias, and responsibility at scale are also examined. The paper concludes that responsible integration of LLMs requires strengthening, not relaxing, the methodological commitments that define qualitative and mixed-methods inquiry. Carefully governed computational assistance, rather than automation, provides the appropriate orientation for incorporating LLMs while preserving interpretive rigor.