Towards a Third Generation of Natural Language Processing: Enhancing Qualitative Research with Large Language Models

Abstract

This paper proposes a typology of text analysis methods ranging from surface-level to contextual and cultural interpretation, offering a framework for understanding the evolving role of Natural Language Processing (NLP) in qualitative research. While first- and second-generation NLP methods—such as keyword extraction, topic modeling, and word embeddings—have extended analytical reach, they remain limited in capturing meaning shaped by ideology, discourse, and context. The emergence of Large Language Models (LLMs) represents a third generation of NLP, capable of performing interpretive tasks such as identifying metaphors, framing, and rhetorical strategies. This shift enables new hybrid approaches that integrate computational efficiency with qualitative depth. The paper introduces two frameworks—AI-Augmented Grounded Theory and Theory-Driven AI Analysis—to illustrate how LLMs can support large-scale, context-sensitive interpretation. These “fusion methodologies” challenge the perceived divide between computation and interpretation, pointing toward a new paradigm for qualitative inquiry in the digital age.