Towards Algorithmic Framing Analysis: Expanding the Scope by Using LLMs

Abstract

Framing analysis, an extensively used, multi-disciplinary social science research method, requires substantial manpower and time to code texts and capture a human-level understanding of story contexts. However, recent advances in deep learning have led to a qualitative jump in algorithm-assisted methods, with large language models (LLMs) like BERT and GPT going beyond surface characteristics to infer the semantic properties of a text. In this study, we explore the application of the LLM BERT-NLI, which leverages bidirectional context and rich embeddings to assist scholars in identifying contextual information in media texts for quantitative framing analysis. More specifically, we investigate the capability of LLMs to identify generic media frames by comparing the results of a zero-shot analysis using BERT-NLI with those of human analysis. We find that the reliability of detecting generic frames varies significantly across datasets, indicating that even a large model like BERT-NLI, trained on millions of texts from diverse sources, cannot be uniformly trusted across different contexts. Nonetheless, LLMs might be employed productively in specific contexts after careful consideration of their agreement with human-generated ratings.
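To make the zero-shot setup concrete, the sketch below illustrates how an NLI-based model can be used for frame detection via the Hugging Face `transformers` zero-shot classification pipeline. This is not the authors' exact pipeline: the model checkpoint (`facebook/bart-large-mnli` as a stand-in for BERT-NLI), the candidate frame labels (the five generic frames of Semetko and Valkenburg), and the sample text are assumptions for illustration only.

```python
# Illustrative sketch, not the study's implementation: zero-shot generic-frame
# detection with an NLI model through the Hugging Face transformers pipeline.
from transformers import pipeline

# Stand-in NLI checkpoint; the study reports using BERT-NLI.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

# Example label set: the five generic news frames (assumed for illustration).
frames = [
    "conflict",
    "human interest",
    "economic consequences",
    "morality",
    "attribution of responsibility",
]

text = (
    "Lawmakers clashed over the new budget, with each party blaming the other "
    "for the looming shortfall."
)

# multi_label=True scores each frame independently, since a story can carry
# more than one generic frame at once.
result = classifier(text, candidate_labels=frames, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

Scores from such a run could then be thresholded into frame presence/absence judgments and compared against human coders, for example with standard intercoder reliability measures.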
