Emotions Need Context: A Large Language Model Based Exploration of Emotion Relationships and the Call for a New Model


Abstract

This study investigates the dimensional representation of human emotions using large language models (LLMs) through a dual-method approach: static embeddings and context-driven prompts. Findings revealed that prompting significantly enhances the quality of emotional representations: UMAP modeling of prompt-derived results produced distinct, interpretable clusters, validated through clustering and statistical analysis, whereas embeddings exhibited flat elbow curves and lacked groupability. Moreover, context-driven outputs uniquely correlated with human ratings of emotional Arousal, capturing nuanced emotional dimensions. Both methods showed a lack of separation between Pleasure and Dominance, consistent with some prior research, suggesting limitations of traditional dimensional models and the need for revised frameworks that better reflect language-expressed emotions. This study highlights the importance of context in emotion modeling and supports a transition from theory-driven to data-driven approaches. The findings motivate context-sensitive emotion models that address the demands of modern AI and affective computing.
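The elbow analysis mentioned in the abstract can be illustrated with a minimal sketch. The data below are toy stand-ins for emotion representations (the actual LLM embeddings and prompt-derived vectors are not reproduced here): well-separated blobs mimic prompt-derived outputs, where k-means inertia drops sharply up to the true cluster count and then flattens, while a flat elbow curve would indicate the lack of groupability reported for static embeddings.

```python
# Hypothetical sketch of an elbow analysis, assuming scikit-learn.
# Toy "prompt-derived" vectors: three well-separated blobs, so the
# inertia curve should show a clear elbow at k = 3.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
points = np.vstack([c + rng.normal(scale=0.3, size=(30, 2)) for c in centers])

def elbow_inertias(data, k_max=6):
    """Return k-means inertia (within-cluster sum of squares) for k = 1..k_max."""
    return [
        KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).inertia_
        for k in range(1, k_max + 1)
    ]

inertias = elbow_inertias(points)
# A sharp drop until k = 3 followed by a flat tail suggests three natural
# clusters; a uniformly flat curve suggests no natural grouping.
```

Plotting `inertias` against `k` gives the elbow curve; the study's reported contrast is that prompt-derived representations produce a visible elbow while static embeddings do not.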
