Selective reproduction of spatial–emotional mappings in large language models

Abstract

The vertical-valence metaphor refers to the association between emotional valence and vertical space, whereby positive and negative emotions are linked to upward and downward directions, respectively. While this mapping has been robustly observed in human cognition, it remains unclear whether such associations are also reproduced in artificial intelligence systems. The present study investigated the vertical-valence metaphor in ChatGPT. In Experiments 1–3, ChatGPT-4 showed a consistent pattern in which “up” was evaluated more positively than “down,” and emotional words were spatially organized such that “joy” was placed above “sadness,” with “surprise” positioned in between. This pattern persisted even when the coordinate system was reversed. In Experiments 4–6, we compared silicon and human samples and observed a more nuanced pattern. While the vertical-valence mapping was largely preserved across both groups, the intermediate positioning of “surprise” was not consistently reproduced in ChatGPT-5.2. An additional pilot experiment indicates that this reduction in differentiation may be influenced by prompt language, although the relative contributions of model characteristics and language could not be disentangled. Exploratory analyses revealed that the horizontal-valence metaphor showed less consistent patterns across all experiments, indicating lower robustness. Taken together, these findings suggest that large language models do not uniformly reproduce human cognitive mappings but instead exhibit a graded pattern of representation: robust mappings are preserved, whereas intermediate or weaker mappings may be compressed or destabilized.