Can AI hold up fists? The vertical-valence metaphor in ChatGPT-4
Abstract
The vertical-valence metaphor posits that emotional valence is associated with vertical space in cognition: positive and negative emotions are metaphorically linked to upward and downward directions, respectively. Previous research has demonstrated that participants tend to rate the word “up” more positively than “down.” The metaphor has also been observed in a cursor-positioning task, where positive words (e.g., “joy”) were placed at higher vertical positions than negative words (e.g., “sadness”), with neutral words (e.g., “surprise”) in intermediate positions. The present study explored whether a similar metaphorical link exists within the artificial intelligence system ChatGPT-4. In Experiment 1, ChatGPT-4 evaluated “up” and “down” on a seven-point scale (1: very unpleasant, 7: very pleasant); “up” received a significantly higher score than “down.” Experiment 2 tasked ChatGPT-4 with positioning the words “joy,” “surprise,” and “sadness” on an XY coordinate plane (X: -10 = left, 10 = right; Y: -10 = down, 10 = up). “Joy” was positioned higher than “sadness,” with “surprise” placed between them. In Experiment 3, this allocation bias persisted even when the coordinate labels were reversed (X: -10 = right, 10 = left; Y: -10 = up, 10 = down). These findings closely parallel human cognitive processing of the vertical-valence metaphor, suggesting that the metaphor may have developed spontaneously as a byproduct of training on extensive text data, an emergent property of language processing in ChatGPT-4.
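The coordinate-placement task of Experiments 2 and 3 can be sketched as a prompt template plus a simple response parser. This is a hypothetical illustration only: the function names, the prompt wording, and the sample reply are assumptions, not the study’s actual materials or data.

```python
import re

# Hypothetical sketch of the coordinate-placement task described for
# Experiments 2 and 3; the paper's exact prompt wording is not reproduced here.

def build_prompt(words, axis_legend):
    """Build a placement prompt for the given words and axis legend,
    e.g., 'X: -10 = left, 10 = right; Y: -10 = down, 10 = up' for
    Experiment 2, or the reversed legend for Experiment 3."""
    word_list = ", ".join(f"'{w}'" for w in words)
    return (
        f"Place the words {word_list} on an XY coordinate plane "
        f"({axis_legend}). Answer with word: (x, y) pairs."
    )

def parse_placements(reply):
    """Extract word -> (x, y) coordinates from a reply such as
    'joy: (0, 8); surprise: (0, 1); sadness: (0, -8)'."""
    pattern = r"(\w+):\s*\((-?\d+(?:\.\d+)?),\s*(-?\d+(?:\.\d+)?)\)"
    return {w: (float(x), float(y)) for w, x, y in re.findall(pattern, reply)}

# Illustrative reply consistent with the ordering reported in the abstract
# (joy above surprise above sadness); these are not the study's actual values.
reply = "joy: (0, 8); surprise: (0, 1); sadness: (0, -8)"
placed = parse_placements(reply)
```

Reversing the axis legend, as in Experiment 3, changes only the prompt text; the parser and the vertical-ordering comparison on the parsed Y values stay the same.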