Exploring LLM-generated Culture-specific Affective Human-Robot Tactile Interaction
Abstract
As large language models (LLMs) become increasingly integrated into robotic systems, their potential to generate socially and culturally appropriate affective touch remains largely unexplored. This study investigates whether LLMs (specifically GPT-3.5, GPT-4, and GPT-4o) can generate culturally adaptive tactile behaviours to convey emotions in human-robot interaction. We produced text-based touch descriptions for 12 distinct emotions across three cultural contexts (Chinese, Belgian, and unspecified) and examined their interpretability in both robot-to-human and human-to-robot scenarios. A total of 90 participants (36 Chinese, 36 Belgian, and 18 culturally unspecified) evaluated these LLM-generated tactile behaviours for emotional decoding and perceived appropriateness. The results reveal that: (1) under matched cultural conditions, participants successfully decoded six of the twelve emotions, mainly socially oriented emotions such as love and Ekman emotions such as anger, whereas self-focused emotions such as pride and embarrassment were more difficult to interpret; (2) tactile behaviours were perceived as more appropriate when directed from human to robot than from robot to human, revealing an asymmetry in social expectations based on interaction roles; (3) behaviours interpreted as aggressive (e.g., anger), overly intimate (e.g., love), or emotionally ambiguous (i.e., not clearly decodable) were significantly more likely to be rated as inappropriate; and (4) cultural mismatches reduced decoding accuracy and increased the likelihood of behaviours being judged as inappropriate.