“I think I misspoke earlier. My bad!”: Exploring How Generative Artificial Intelligence Tools Exploit Society’s Feeling Rules

Abstract

Generative artificial intelligence (GenAI) tools that appear to perform with care and empathy can quickly gain users’ trust. For this reason, GenAI tools that attempt to replicate human responses have heightened potential to misinform and deceive people. This paper examines how three GenAI tools, deployed in divergent contexts, mimic credible emotional responsiveness: OpenAI’s ChatGPT, the National Eating Disorder Association’s Tessa, and Luka’s Replika. The analysis uses Hochschild’s (1983) concept of feeling rules to explore how these tools exploit, reinforce, or violate people’s internalised social guidelines around appropriate and credible emotional expression. We also examine how GenAI developers’ own beliefs and intentions can create social harms and conflict with users. Results show that while GenAI tools enact compliance with basic feeling rules (e.g., apologising when an error is noticed), this ability alone may not sustain user interest, particularly once the tools’ inability to generate meaningful, accurate information becomes intolerable.