Empathic AI Will Undermine Human Kindness

Abstract

Human kindness, especially in relationships, revolves around exchanging empathy. Empathic AI—large language models (LLMs) that provide empathic support—will undermine human kindness by stopping people from (1) seeking empathy and (2) giving empathy. Asking humans for empathy is often an imposition and can be expensive (e.g., therapy). Moreover, the empathy people receive from other humans often arrives as unwelcome advice rather than pure sympathy. Because LLMs are always available to provide consistent and unconditional compassion, "seekers" will rely on LLMs instead of their relationship partners. And when seekers rely on AI, "givers" will happily offload their empathic responsibilities to spare themselves this uncomfortable and emotionally costly work. With seekers and givers both relying on LLMs, the kindness created by exchanging empathy will disappear—unless humanity uses empathic AI only to reinforce human empathy, not replace it. The future of human kindness hinges on how we choose to use LLMs.