Human Enough to Be Kind


Abstract

This study develops one of the first instruments to assess human preferences for concrete acts of kindness performed by social robots. In Phase 1 (N = 101; ~1,000 responses), participants completed digital forced-choice surveys in which they compared 65 robot-to-human kindness scenarios drawn from four categories—Emotional Support, Practical Help, Social Awareness, and Family & Child Support—or ranked actions within a single category. Phase 2 (N = 918; ~9,086 responses; U.S.-only) replicated all scenarios and added four extensions: (a) a human-to-human comparison condition for every action, (b) Likert ratings (1–7) of perceived kindness alongside preferences, (c) random assignment to one of four between-subject survey versions (Robot→Human Preference; Robot→Human Likert; Human→Human Preference; Human→Human Likert), and (d) gender demographics. Across phases, results show that choices are not statistically random; clear patterns emerge at both the category and scenario levels. Phase 2 reveals robust actor effects (robots vs. humans), category-level differences, and systematic convergences and divergences between perceived kindness and choice. Together, these findings refine our understanding of how kindness is evaluated when enacted by robots versus humans and identify design targets for emotionally intelligent, user-centered robots.