When the Robot Says “No”: The Impact of Robot Agreement and Disagreement on User's Trust, Perceived Expertise and Compliance

Abstract

As social robots engage in increasingly complex interactions with humans, understanding the impact of their conversational behaviors on trust and compliance becomes essential. This study investigates how a robot's agreement, disagreement, or neutral response to human opinions affects trust, perceived expertise, and compliance behaviors such as data sharing and monetary donations. A total of 241 participants interacted with the humanoid robot Pepper across conditions that varied by the robot's response type (agreement, disagreement, or neutrality) and the conversational topic (human-centric vs. technical). The results demonstrated that robot disagreement significantly reduced trust, perceived expertise, and willingness to share personal data compared with agreement or neutral responses. However, no differences in trust, perceived expertise, or compliance behaviors were observed between the agreement and neutrality conditions. Furthermore, the conversational topic did not moderate these effects, suggesting that the robot's reaction type plays a more critical role in shaping user attitudes than the nature of the discussion. These findings highlight the importance of designing interaction strategies for social robots that prioritize agreement or neutrality to foster trust and compliance. They also point to the need for further exploration of long-term trust dynamics and adaptive robot behaviors in diverse social contexts.