Evaluating human perception of value-laden decisions made by a humanoid robot
Abstract
This study investigated human responses to value-laden decisions made by a humanoid robot, focusing on the values of privacy, freedom, and norms. The primary objective was to develop and validate an implicit method for assessing the importance of these values in a domestic context. Participants interacted with the humanoid robot iCub, which verbally presented 30 scenarios, each involving actions that varied in the degree of value violation. The results revealed that participants were most likely to tolerate robot behaviours that infringed on freedom and least likely to agree to behaviours that violated social norms, with privacy-related violations falling in between. In addition, response times increased with the severity of the value violation, suggesting greater cognitive effort when participants faced more severe ethical dilemmas. An analysis of participants’ open-text responses explaining why they rejected certain robot behaviours highlighted concerns over autonomy, data protection, and adherence to household norms. The study involving the humanoid robot was then replicated with a human agent to explore whether these patterns were general or specific to robot behaviour. The results mirrored the robot condition: participants again showed the greatest tolerance for violations of freedom and the least for violations of social norms, with privacy occupying a middle ground. The most important result of our study is that people tend to prioritise social norms over other principles, and this prioritisation of values needs to be taken into account in the design of value-aware AI systems. These findings contribute to a deeper understanding of how people evaluate value-aware behaviour in social robots and introduce an implicit, ecologically valid method for studying human values beyond traditional questionnaire-based approaches.