Does Contractualism Shape Trust and Perceived Agency in Social Robots?

Abstract

Social robots are increasingly envisioned as future companions because of the advantages they offer, such as assisting older adults and providing emotional support to hospitalized patients. However, to mitigate their potential risks and ensure public trust, they must be designed according to ethical principles aligned with human values. While utilitarian and deontological frameworks have traditionally guided ethical decision-making in artificial agents, we propose a contractualist framework as a more effective alternative. In this study, we investigated how three ethical frameworks (utilitarianism, deontology, and contractualism) affect perceptions of both robots and humans in terms of moral trustworthiness, reliability, moral agency, complexity, and intentionality. Our results indicate that agents following a contractualist approach are perceived as more reliable, more morally trustworthy, and more morally agentic than those following a utilitarian framework. Contractualist agents were also viewed as more reliable than deontological ones. These findings suggest that contractualism may offer a more favorable ethical foundation for the design of socially acceptable and trustworthy AI systems.