Is an intelligent machine a moral machine?


Abstract

As artificial intelligence (AI) systems become increasingly sophisticated and are deployed in more consequential domains, an unspoken assumption seems to be that enhanced performance or "intelligence" entails greater alignment and safety, that is, greater "morality". Yet, as the Orthogonality Thesis from AI ethics argues, an artificial agent that becomes more intelligent does not thereby become more moral, so increased intelligence alone cannot reduce the danger that AI systems may pose. In this paper we draw on these philosophical debates to explore the psychological foundations of this apparent misconception: do people infer machine morality from machine intelligence? Across nine pre-registered studies (total n = 3,895), we investigated how perceptions of AI intelligence shape perceived morality, and how increased intelligence shapes perceptions of trustworthiness and safety in both human and artificial agents. While most prior work treats intelligence and morality as independent, orthogonal facets of person perception and trust, we highlight a robust pattern of results in which people not only perceive intelligence and morality in AI agents but, concerningly, infer moral competence and moral motivation from machine intelligence, with consequences for trust and perceived danger. This systematic tendency to infer moral qualities, including moral motivation, from intelligence may distort public understanding of AI safety and trust, with troubling ethical and epistemic implications.