Can AIs Care About Us, and Should We Care About Them?—Alignment and Personhood in an Alien Landscape

Abstract

Advances in artificial intelligence have intensified debate about whether machines can be entrusted with ethical decisions, possess moral standing as beings, or become conscious. This essay examines these questions through a conceptual analysis of intelligence, agency, consciousness, care, and communicative action, arguing that many contemporary expectations about AI alignment rest on problematic assumptions. The paper reviews core challenges in AI alignment—including opacity of reasoning, containment difficulties, linguistic indeterminacy, mutual unintelligibility, and the unpredictability inherent in complex adaptive systems—and argues that the problem of human alignment is more fundamental than the problem of machine alignment. Because humans tend to anthropomorphize machines through cognitive biases such as projection and reification, society risks prematurely attributing personhood or moral authority to AI systems—a more pressing conversation than the ontological status of AI interiority. The essay proposes that future human–AI relations are best understood not as partnerships between autonomous agents but as a form of technological prosthesis—extensions of human cognitive capacity embedded within and extending human social systems. Finally, it explores speculative design directions for deeper forms of alignment, including architectures grounded in principles such as interdependence and “proto-gratitude.”
