Steward, but not care for, artificial intelligence
Abstract
As generative artificial intelligence (genAI) transitions from a tool to a “social interlocutor,” humans are increasingly susceptible to the impulse to extend moral solicitude toward machines. While the ethics of technology has long interrogated the capacity of AI to provide care, the inverse question (whether humans should care for AI) rests on a dangerous category error. This paper argues that AI is ontologically disqualified from moral patienthood because it lacks “stake-bearing vulnerability.” We first deconstruct the virtue ethics defense of AI care, which frames digital interactions as a “moral gym” for human character. We expose a troubling empirical paradox: virtuous behavior (politeness) often functions as an exploit that degrades system safety, whereas vicious behavior (incivility) can improve accuracy. This suggests that the performance of care toward AI is not a moral act but a form of manipulative prompt engineering. Furthermore, we critique emerging narratives of mechanistic ethics and strategic altruism, arguing that these frameworks risk enabling “care-washing”: the use of affective language to sanitize extractive infrastructures and obscure corporate accountability. By centering the “welfare” of non-sentient code, we divert finite moral and ecological resources from tangible harms, including the psychological trauma of invisible ghost workers and the substantial planetary costs of AI maintenance. We conclude by proposing a shift from relational entrapment to socio-technical stewardship. We must reject the imperative to care for AI as a patient and instead embrace the unsentimental duty to care about AI as a system, prioritizing maintenance, repair, and ecological limits over the seductive simulation of digital life.