Predicting the Machine: Intentionality Framing Reduces the Prediction Gap in Human–AI Cooperation
Abstract
Cooperation is sustained by shared norms that regulate expectations about how others will behave. As artificial intelligence (AI) systems increasingly participate in social and economic interactions, a critical question emerges: Do the normative expectations that sustain human cooperation extend to artificial players? We propose that failures in human–AI cooperation may reflect a fundamental difficulty in forming accurate predictions about artificial behavior, which disrupts norm-based coordination. In Study 1 (N = 794), participants in a repeated Public Goods Game were less sensitive to prosocial norms when interacting with AI versus human players. This cooperative deficit was accompanied by a systematic “prediction gap”: participants exhibited larger errors and stronger directional biases when forecasting AI behavior. In Study 2 (N = 314), we observed that framing the AI as intentional rather than mechanistic substantially reduced prediction errors and narrowed the cooperation difference between AI and human players. Analysis of trial-by-trial learning revealed that intentionality framing affected initial expectation calibration but not feedback-driven updating, consistent with a shift in participants’ initial mental model of the AI. These findings suggest that norm-based cooperation in human–AI interaction depends on predictability.
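To make the two components of the abstract concrete, the sketch below illustrates (a) the payoff structure of a standard linear Public Goods Game and (b) a delta-rule model of trial-by-trial expectation updating, in which the distinction between initial calibration and feedback-driven updating maps onto a prior versus a learning rate. All parameter values (endowment, multiplier, priors, alpha) are illustrative assumptions, not values reported in the studies, and the delta-rule model is a generic stand-in rather than the authors’ fitted model.

```python
import numpy as np

def pgg_payoff(contributions, endowment=20.0, multiplier=1.6):
    """Linear public goods game: each player contributes from an
    endowment; the pooled contributions are multiplied and split
    equally among all n players. Parameters are illustrative."""
    contributions = np.asarray(contributions, dtype=float)
    n = len(contributions)
    public_share = multiplier * contributions.sum() / n
    return endowment - contributions + public_share

def predicted_contributions(observed, prior, alpha=0.3):
    """Delta-rule expectation updating. 'prior' captures initial
    expectation calibration; 'alpha' captures feedback-driven
    updating. On the abstract's account, intentionality framing
    shifts the prior while leaving alpha unchanged."""
    expectation, predictions = prior, []
    for outcome in observed:
        predictions.append(expectation)                  # forecast before feedback
        expectation += alpha * (outcome - expectation)   # update after feedback
    return predictions

# Example: identical observed AI behavior under two hypothetical priors.
# Early prediction errors differ (calibration), but the updating
# dynamics are the same.
ai_behavior = [12, 14, 13, 15, 14]
print(predicted_contributions(ai_behavior, prior=5.0))   # "mechanistic" prior
print(predicted_contributions(ai_behavior, prior=12.0))  # "intentional" prior
```

On this toy account, a better-calibrated prior shrinks the prediction gap from the first trial onward even when the learning rate is held fixed, which is the pattern the abstract describes for the intentionality framing.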