Identifying Proposers' Behavioral Patterns in Human-AI Economic Interactions

Abstract

Understanding how humans adapt their decision-making in economic interactions with artificial intelligence (AI) is essential for building socially attuned AI agents. In this study, we analyzed human proposers' behavior in the Ultimatum Game (UG) using interpretable behavioral features and supervised machine learning models to classify strategic proposer types (Fair, Selfish, Learner, Tit-for-Tat). Using data from human–human and human–AI interactions in a UG experiment, we uncover context-sensitive patterns in proposer behavior. The analyses revealed that machine learning models, especially Random Forest (RF) and Neural Network (NN), can reliably identify behavioral strategy types with high accuracy across both interaction contexts. Classification was slightly more stable in the human condition, but the strongest models also generalized well to AI interactions. In contrast, simpler models such as Logistic Regression (LR) and Support Vector Machine (SVM) showed reduced performance in the AI condition, indicating greater variability in human behavior when interacting with artificial agents. These findings suggest that while strategic behavior remains recognizable, collaboration with AI partners introduces greater variability, potentially due to expectancy violations or ambiguous fairness norms. Outcomes in human–AI interactions appear to depend on whether the context is cooperative (e.g., fair or tit-for-tat strategies) or competitive (e.g., exploitative or self-maximizing behavior). These insights can inform AI design, particularly for developing systems that interact more effectively and adaptively with humans.
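To make the pipeline concrete, the sketch below illustrates the general approach the abstract describes: summarizing a proposer's offer history into interpretable behavioral features, then mapping those features to a strategy type. The specific features, thresholds, and simulated histories here are illustrative assumptions, not the paper's actual feature set or trained models, and a simple rule-based classifier stands in for the learned RF/NN models; the Tit-for-Tat type is omitted because it would also require the partner's offer history.

```python
import statistics

# Hypothetical feature extraction: the paper's exact features are not
# reproduced here; these two are illustrative stand-ins.
def extract_features(offers, rejections):
    """Summarize a proposer's offer history (offers are shares in [0, 1])."""
    mean_offer = statistics.mean(offers)
    # Average change in the offer immediately after a rejected round.
    shifts = [
        offers[i + 1] - offers[i]
        for i, rejected in enumerate(rejections[:-1])
        if rejected
    ]
    post_rejection_shift = statistics.mean(shifts) if shifts else 0.0
    return {"mean_offer": mean_offer,
            "post_rejection_shift": post_rejection_shift}

def classify(features):
    """Toy rule-based proxy for the supervised classifiers in the study."""
    if features["post_rejection_shift"] > 0.05:
        return "Learner"   # raises offers after rejections
    if features["mean_offer"] >= 0.4:
        return "Fair"      # proposes near-even splits
    return "Selfish"       # keeps most of the pie

# Simulated offer histories for three of the four strategy types.
fair_offers = [0.50, 0.48, 0.52, 0.50]
selfish_offers = [0.15, 0.10, 0.12, 0.10]
learner_offers = [0.20, 0.20, 0.35, 0.45]
learner_rejects = [True, True, True, False]
no_rejects = [False] * 4

print(classify(extract_features(fair_offers, no_rejects)))          # Fair
print(classify(extract_features(selfish_offers, no_rejects)))       # Selfish
print(classify(extract_features(learner_offers, learner_rejects)))  # Learner
```

In the study itself, labeled examples of such feature vectors would be used to train and evaluate RF, NN, LR, and SVM classifiers separately on the human–human and human–AI conditions, which is where the reported accuracy differences between conditions arise.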
