Learning to choose between advisors, algorithmic and human, over repeated interactions.


Abstract

People increasingly consult algorithmic aids repeatedly, yet most evidence on algorithm aversion and appreciation comes from one-shot decisions. Across five preregistered, incentive-compatible studies (Prolific; N = 1,351), we examine how people learn whom to trust when advisors disagree. Study 1 elicited advice from experienced participants, revealing a bias towards the option that is better most of the time, even when it is worse in expectation. Studies 2–5 then paired this human advice with algorithms that optimized either expected value or the probability of being better, while participants repeatedly chose between undisclosed lotteries. Participants consistently learned to prefer the advisor, human or algorithm, whose advice amplified the bias towards options that are better most of the time, often overturning initial source preferences. This pattern depends on how easy it is to diagnose which advisor is better most of the time. Thus, preference for and adoption of advisors and decision aids can be dynamically shaped through repeated interactions.
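The tension between "better most of the time" and "better in expectation" can be made concrete with a simulation of a hypothetical pair of lotteries (the payoffs and probabilities below are illustrative assumptions, not the study's actual stimuli):

```python
import random

random.seed(0)

def lottery_a():
    # Pays 10 with probability 0.9, else 0 -> expected value 9.
    return 10 if random.random() < 0.9 else 0

def lottery_b():
    # Pays 100 with probability 0.2, else 0 -> expected value 20.
    return 100 if random.random() < 0.2 else 0

n = 100_000
draws = [(lottery_a(), lottery_b()) for _ in range(n)]

# How often A beats B in a single draw (~0.9 * 0.8 = 0.72).
a_wins = sum(a > b for a, b in draws) / n
mean_a = sum(a for a, _ in draws) / n
mean_b = sum(b for _, b in draws) / n

print(f"A beats B on {a_wins:.0%} of draws")
print(f"mean payoff: A = {mean_a:.1f}, B = {mean_b:.1f}")
```

Here lottery A wins roughly 72% of individual draws, yet its expected value (9) is far below B's (20) — the structure under which an advisor recommending the "usually better" option can look reliable trial after trial while being worse in expectation.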
