Bounded Rationality in AI-Assisted Medical Decision-Making

Abstract

Recent advances in generative AI models have enabled the creation of digital health assistants for patients. However, it remains unclear how patients, especially in the presence of cognitive biases, would utilize them. Drawing on behavioral decision theory (BDT), we analyzed how boundedly rational patients use AI health assistants to make healthcare choices. Our findings show that cognitive biases lead patients to underutilize these assistants, limiting their potential to prompt high-risk patients to seek necessary care and to reduce unnecessary clinical visits among low-risk patients. Moreover, we found that boundedly rational patients become less sensitive to differences in risk: their decision to seek clinical care is driven primarily by the cost of accessing healthcare rather than by the underlying health risk. These findings highlight the need for developers to design bias-mitigating interfaces and transparent models, and for policymakers to establish safeguards that support the effective adoption of these technologies.