Genuine Learning Biases Persist After Accounting for Temporally Decreasing Learning Rates: insight from fitting six datasets


Abstract

Recent claims suggest that learning-rate asymmetries observed in human reinforcement learning may be artefactual, arising from a failure to account for temporally decreasing learning rates in Bayes-optimal agents. Here, we re-analyzed six datasets and found that models incorporating learning biases systematically outperformed both Bayes-derived and decay-based models, and that learning biases persisted even when temporal decay was explicitly included. These results demonstrate that temporally decreasing learning rates cannot account for learning asymmetries, which instead emerge as robust and reproducible features of human reinforcement learning.
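The two model families being compared can be illustrated with a minimal sketch. This is not the authors' code; it assumes a standard Rescorla-Wagner value update, where a "biased" learner uses separate learning rates for positive and negative prediction errors, while a decay-based learner uses a learning rate that shrinks as 1/n over observations (the hypothetical parameter names `alpha_pos`, `alpha_neg`, and `n_obs` are illustrative):

```python
def biased_update(q, reward, alpha_pos=0.4, alpha_neg=0.2):
    """Asymmetric (biased) learner: separate learning rates for
    positive vs. negative prediction errors."""
    delta = reward - q                       # prediction error
    alpha = alpha_pos if delta >= 0 else alpha_neg
    return q + alpha * delta

def decaying_update(q, reward, n_obs):
    """Decay-based learner: learning rate falls as 1/n, so the value
    estimate converges to the running mean of rewards."""
    return q + (1.0 / n_obs) * (reward - q)
```

Under this framing, the reported result is that adding the asymmetry (`alpha_pos != alpha_neg`) improves fit even when a temporal decay term is also present in the model.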
