Goal-directed behaviour is associated with decreased temporal discounting

Abstract

Reinforcement learning models are broadly classified as either model-based algorithms, which use an internal model of the environment to plan and carry out goal-directed behaviour, or model-free algorithms, which lack such a model and behave in a more habitual manner. Humans differ in the extent to which their decision making resembles that of model-based or model-free algorithms; the degree of this resemblance is related to individual differences in compulsivity, and is also argued to be related to the ability to imagine and make decisions about the future. Here, we demonstrate that individuals who exhibit more model-based decision making tend to choose larger, later rewards over smaller, sooner rewards more frequently in an intertemporal choice task. Surprisingly, however, model-free decision making was correlated with the specificity of personal future event narratives in an episodic future thinking task. Participants who provided more specific narratives had slower response times across all tasks, and may therefore have been more affected by time pressure in the reinforcement learning task, resulting in increased model-free decision making. We comment on the role of time pressure in behavioural tasks of the type used here. Future self-continuity, a construct conceptually related to imagining and making decisions about the future, was not found to be related to temporal discounting, model-based or model-free decision making, or episodic future thinking.
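
The abstract does not state how temporal discounting or the balance between model-based and model-free control were quantified; as an illustrative assumption only, these constructs are commonly modelled in this literature with hyperbolic discounting of delayed rewards and a weighted mixture of model-based and model-free action values:

$$V = \frac{A}{1 + kD}, \qquad Q(s,a) = w\,Q_{\mathrm{MB}}(s,a) + (1 - w)\,Q_{\mathrm{MF}}(s,a),$$

where $A$ is the reward amount, $D$ its delay, $k$ a per-participant discount rate (smaller $k$ corresponds to more frequent larger-later choices), and $w \in [0,1]$ the relative weight on model-based values (larger $w$ corresponds to more model-based decision making). Whether the article uses these particular formulations is not specified here.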
