Thinking Fast and Slow in Large Language Models: a Review of the Decision-Making Capabilities of Generative AI Agents
Abstract
Large language models (LLMs) are increasingly being used in a wide range of everyday decision-making scenarios, transforming the way people make choices and interact with technology. However, despite their seemingly ‘superhuman’ capabilities, LLMs are not infallible and can exhibit pitfalls in their decision-making abilities if not deployed with caution. This review analyses the decision-making capabilities of LLMs by comparing their abilities to those of humans through the lens of dual process theory. Guided by this framework, it is clear that LLMs can mimic both human-like System 1 thinking – exhibiting cognitive biases and relying on heuristics to support decision-making – and slower System 2 thinking through prompting methods such as chain-of-thought reasoning. As LLMs have advanced, they have become more adept at comprehending tasks; however, they can still exhibit biases and make errors, some of which appear similar to human cognitive biases. What remains unclear, however, is the extent to which the processes that lead to decision-making biases in AI systems are truly analogous to those in human cognition, or whether they are primarily a byproduct of the human-produced data and algorithms used to train the models. Moreover, LLMs can exhibit their own unique, nonhuman biases, such as hallucinations and overconfidence, that currently limit their usefulness in real-world decision-making applications. Nonetheless, these models hold significant potential to revolutionise the way we make decisions across a diverse range of sectors. Thus, we conclude the review by offering recommendations for future research and practical suggestions on how to leverage LLMs to augment human decision-making.
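To make the System 1 versus System 2 contrast concrete, the sketch below (not from the review) compares a direct prompt with a chain-of-thought prompt on the classic bat-and-ball problem from the Cognitive Reflection Test, a standard probe of intuitive versus deliberate reasoning in dual process research. It uses the OpenAI Python SDK for illustration; the model name is an assumption, and any chat-completion endpoint could be substituted.

```python
# A minimal sketch, assuming access to the OpenAI Python SDK and an API key
# in the OPENAI_API_KEY environment variable. The model name "gpt-4o-mini"
# is an assumption; any chat model can be substituted.

from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct prompt: like human System 1, models often blurt the intuitive
# but incorrect answer ($0.10).
direct = ask(QUESTION + " Answer with the amount only.")

# Chain-of-thought prompt: eliciting intermediate steps tends to produce
# slower, System 2-style reasoning and the correct answer ($0.05).
deliberate = ask(QUESTION + " Let's think step by step before answering.")

print("Direct:", direct)
print("Chain-of-thought:", deliberate)
```

The only difference between the two calls is the prompt suffix, which is the point: chain-of-thought reasoning is a prompting intervention, not a change to the underlying model.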