Can Large Language Models address problem gambling? Expert insights from gambling treatment professionals


Abstract

Large Language Models (LLMs) have transformed how people retrieve information. People are increasingly turning to general-purpose LLM-based chatbots for answers across numerous domains, including advice on sensitive topics such as mental health and addiction. In this study, we present the first inquiry into how LLMs respond to prompts related to problem gambling. We used the Problem Gambling Severity Index to develop nine prompts covering different aspects of gambling behavior. These prompts were submitted to two LLMs, GPT-4o (via ChatGPT) and Llama 3.1 405b (via Meta AI), and their responses were evaluated via an online survey distributed to human experts (experienced gambling treatment professionals). Twenty-three experts participated, representing over 17,000 hours of problem gambling treatment experience. They provided their own responses to the prompts and selected their preferred (blinded) LLM response, along with contextual feedback on their selections. Llama was slightly preferred over GPT, receiving more votes for 7 of the 9 prompts. Thematic analysis revealed that experts identified both strengths and weaknesses in LLM responses, highlighting issues such as encouragement of continued gambling, overly verbose messaging, and language that could easily be misconstrued. These findings highlight the potential for LLMs to support gambling harm intervention efforts but also emphasize the need for better alignment to ensure accuracy, empathy, and actionable guidance in their responses.