Using Large Language Models to Suggest Informative Prior Distributions in Bayesian Statistics


Abstract

Selecting prior distributions in Bayesian statistics is a challenging task. Even if knowledge already exists, gathering this information and translating it into informative prior distributions is both resource-demanding and difficult to perform objectively. In this paper, we analyze the idea of using large language models (LLMs) to suggest suitable prior distributions. Because of the substantial amount of information absorbed by LLMs, they have the potential to suggest knowledge-based and more objective informative priors. We developed an extensive prompt that asks LLMs not only to suggest suitable prior distributions based on their knowledge but also to verify and reflect on their choices. We evaluated three popular LLMs, Claude Opus, Gemini 2.5 Pro, and ChatGPT o4-mini, on two different real datasets: an analysis of heart disease risk and an analysis of variables affecting the strength of concrete. For all the variables, the LLMs were capable of suggesting the correct direction for the different associations, e.g., that the risk of heart disease is higher for males than for females or that the strength of concrete is reduced as more water is added. The LLMs suggested both moderately and weakly informative priors, and the moderate priors were in many cases too confident, resulting in prior distributions with little agreement with the data. The quality of the suggested prior distributions was measured by computing the Kullback-Leibler divergence to the distribution of the maximum likelihood estimator (the "data distribution"). In both experiments, Claude and Gemini provided better prior distributions than ChatGPT. For weakly informative priors, ChatGPT and Gemini defaulted to a mean of 0, which was unnecessarily vague given their demonstrated knowledge, whereas Claude did not; this is a significant performance difference and a key advantage of Claude's approach. The ability of LLMs to suggest the correct direction for different associations demonstrates great potential for LLMs as an efficient and objective method to develop informative prior distributions. However, a significant challenge remains in calibrating the width of these priors, as the LLMs demonstrated a tendency towards both overconfidence and underconfidence. A link to our code will be made available in the published version.
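To illustrate the kind of comparison described in the abstract, the following is a minimal sketch (not the authors' code) of computing the Kullback-Leibler divergence between an LLM-suggested normal prior and a normal approximation of the maximum likelihood estimator (the "data distribution"). All numerical values are placeholders, and both the normality assumption and the direction of the divergence are assumptions made here for illustration only.

```python
import numpy as np

def kl_normal(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(p || q) between two univariate normal distributions."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
            - 0.5)

# Hypothetical values, not taken from the paper:
mle_mean, mle_se = 0.75, 0.10       # MLE of a coefficient and its standard error
prior_mean, prior_sd = 0.50, 0.25   # LLM-suggested informative prior

# Smaller values indicate closer agreement between the prior and the data.
print(kl_normal(mle_mean, mle_se, prior_mean, prior_sd))
```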
