LLM-Based Prior Elicitation for Bayesian Graphical Modeling
Abstract
In the Bayesian Graphical Modeling framework, priors on network structure encode theoretical assumptions and uncertainty about the topology of psychological constructs under study. For instance, the Bernoulli prior specifies the probability of each pairwise interaction, the Beta–Bernoulli prior governs expected network density, and the Stochastic Block prior models clustering. In practice, however, specifying informed hyperparameters is challenging: theoretical guidance is limited, and default choices can be overly simplistic or restrictive. To address this, we introduce an LLM-based prior elicitation framework in which a large language model provides inclusion judgments for each variable pair. These judgments are converted into edge-specific prior probabilities for the Bernoulli prior and used to derive hyperparameters for the Beta–Bernoulli and Stochastic Block priors. To make the approach accessible, we provide an R package, bgmElicit, with a Shiny app implementing the methodology. We illustrate the framework in two examples. First, a validation on a subset of a PTSD network from a meta-analysis compares OpenAI GPT models across several conditions. Second, an empirical analysis of 17 PTSD symptoms shows that elicited priors can modestly strengthen evidence regarding edge presence and absence. Taken together, this work is a proof of concept, complementary to expert judgment and prior sensitivity checks.
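To make the judgment-to-prior conversion step concrete, the following is a minimal sketch in base R. It is not the bgmElicit API: the simulated votes, the Laplace-style shrinkage, and the fixed concentration kappa are illustrative assumptions, showing one plausible way repeated binary inclusion judgments per variable pair could yield edge-specific Bernoulli probabilities and moment-matched Beta–Bernoulli hyperparameters.

```r
## Minimal sketch (hypothetical, not the bgmElicit API): turn repeated
## LLM inclusion votes into edge-specific Bernoulli prior probabilities
## and Beta-Bernoulli hyperparameters.

set.seed(1)
p <- 5                                  # number of variables (illustrative)
pairs <- t(combn(p, 2))                 # all variable pairs, one row each
n_queries <- 10                         # repeated LLM calls per pair

## Hypothetical 0/1 inclusion votes, one row per pair, one column per
## query; in practice these come from the LLM prompt-and-parse step.
votes <- matrix(rbinom(nrow(pairs) * n_queries, 1, 0.4),
                nrow = nrow(pairs))

## Edge-specific Bernoulli prior probabilities: the vote proportion,
## lightly shrunk toward 0.5 so no edge receives a degenerate 0/1 prior.
edge_prob <- (rowSums(votes) + 1) / (n_queries + 2)

## Beta-Bernoulli hyperparameters by moment matching: choose alpha, beta
## so the prior mean density equals the average elicited probability,
## with a fixed concentration kappa = alpha + beta (an assumption here).
kappa <- 4
alpha <- kappa * mean(edge_prob)
beta  <- kappa - alpha
c(alpha = alpha, beta = beta)
```

The moment-matching step is one simple design choice: it preserves the elicited average density while leaving the concentration, and hence how strongly the prior constrains density, as a tuning parameter.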