Power priors for latent variable mediation models under small sample sizes.


Abstract

Latent variable models typically require large sample sizes for acceptable efficiency and reliable convergence. Appropriate informative priors are often required to gainfully employ Bayesian analysis with small samples. Power priors are informative priors built on historical data, weighted to account for non-exchangeability with the current sample. Many extant power prior approaches are designed for manifest variable models and are not easily adapted to latent variable models, e.g., they may require integration over all model parameters. We examined two recent power prior approaches that are straightforward to adapt to these models: Mahalanobis weight (MW) priors, based on Golchi (2020), and univariate priors, based on Finch's (2024) application of Haddad et al. (2017) and Balcome et al. (2022). We applied these approaches, along with diffuse and weakly informative priors, to a latent variable mediation model under various sample sizes and non-exchangeability conditions. We compared their performance in terms of convergence, bias, efficiency, and credible interval coverage when estimating an indirect effect. Diffuse priors and the univariate approach led to poor convergence. The weakly informative and MW approaches both improved convergence and yielded reasonable estimates, but MW performed poorly under some non-exchangeable conditions. We discuss the issues with these approaches and directions for future research.
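For readers unfamiliar with the construction, the standard power prior (in the sense of Ibrahim and Chen) tempers the historical-data likelihood by a discounting weight; the MW and univariate approaches discussed above differ mainly in how that weight is determined. A generic sketch of the form, with $D_0$ the historical data, $\pi_0(\theta)$ an initial prior, and $a_0 \in [0,1]$ the discounting parameter:

```latex
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0} \, \pi_0(\theta),
\qquad a_0 \in [0, 1].
```

Here $a_0 = 0$ recovers the initial prior (historical data fully discounted) and $a_0 = 1$ treats historical and current data as fully exchangeable; intermediate values downweight the historical likelihood to reflect partial exchangeability.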