Synthetic Policymaking: Using LLM-Generated Personas to Predict Public Views on Existing and Prospective Policies in non-WEIRD Countries
Abstract
Large language models (LLMs) are increasingly used to create synthetic research participants. However, the ability of this technique to predict public views on concrete policy proposals, particularly in non-WEIRD (Western, educated, industrialized, rich, and democratic) contexts, remains underexplored. In this research, we examined whether synthetic participants can reproduce human responses to both existing and prospective public policies across multiple domains in three non-WEIRD countries: the United Arab Emirates, the Kingdom of Saudi Arabia, and Qatar. We generated synthetic participants using several LLMs, including GPT-4o, GPT-5, and DeepSeek, systematically varying temperature settings and the richness of prompting information, from generic country-level descriptions to detailed demographic and qualitative profiles. Performance was evaluated using complementary indicators: directional agreement, aggregate correlations across policies, sensitivity to different policy instruments, and distributional similarity. Across analyses, synthetic participants closely matched human responses in both the direction and the relative ordering of policy judgments. GPT-4o, particularly at temperatures 0.5 and 1.0, consistently outperformed the other models. Notably, even generic prompting achieved strong performance in predicting policy support. However, all models produced narrower response distributions than humans, consistent with previous studies. Overall, our findings suggest that synthetic participants can provide scalable insights into human policy views in underrepresented contexts.