Public Trust in AI-Driven Science: The Moderating Role of Self-Reported AI Fluency in the European Union
Abstract
The increasing integration of Artificial Intelligence (AI) into scientific discovery poses a fundamental challenge to public trust and to established sources of epistemic authority. While formal education has long been considered a predictor of confidence in science, this study investigates whether a citizen's self-reported AI/digital fluency moderates this relationship, focusing on trust in AI-generated scientific discoveries (item qa7_1). Employing a multilevel moderated regression analysis of cross-sectional Eurobarometer data (N = 26,404) from 28 EU countries, we tested the hypothesis that fluency serves as an enabling condition for educational capital to translate into acceptance. The data reveal a strong positive main effect of AI fluency on trust (β = 0.373, p < .001). Critically, the main effect of formal education was not statistically significant (β ≈ −0.000, p = .554, 95% CI [−0.001, 0.000]), indicating no detectable linear relationship in this model after accounting for AI fluency. A statistically significant but practically negligible positive interaction was observed (β = 0.001, p = .038, 95% CI [0.00006, 0.001]). The model accounted for 24% of the variance in trust (R² = 0.24, p < .001). These findings suggest that, in the context of AI-driven science, AI fluency is a substantially stronger predictor of public trust than formal education. If this correlational pattern reflects causal mechanisms, policy efforts to build trust should prioritize practical digital literacy and technological self-efficacy alongside general education initiatives.
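The moderation structure described in the abstract (trust predicted by fluency, education, and their product term) can be illustrated with a minimal sketch. This is a simplified single-level OLS fit on synthetic data, not the multilevel model or the Eurobarometer dataset used in the study; the variable names (`fluency`, `education`, `trust`) and the true coefficient values are placeholders chosen only to mirror the reported effect sizes.

```python
import numpy as np

# Synthetic illustration of a moderated (interaction) regression.
# NOT the paper's multilevel model or data; names and coefficients
# are placeholders echoing the reported pattern of effects.
rng = np.random.default_rng(42)
n = 5000
fluency = rng.normal(0.0, 1.0, n)
education = rng.normal(0.0, 1.0, n)

# trust = b0 + b1*fluency + b2*education + b3*(fluency*education) + noise
true_beta = np.array([0.5, 0.373, 0.0, 0.001])
X = np.column_stack([np.ones(n), fluency, education, fluency * education])
trust = X @ true_beta + rng.normal(0.0, 0.3, n)

# Ordinary least squares via lstsq; beta_hat[3] is the interaction term
beta_hat, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(beta_hat)
```

In this design matrix the interaction column is simply the elementwise product of the two predictors, so its coefficient captures how the fluency slope changes with education; the study's multilevel version additionally allows country-level variation in the intercept.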