Trust and truth in the post-truth era: cognitive and attitudinal predictors of confidence in AI-generated content


Abstract

Trust in content generated by artificial intelligence (AI) has emerged as a central concern in the post-truth era, in which the boundaries between authenticity and fabrication are increasingly blurred. This study investigates how cognitive and attitudinal factors shape individuals' trust in AI-generated content. Using a quantitative field design, data were collected from 232 university students, of whom 207 provided complete responses to the main scales. Hierarchical regression analysis was conducted to examine how generative AI literacy, user attitudes, and critical inquiry predict perceived trust in reality. The results show that generative AI literacy and frequency of use are positively associated with ethical awareness and manipulation recognition, indicating that cognitive familiarity enhances critical sensitivity. Furthermore, a negative attitude toward AI significantly decreases trust in reality (β = −.45, p < .001), while a positive attitude (β = .31, p = .027) and critical inquiry (β = −.35, p < .001) exert independent, meaningful effects. The model explains 39% of the variance in trust (R² = .39), with no evidence of multicollinearity (all VIFs < 5). These findings suggest that trust in AI-generated content is not merely an outcome of exposure or technological proficiency but a product of balanced cognitive literacy and attitudinal orientation. The study contributes to theoretical debates on digital trust by demonstrating how critical inquiry moderates the relationship between generative AI engagement and perceptions of truth in mediated environments.