The Cognitive Foundations of Algorithmic Trust: Human Reasoning in High-Stakes AI Systems
Abstract
Artificial intelligence (AI) is increasingly used in high-stakes domains such as healthcare, law, and finance, yet its adoption depends not only on technical performance but also on human trust, ethical considerations, and perceived accountability. This mixed-methods study combines two survey datasets (N₁ = 69; N₂ = 44) with fifteen expert interviews to examine the cognitive, affective, and professional determinants of AI trust. Quantitative analyses indicate that prior AI experience, perceived transparency, and emotional comfort predict willingness to trust AI-assisted decision-making, while accountability is consistently attributed to humans. Qualitative analysis identifies seven recurring themes: emotional reasoning, meta-cognition, AI-augmented creativity and innovation, algorithms codifying injustice, radical skepticism, heuristics and cognitive biases, and algorithmic transparency. Experts emphasized that AI functions as a supportive tool rather than a substitute for professional judgment. Together, these findings indicate that trust in AI emerges through interactional, cognitive, and affective processes, underscoring the importance of explainable, ethically aligned, and human-centered AI design.