Embedding Governance in AI Security Culture: From Trust Calibration to Accountable Decisions

Abstract

This study develops and validates a multidimensional framework of security culture tailored to AI-enabled decision-making in public administration and higher education. Drawing on a mixed-methods design combining qualitative elicitation, psychometric validation, structural equation modeling, and experimental testing, the research identifies five interrelated cultural dimensions: role clarity and accountability in human–AI collaboration, psychological safety to contest AI outputs, awareness of bias and corrective routines, disciplined use of explanations, and ethical climate. Results show that a strong AI-aware security culture enhances cyber situational awareness, which in turn improves trust calibration between human confidence and algorithmic reliability, ultimately leading to higher decision quality. Experimental evidence demonstrates that explanation-rich interfaces increase accuracy, efficiency, and calibration, while accountability cues strengthen policy adherence and reduce reliance errors. The study also introduces governance key performance indicators (KPIs) that embed cultural and behavioral insights into auditable management frameworks. The findings indicate that secure and effective reliance on AI requires not only technical integration but also a human-centered culture, transparent interfaces, ethical norms, and institutionalized governance mechanisms. The proposed framework advances both theory and practice by offering a validated, transferable tool for diagnosing, developing, and auditing security culture in the era of AI.