An AI-Enabled Zero‑Trust Framework for Security Validation Platforms
Abstract
The rapid adoption of autonomous and agentic Artificial Intelligence (AI) systems has intensified the need for rigorous, transparent, and continuously verifiable security controls. This paper presents a unified framework for a Zero‑Trust AI Security Validation Platform that integrates pillar‑wise machine‑learning models, risk‑aware trust scoring, strict policy enforcement, and a tamper‑evident hash‑chain ledger. The framework models AI behavior across five core Zero‑Trust pillars—Identity, Device, Network, Application, and Data—using a hybrid approach that combines supervised Random Forest classifiers with an unsupervised Isolation Forest for anomaly detection. These pillar‑specific risks are fused into a dynamic trust score, which is evaluated against sensitivity‑aware thresholds and non‑negotiable Zero‑Trust policy gates to produce transparent allow/deny decisions. Each decision, along with its full contextual reasoning, is immutably recorded in a blockchain‑like ledger, enabling traceability, auditability, and detection of model drift. Although demonstrated using synthetic telemetry, the architecture is directly applicable to enterprise AI environments and critical infrastructure systems, where auditability, continuous validation, and tamper‑evident logging are essential. The results show that the framework achieves high detection accuracy, perfect recall for attack scenarios, and strong alignment with emerging AI governance and Zero‑Trust security standards. This work provides a practical, extensible foundation for validating the safety, integrity, and trustworthiness of AI systems operating in high‑risk environments.
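The decision pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the pillar names come from the paper, but the fusion weights, the `threshold` and `hard_gate` parameters, and all function names are illustrative assumptions. The per-pillar risk scores are taken as given here, standing in for the outputs of the Random Forest / Isolation Forest models.

```python
import hashlib
import json

# The five Zero-Trust pillars named in the paper.
PILLARS = ["identity", "device", "network", "application", "data"]

def trust_score(pillar_risks, weights=None):
    """Fuse per-pillar risks (each in [0, 1]) into one trust score.

    Equal weights are an assumption; the paper's fusion may differ.
    """
    weights = weights or {p: 1.0 / len(PILLARS) for p in PILLARS}
    risk = sum(weights[p] * pillar_risks[p] for p in PILLARS)
    return 1.0 - risk

def decide(pillar_risks, threshold=0.7, hard_gate=0.9):
    """Sensitivity-aware threshold plus a non-negotiable policy gate:
    any single pillar risk at or above `hard_gate` denies outright,
    regardless of the fused score (hypothetical cutoff values)."""
    score = trust_score(pillar_risks)
    if any(pillar_risks[p] >= hard_gate for p in PILLARS):
        return "deny", score
    return ("allow" if score >= threshold else "deny"), score

def append_record(ledger, decision, score, context):
    """Tamper-evident, blockchain-like record: each entry embeds the
    hash of the previous entry, so any retroactive edit breaks the chain."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"decision": decision, "score": round(score, 4),
            "context": context, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)
    return body

# Synthetic telemetry: a benign baseline vs. a compromised-device scenario.
ledger = []
benign = {p: 0.05 for p in PILLARS}
attack = dict(benign, device=0.95)
for risks in (benign, attack):
    decision, score = decide(risks)
    append_record(ledger, decision, score, {"risks": risks})
```

Here the benign telemetry passes both the fused threshold and the policy gates, while the compromised-device case is denied by the hard gate alone; both outcomes, with their contextual reasoning, land in the hash-chained ledger for later audit.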