Assurance-Centered Agentic AIOps (ACAA)

Abstract

Industrial cyber operations in local-cloud IIoT/OT environments face a persistent systems-level gap: analytical components for detection, triage, and response exist in isolation, yet decision quality degrades at handoff boundaries where competing objectives -- security assurance, operational continuity, governance compliance, and human accountability -- must be reconciled under uncertainty. This thesis addresses that gap through the design, implementation, and proof-of-architecture evaluation of Assurance-Centered Agentic AIOps (ACAA), a layered decision-support architecture that composes statistical inference, machine learning, deep learning, generative AI, and agentic orchestration under deterministic policy controls, explicit uncertainty handling, and human-in-the-loop authority. The architecture is developed across seven progressive project chapters. A reproducible data workflow (P1) establishes ingestion contracts and provenance discipline over OpenRCA telecom telemetry. Statistical inference (P2) extracts governance-relevant structure from heterogeneous cyber observability data. Leakage-aware machine learning (P3) produces vulnerability prioritization priors over NVD/CISA KEV metadata under extreme class imbalance. Deep learning (P4) applies sequence and representation models to LANL cybersecurity telemetry with controlled ablations and guardrails. A generative RCA layer (P5) synthesizes bounded, confidence-labeled narrative hypotheses for analyst scaffolding. Policy-gated multi-agent orchestration (P6) operationalizes deterministic safety boundaries around adaptive reasoning. Finally, an integrative synthesis (P7) introduces parallel contested orchestration, where dual-branch reasoning (assurance versus continuity) is adjudicated by a meta-orchestrator with explicit human-in-the-loop escalation. The central finding is that decision quality in safety-critical cyber AI systems is a property of composed workflows rather than any single model. Contested orchestration produced meaningful branch differentiation, policy-gate invariants held without violation, and governance traceability was maintained end-to-end across all analytical layers.
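To make the contested-orchestration idea concrete, the following is a minimal sketch of how a deterministic meta-adjudicator over two branches (assurance versus continuity) with human-in-the-loop escalation might look. All names (`BranchVerdict`, `adjudicate`, the action strings, the `agreement_margin` threshold) are illustrative assumptions, not the thesis's actual implementation, which is not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BranchVerdict:
    """Hypothetical output of one reasoning branch."""
    action: str        # proposed response, e.g. "isolate_host" or "monitor_only"
    confidence: float  # branch's self-reported confidence in [0, 1]

def adjudicate(assurance: BranchVerdict,
               continuity: BranchVerdict,
               agreement_margin: float = 0.2) -> str:
    """Deterministic meta-adjudication over two contested branches.

    Returns the shared action when both branches converge; escalates to a
    human operator when they disagree and neither is decisively more
    confident; otherwise sides with the more confident branch.
    """
    if assurance.action == continuity.action:
        return assurance.action
    gap = abs(assurance.confidence - continuity.confidence)
    if gap < agreement_margin:
        return "escalate_to_human"  # human-in-the-loop authority on close calls
    winner = assurance if assurance.confidence > continuity.confidence else continuity
    return winner.action
```

The point of the sketch is the architectural invariant, not the scoring: adjudication is a pure, deterministic function of branch outputs, so the policy-gate behavior is auditable and the escalation path to a human is explicit rather than emergent.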
