Operationally Audit-Ready Dual-Flow Compliance Pipelines for Conformance Matrices: An Ontology-Based Metamodel with GDPR and EU AI Act Instantiation
Abstract
Artificial intelligence (AI) risk systems deployed in high-stakes decision-support settings are increasingly expected to be operationally audit-ready: they must demonstrate, through verifiable evidence, that applicable governance requirements were implemented, monitored, and maintained during real-world operation. In practice, audit readiness often breaks down not because documentation is absent, but because trace links between normative requirements, operational controls, pipeline artefacts, and evidence items are fragmented, inconsistent, and costly to verify. To address this gap, this paper establishes a foundation for audit-ready conformance matrices grounded in a dual-flow, layered architecture that couples an upstream, conventional technical pipeline with a downstream compliance pipeline engineered to operationalise governance requirements as explicit controls, evidence specifications, gates, decision records, corrective actions, and accountability hooks. The approach delivers five core artefacts: (i) an ontology-aligned interoperability layer leveraging the Data Privacy Vocabulary (DPV) and the AI Risk Ontology (AIRO); (ii) a conformance-matrix metamodel defining the entities and relations required to represent requirements, controls, artefacts, and evidence; (iii) deterministic mapping rules that bind controls to concrete operational artefacts and run-scoped evidence items; (iv) a case-by-case instantiation workflow producing distinct matrix instances for specific pipelines and contexts; and (v) a multi-regime alignment mechanism that preserves a stable trace structure across regimes. While the framework is multi-regime by design, the paper provides a primary instantiation for the General Data Protection Regulation (GDPR) and the European Union Artificial Intelligence Act (EU AI Act). Conceptual validation is provided through competency questions, consistency checks, and an illustrative instantiation over an AI risk pipeline. Overall, the work reframes Compliance-by-Design (CbD) as an operational property supported by reusable, auditable trace structures and run-level evidence bundles, rather than as retroactive reporting.
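To make the described trace structure concrete, the sketch below gives one possible reading, in Python, of the conformance-matrix metamodel and its deterministic mapping rules: requirements drawn from a regime (e.g. GDPR or the EU AI Act) are implemented by controls, each control carries an evidence specification, and run-scoped evidence items point back to pipeline artefacts. This is a minimal illustration under stated assumptions; all class and field names are hypothetical and do not reproduce the paper's normative vocabulary, DPV, or AIRO terms.

```python
# Illustrative sketch only: entity and field names are assumptions chosen to
# mirror the abstract's vocabulary (requirements, controls, artefacts,
# evidence items), not the paper's actual metamodel.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Requirement:
    """A normative requirement taken from a regime (e.g. a GDPR or EU AI Act clause)."""
    req_id: str
    regime: str   # e.g. "GDPR" or "EU AI Act"
    clause: str


@dataclass
class Control:
    """An operational control implementing one or more requirements."""
    control_id: str
    implements: List[str]   # requirement IDs this control covers
    evidence_spec: str      # what evidence a run is expected to produce


@dataclass
class EvidenceItem:
    """A run-scoped evidence item linked to a concrete pipeline artefact."""
    evidence_id: str
    control_id: str
    artefact_uri: str       # pipeline artefact the evidence refers to
    run_id: str


@dataclass
class ConformanceMatrix:
    """One case-specific matrix instance binding requirements, controls, and evidence."""
    pipeline_id: str
    requirements: List[Requirement] = field(default_factory=list)
    controls: List[Control] = field(default_factory=list)
    evidence: List[EvidenceItem] = field(default_factory=list)

    def trace(self, req_id: str) -> List[EvidenceItem]:
        """Deterministic trace: requirement -> implementing controls -> run-scoped evidence."""
        control_ids = {c.control_id for c in self.controls if req_id in c.implements}
        return [e for e in self.evidence if e.control_id in control_ids]
```

Under this reading, multi-regime alignment would add further Requirement entries (from another regime) that reuse the same controls and evidence items, which is one way the trace structure could remain stable across regimes as the abstract describes.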