Gaps in the Suitability of AI-Compliant Complementary Governance Frameworks for Low-Capacity Actors, and Structural Asymmetries in the Compliance Ecosystem: A Review


Abstract

The European Union Artificial Intelligence Act, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence, is the first comprehensive legal framework for artificial intelligence. It establishes a risk-based regulatory architecture that distributes obligations across diverse actors in the AI value chain. While its provisions emphasize proportionality and trustworthiness, significant asymmetries emerge between technologically advanced providers and low-capacity actors such as SMEs, municipalities, and public authorities. This article conducts a structured literature review of regulatory, ethical, and governance sources to examine how compliance responsibilities are operationalized across risk tiers and actor roles. In particular, it analyses the Assessment List for Trustworthy AI (ALTAI) as a soft-law ethics instrument, the EU AI Act as hard law, and comparative frameworks such as ISO/IEC 42001, the NIST AI Risk Management Framework, and the OECD AI Principles. The findings reveal gaps in enforceability, proportionality, and auditability that limit the accessibility of compliance for under-resourced organizations. To address these gaps, the article outlines the need for lightweight compliance frameworks that extend ALTAI's normative scaffolding into actionable and auditable processes. By mapping role-specific obligations against the structural capacities of actors, the analysis contributes to ongoing debates on operationalizing trustworthy and lawful AI in the European context.