Legal Reasoning and EU AI Act Compliance in LIMEN-AI: Auditability through Interpretable Fuzzy Inference Traces
Abstract
The rapid ascent of artificial intelligence in the legal domain necessitates architectures that are both performant and compliant with emerging regulatory frameworks, most notably the European Union Artificial Intelligence Act. This paper presents LIMEN-AI (Łukasiewicz Interpretable Markov Engine for Neuralized AI), a Small Reasoning Model (SRM) engine based on Neuralized Markov Logic Networks that addresses the challenges of automated legal reasoning and auditability. Unlike Large Language Models, which operate on parametric patterns, LIMEN-AI provides explicit logical grounding through weighted first-order logic with Łukasiewicz fuzzy semantics, where every inference step generates a structured, human-readable trace. We explicitly map the engine’s technical mechanisms—rule weights, ϵ-regularized operators, and energy-based sampling—to the transparency (Article 13), human oversight (Article 14), and accuracy (Article 15) requirements of the AI Act. Through systematic empirical validation, we demonstrate: (1) zero-shot schema adaptation across legal domains, (2) knowledge base evolution through inductive learning (0 to 18 predicates without catastrophic forgetting), (3) document processing scalability (220 words/second on regulatory text), and (4) human override mechanisms with localized intervention effects. Our analysis proposes LIMEN-AI as a compliance-oriented framework designed to help satisfy regulatory requirements in high-risk AI systems. The open-source implementation (v0.2.5 on PyPI) enables community validation. While formal user studies and expanded benchmark evaluation remain future work, this validation demonstrates the feasibility of the neuro-symbolic approach for regulatory compliance in legal AI.
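To make the Łukasiewicz semantics mentioned above concrete, the sketch below implements the standard Łukasiewicz connectives (t-norm, implication, negation) with a simple ϵ-clamp away from the saturating boundaries — a common trick in differentiable fuzzy logic. The clamp shown here is an illustrative assumption; the abstract does not specify the exact form of LIMEN-AI's ϵ-regularization.

```python
# Standard Łukasiewicz fuzzy connectives with an illustrative
# epsilon-clamp; the paper's actual regularization may differ.

EPS = 1e-6  # keeps truth values off the hard boundaries 0 and 1

def clamp(x: float, eps: float = EPS) -> float:
    """Restrict a truth value to [eps, 1 - eps]."""
    return min(max(x, eps), 1.0 - eps)

def luk_and(a: float, b: float) -> float:
    """Łukasiewicz t-norm: max(0, a + b - 1), epsilon-clamped."""
    return clamp(a + b - 1.0)

def luk_implies(a: float, b: float) -> float:
    """Łukasiewicz implication: min(1, 1 - a + b), epsilon-clamped."""
    return clamp(1.0 - a + b)

def luk_not(a: float) -> float:
    """Łukasiewicz negation: 1 - a."""
    return 1.0 - a
```

With a rule weight attached to each formula, the degree to which a weighted clause is violated (e.g. `1 - luk_implies(premise, conclusion)`) can feed an energy term, which is the usual bridge from fuzzy connectives to the energy-based sampling the abstract describes.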