MiND: A Minimal Deterministic Middleware for Auditable LLM Interaction
Abstract
Large language models are increasingly used in workflows that require inspectability, traceability, and reproducible interaction procedures. However, direct use of monolithic chat interfaces often obscures how system-level instructions are applied, how requests are composed, and how interaction records can be retrieved for later inspection. This paper presents MiND, a minimal deterministic middleware for auditable LLM interaction. MiND does not attempt to make the underlying language model deterministic in a strict output sense; instead, it makes the middleware layer deterministic and inspectable by externalizing cycle configuration, explicitly composing system-level instructions, exposing a lightweight HTTP interface, and recording cycle-level data including input, configuration, output, and metadata. A reference implementation built with FastAPI provides endpoints for health checking, configuration inspection, chat execution, and retrieval of the latest interaction cycle. We argue that this design offers a low-friction approach to auditability, traceability, and reproducibility in LLM-mediated workflows, while remaining lightweight enough to serve as a minimal executable reference for future modular extensions.