Hybrid Multi-Agent Systems for Auditable AI Surveying
Abstract
Systematically acquiring expert knowledge remains a bottleneck. Large Language Models (LLMs) scale interaction but introduce a governance challenge: inconsistent coverage, topic drift, and user steering. We present MHAESTRO, a hybrid two-phase approach that aims for both scale and accountable control. In Phase 1, a knowledge-engineering tool, K-Eng, compiles expert input into a versioned decision tree. In Phase 2, a multi-agent elicitation tool, Elicitor, conducts a tightly structured conversational survey that strictly traverses this deterministic policy while using LLMs only for phrasing and summarisation. Internal structured control ensures the user is never the last point of control, reducing steering and keeping runs aligned with the mandated structure. We report a formative case study (N=8) that assessed extraction efficacy and user experience. Extraction fidelity was high: 75% of participants agreed the end-of-session summary accurately captured their input. Ease of use was also high (87.5%). However, participants reported high perceived intrusiveness, evidencing a fidelity–fluidity trade-off whereby governance mechanisms that enforce coverage can increase interactional strain. We argue this is not merely a design issue but a material accessibility and equity concern, as such strain may disproportionately affect people with high cognitive fatigue or neurodivergence. Our findings show that deterministic control can be combined with LLM generation to deliver rigorous, auditable surveying, but at a human-centred cost that must be actively managed. We outline design and governance implications for accessible, equitable AI-mediated conversational surveying and note the architectural potential for real-time safety monitoring agents.
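The two-phase split described in the abstract can be sketched minimally as follows. The compiled decision tree (the Phase 1 artefact) is modelled here as a plain dictionary of nodes, and `llm_phrase` is a hypothetical stub standing in for the generative layer; all names and data structures are illustrative assumptions, not the authors' implementation. The key property the sketch demonstrates is that generation affects only surface wording, while routing is fixed by the tree:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Node:
    """One node of the versioned decision tree compiled in Phase 1."""
    question: str                                   # canonical question text
    branches: dict = field(default_factory=dict)    # answer -> next node id (None ends)

def llm_phrase(question: str) -> str:
    # Stand-in for the LLM phrasing layer: it may reword the canonical
    # question for fluency, but it never chooses the next node.
    return f"Could you tell me: {question}"

def run_survey(tree: dict, root: str, answer_fn: Callable[[str, str], str]) -> list:
    """Strictly traverse the deterministic policy, logging an auditable transcript."""
    transcript, node_id = [], root
    while node_id is not None:
        node = tree[node_id]
        prompt = llm_phrase(node.question)          # generation: surface text only
        answer = answer_fn(node_id, prompt)         # user's (here, scripted) reply
        transcript.append((node.question, answer))  # log canonical question + answer
        node_id = node.branches.get(answer)         # routing: fixed tree, not the LLM
    return transcript

# Toy tree and a scripted respondent, for illustration only.
tree = {
    "q1": Node("Do you use automated tests?", {"yes": "q2", "no": None}),
    "q2": Node("Which test framework do you use?", {}),
}
scripted = {"q1": "yes", "q2": "pytest"}
log = run_survey(tree, "q1", lambda nid, prompt: scripted[nid])
```

Because every turn appends the canonical question and the raw answer to the transcript, each run is auditable against the versioned tree, which is the property the abstract attributes to the hybrid design.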