AI Meeting Assistants: Summarization, Autonomous Participation, and Governance
Abstract
AI-driven meeting assistants have evolved from passive transcription tools into active participants that can join meetings, take notes, and act as proxies for absent users. Commercial platforms now offer AI note-takers (e.g., Teams Copilot, Zoom AI Companion, Google Meet Gemini) that automatically transcribe and summarize meetings. Research systems leverage large language models (LLMs) to generate abstractive summaries from meeting transcripts, with recent studies demonstrating that both proprietary and open-source models can produce high-quality meeting summaries. However, this technical progress has outpaced governance: issues of consent, identity, data protection, and accountability remain under-addressed. In this review, we synthesize recent literature across NLP, HCI, and industry sources to classify AI meeting agents, describe a processing pipeline, and examine legal constraints (GDPR, EU AI Act). We propose a maturity model (L0–L5) for meeting assistant capabilities and a governance-by-design framework, defined as the systematic integration of governance controls (consent, transparency, accountability, audit) into every stage of an AI system's architecture, for enterprise deployment. Our analysis identifies a growing "capability–governance gap": summarization quality is improving via LLMs, but organizational controls on autonomy and privacy lag behind. We identify open challenges in factuality, user trust, and cross-cultural consent, and argue that human oversight and clear policies are essential for the ethical, lawful deployment of autonomous meeting AI.