Adaptive Sparse Dense Communication Network for Multi-Agent LLMs
Abstract
Multi-agent LLM systems that communicate through natural-language tokens face bandwidth bottlenecks and lack end-to-end differentiability. Dense vector communication alleviates this, but existing methods are inflexible, relying on fixed topologies and static transformations. We propose the Adaptive Sparse Dense Communication Network (ASDNet), a novel framework for efficient, flexible, and context-aware dense communication. ASDNet equips each agent with a dynamic Communication Hub that selects a sparse set of communication partners and adaptively generates the dense vector transformations applied to their messages. The architecture is end-to-end differentiable, enabling joint optimization of communication and inference. Experiments with an ASDNet variant built on a foundational LLM demonstrate consistent gains over state-of-the-art dense-communication baselines and other open-source LLMs across diverse benchmarks, with efficient training. Ablation studies confirm that both dynamic target selection and adaptive transformations are critical. Further analyses highlight ASDNet's improved efficiency, superior qualitative outputs, and robust low-data performance, demonstrating its potential for scalable multi-agent collaboration.
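To make the Communication Hub idea concrete, the sketch below is a toy, dependency-free illustration of the two mechanisms named in the abstract: sparse partner selection and an adaptively generated transformation of dense state vectors. Every name, the scoring rule, and the gating math here are our own assumptions for illustration, not the paper's formulation; a real implementation would use learned weights and a differentiable relaxation of the hard top-k so the whole pipeline stays end-to-end trainable.

```python
# Toy sketch of a per-agent "Communication Hub": (i) score all other
# agents, (ii) keep only the top-k partners (sparse selection), and
# (iii) apply a context-dependent transformation to each selected
# partner's dense state vector before aggregation.
# All details are illustrative assumptions, not ASDNet's actual math.

import math


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def communication_hub(states, agent, k=2):
    """Return the aggregated dense message received by `agent`."""
    query = states[agent]
    # (i) score every other agent against the receiving agent's state
    scores = {j: dot(query, s) for j, s in enumerate(states) if j != agent}
    # (ii) sparse partner selection: keep the k highest-scoring agents
    # (a hard top-k; a trainable system would use a soft relaxation)
    partners = sorted(scores, key=scores.get, reverse=True)[:k]
    # softmax over the selected partners' scores
    exps = {j: math.exp(scores[j]) for j in partners}
    z = sum(exps.values())
    # (iii) "adaptive transformation": here a simple sigmoid gate derived
    # from the receiver's own state, standing in for a learned,
    # context-conditioned transformation generator
    gate = [1.0 / (1.0 + math.exp(-q)) for q in query]
    dim = len(query)
    message = [0.0] * dim
    for j in partners:
        w = exps[j] / z
        for d in range(dim):
            message[d] += w * gate[d] * states[j][d]
    return message


# Four agents, each holding a 3-dimensional dense state vector
states = [[1.0, 0.0, 0.5], [0.2, 1.0, 0.1], [0.9, 0.1, 0.4], [0.0, 0.3, 1.0]]
msg = communication_hub(states, agent=0, k=2)
print([round(v, 3) for v in msg])
```

Because the message is built from continuous operations (softmax, gating, weighted sums), gradients could flow through everything except the hard top-k cut, which is exactly the step the abstract's "dynamic target selection" would need to relax for joint training.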