Social Learning Dynamics in Multi-Agent Systems: A Framework for Collective Knowledge Building
Abstract
Complex and dynamic environments often require collective intelligence, in which many autonomous agents cooperate to find solutions and maximize group utility. A critical challenge in multi-agent systems (MAS) is achieving emergent cooperation and efficient knowledge diffusion when agents have only limited local information or face inherent social dilemmas. This paper presents a novel Social Learning Framework that enables Collective Knowledge Building in decentralized multi-agent systems, addressing the limitations of purely self-interested reinforcement learning methods. While remaining autonomous, agents in this framework use social information to improve decision-making and accelerate learning. Rather than relying solely on independent trial-and-error, each agent observes the strategies and performance outcomes of its peers and builds internal models of rewarding behaviors in the environment. A selective imitation mechanism lets agents adopt high-performing policies demonstrated by others, improving their capabilities faster while avoiding inefficient learning trajectories. In addition, a shared knowledge aggregation process collects validated, effective local experiences into a collective knowledge base or common policy. This shared repository evolves continuously as the system progresses, allowing agents to adapt based not only on their direct interactions with the environment but also on the group's emerging collective intelligence. By promoting cooperation and strategic imitation, the proposed approach integrates social learning principles into decentralized MAS and gives rise to robust, scalable, and adaptive collective intelligence.
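As a rough illustration, the selective imitation and shared knowledge aggregation mechanisms described above can be sketched as follows. This is a minimal sketch under assumed design choices — the class names, the tabular value representation, the imitation threshold, and the 50/50 blending weight are all illustrative assumptions, not the framework's actual implementation:

```python
import random


class SocialAgent:
    """An agent that mixes independent trial-and-error learning with
    selective imitation of better-performing peers (illustrative only)."""

    def __init__(self, agent_id, actions, imitation_threshold=0.1):
        self.agent_id = agent_id
        self.actions = actions
        self.q = {a: 0.0 for a in actions}  # local value estimates
        self.imitation_threshold = imitation_threshold

    def act(self, epsilon=0.1):
        # epsilon-greedy choice over the agent's own value estimates
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.q, key=self.q.get)

    def learn(self, action, reward, lr=0.5):
        # independent trial-and-error update toward the observed reward
        self.q[action] += lr * (reward - self.q[action])

    def observe_peer(self, peer_q, peer_return, own_return):
        # selective imitation: blend in a peer's estimates only when the
        # peer clearly outperforms this agent's own recent return
        if peer_return > own_return + self.imitation_threshold:
            for a in self.actions:
                self.q[a] = 0.5 * self.q[a] + 0.5 * peer_q[a]


def aggregate_knowledge(agents):
    """Shared knowledge aggregation: average the agents' local estimates
    into a collective policy table that any agent may consult."""
    shared = {a: 0.0 for a in agents[0].actions}
    for agent in agents:
        for a, v in agent.q.items():
            shared[a] += v / len(agents)
    return shared
```

For example, an agent that has never tried an action can still come to prefer it after observing a successful peer, and the aggregated table then carries that preference to the rest of the group — the intended effect of replacing purely independent learning with socially informed learning.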
Empirical evaluation across a range of cooperative and sequential social dilemma environments shows that the proposed framework significantly improves convergence toward optimal collective performance and yields greater long-term stability than purely independent or centralized learning. This work provides a robust, scalable basis for engineering AI societies that can construct, maintain, and leverage collective knowledge to solve complex real-world problems.