Chess as a Model of Collective Intelligence: analyzing a distributed form of chess with piece-wise agency

Abstract

Chess is a much-studied virtual world in which human and artificially intelligent players move pieces toward desired ends within established rules. The typical scenario involves top-down control: a single cognitive agent plans and executes moves, using the pieces as its embodiment within the chess universe. However, both biological and engineered agents are ultimately composed of parts with radically differing degrees of competency. The emerging field of Diverse Intelligence seeks to understand how coherent behavior and goal-directed navigation of problem spaces arise in compound agents from the interaction of their simpler components. We therefore explored the world of chess rules from the perspective of collective intelligence, characterizing a bottom-up version of this classic game with no central controller and no long-term planning. Instead, each individual piece has its own drives and makes decisions based on local, limited information and its own goals. We analyzed the behavior of this distributed agent when playing against Stockfish, a standard chess engine. We tested a few hand-designed individual policies, and then implemented an evolutionary algorithm to see how the individuals' behavioral genomes would evolve under selection applied to the chess-based fitness of the collective agent. Despite the minimal intelligence of each piece, the team of distributed chess pieces achieves an Elo rating of up to ~1050, equivalent to a novice human chess player. Compared to advanced chess engines such as Stockfish, the distributed pieces are also far more computationally efficient, selecting their next move approximately 7 times faster than Stockfish at a search depth of 8.
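The bottom-up scheme described above can be sketched in miniature. This is not the authors' code: the piece representation, the local features (distance to the enemy king, whether the destination square is attacked), and the genome weights are all illustrative assumptions. The point it shows is that each piece scores its own candidate moves from local information only, and the team plays the single highest-scoring proposal, with no central planner and no look-ahead.

```python
def local_score(move, board, genome):
    """Score a candidate move using only locally available information;
    no search tree, no shared plan (features here are assumptions)."""
    dst = move
    # Manhattan distance to the enemy king (an "approach" drive)...
    dist_to_king = abs(dst[0] - board["enemy_king"][0]) + abs(dst[1] - board["enemy_king"][1])
    # ...and a caution term: is the destination square attacked?
    attacked = 1.0 if dst in board["attacked_squares"] else 0.0
    return genome["w_approach"] * -dist_to_king + genome["w_caution"] * -attacked

def collective_move(pieces, board, genome):
    """Each piece proposes its best-scoring move; the collective plays
    the proposal with the highest individual score."""
    best = None
    for piece in pieces:
        for move in piece["legal_moves"]:
            s = local_score(move, board, genome)
            if best is None or s > best[0]:
                best = (s, piece["name"], move)
    return best

# Toy position: enemy king at (7, 4); square (5, 4) is attacked.
board = {"enemy_king": (7, 4), "attacked_squares": {(5, 4)}}
pieces = [
    {"name": "knight", "legal_moves": [(5, 4), (4, 3)]},
    {"name": "pawn",   "legal_moves": [(6, 4)]},
]
genome = {"w_approach": 1.0, "w_caution": 2.0}
print(collective_move(pieces, board, genome))  # → (-1.0, 'pawn', (6, 4))
```

Here the pawn's safe advance toward the king outscores the knight's riskier options, so the team plays it, despite no piece knowing any other piece's options.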
Investigating different local policies for the distributed agents, we found that policies promoting offense, such as swarming the opposing king and the opponent's highest-valued piece, moving less cautiously, and using a vision radius of 4 squares, yield the best performance. Comparisons between centralized and distributed versions of familiar minimal environments have the potential to shed light on the scaling of cognition and the requirements for collective intelligence in naturally evolved and engineered systems.
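The selection procedure the abstract describes, evolving behavioral genomes under the collective agent's chess fitness, can be sketched as a plain truncation-selection loop. The fitness function here is a toy surrogate (distance of the genome to a fixed target vector) standing in for match results against Stockfish, which are out of scope for a sketch; population size, mutation scale, and genome length are likewise assumptions.

```python
import random

random.seed(0)

TARGET = [1.0, -0.5, 0.25]  # stand-in optimum for the toy fitness

def fitness(genome):
    """In the study this would come from games against Stockfish;
    here, a surrogate: higher when the genome is nearer TARGET."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    """Gaussian perturbation of every policy weight."""
    return [g + random.gauss(0, sigma) for g in genome]

def evolve(pop_size=20, generations=50):
    population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # truncation selection
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
```

Because the top half of each generation survives unchanged, the best genome's fitness never decreases, so after 50 generations `best` sits close to the surrogate optimum.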
