Structure Complexity Entropy: A Principle for Structural Intelligence in AI
Abstract
Human intelligence excels at decomposing complex information into meaningful hierarchical structures, yet this capability remains a fundamental challenge for artificial intelligence. Current approaches, from manual curation to automated clustering, lack a principled way to quantify and autonomously discover such structures, often producing rigid binaries or semantically incoherent groupings. Here, we introduce Structural Complexity Entropy (SCEntropy), a novel metric that quantifies the internal disorder of an information set by measuring the heterogeneity of all its pairwise relationships. SCEntropy provides AI with a foundational principle: a low value signifies a coherent concept, whereas a high value signals the need for decomposition. Leveraging this, we develop SCEntropy-driven Hierarchical Clustering (SHC), an algorithm that uses a single complexity threshold to autonomously construct multi-branch, semantically coherent hierarchies from the bottom up. We validate this principle across two core domains. In visual concept discovery, SHC not only recovers known taxonomies in image datasets such as CIFAR-100 but also discovers semantically coherent, human-aligned super-categories, demonstrating autonomous knowledge structuring. In natural language generation, we show that SCEntropy-derived hierarchies serve as a scaffold for coherent reasoning: by structurally constraining large language models, we enhance thematic focus and logical flow in multi-turn dialogues and mitigate semantic drift. Our work establishes SCEntropy as a universal framework for structural machine intelligence, enabling AI not only to discover how the world is organized but also to discipline its own internal processes, paving the way for more autonomous and interpretable systems.
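The abstract does not specify the SCEntropy formula or SHC's merge rule, so the sketch below is a minimal, hypothetical instantiation of the stated principle: SCEntropy is taken to be the Shannon entropy of a histogram of all pairwise distances in a set, and SHC is approximated as a bottom-up loop that greedily merges clusters while the merged set's SCEntropy stays under a single complexity threshold. The names `sc_entropy` and `shc`, the binning scheme, and the example points are all illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch of the SCEntropy principle. The exact metric and
# merge rule are assumptions for illustration; the paper's definitions
# may differ.
import math
from itertools import combinations

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sc_entropy(points, bins=8):
    """Disorder of a set, measured over the heterogeneity of all
    pairwise distances (assumed form: histogram Shannon entropy)."""
    dists = [euclidean(a, b) for a, b in combinations(points, 2)]
    lo, hi = min(dists), max(dists)
    if hi - lo < 1e-9 * max(hi, 1.0):
        return 0.0  # all pairwise relations (nearly) identical: no disorder
    counts = [0] * bins
    for d in dists:
        counts[min(int((d - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(dists)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def shc(points, threshold):
    """Bottom-up clustering driven by a single complexity threshold:
    greedily merge the pair of clusters whose union has the lowest
    SCEntropy, as long as it stays under the threshold. (Hierarchy
    bookkeeping is omitted; only the final partition is returned.)"""
    clusters = [[p] for p in points]
    while True:
        best = None
        for i, j in combinations(range(len(clusters)), 2):
            e = sc_entropy(clusters[i] + clusters[j])
            if e <= threshold and (best is None or e < best[0]):
                best = (e, i, j)
        if best is None:
            return clusters  # every remaining merge would exceed the threshold
        _, i, j = best
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)

# A near-equilateral triangle is internally homogeneous (low SCEntropy);
# adding a distant outlier raises the disorder of the whole set.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
mixed = triangle + [(10.0, 0.0)]
assert sc_entropy(triangle) < sc_entropy(mixed)
assert len(shc(mixed, threshold=2.0)) == 1  # lenient: one coherent concept
assert len(shc(mixed, threshold=0.5)) == 2  # strict: full merge is blocked
```

The threshold plays the role the abstract describes: it is the only parameter, and raising it declares more heterogeneous sets "coherent" while lowering it forces finer decomposition. Note that in this toy form any two-element set has zero entropy, so early merges are arbitrary; a faithful implementation would need the paper's actual pairwise-relationship measure.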