The Pragmatic AGI Era: Recognizing the General Intelligence Unlocked by Transformers
Abstract
Is Artificial General Intelligence (AGI) still a distant dream? This paper argues that it is not, provided it is defined pragmatically and functionally. Setting aside the popular confusion with Artificial Superintelligence (ASI) and the unrealistic demand of replicating natural general intelligence (NGI) in all its details, a form of AGI is already emerging, fundamentally driven by the Transformer architecture. I contend that the common idea of AGI as a system capable of doing anything a human can do creates an unattainable goal that blinds us to real progress. Instead, I propose focusing on essential general cognitive capabilities: flexible learning, reasoning about complex problems, adaptation, and generalization across different information domains. I argue that advanced LLMs (such as GPT-4), based on Transformers, already exhibit functional patterns in these areas that begin to align with what is seen in average individual human intelligence, and notably, that the Transformer architecture's generality extends beyond text to other modalities. Why, then, does this AGI seem "hidden"? I attribute this partly to inflated expectations (the focus on ASI), the gradual nature of the technology's progress, measurement difficulties, the gap between artificial and natural intelligence, and benchmarks that are applied inconsistently compared with how non-human natural intelligence is assessed. I conclude that the paper "Attention Is All You Need" was the true architectural starting point, and that the field is now navigating the initial phases of this new era of pragmatic AGI, demanding a critical look at both its capabilities and its limitations.