Statistical approximation is not general intelligence

Abstract

Rumors that humanity has already achieved artificial general intelligence (AGI) have been greatly exaggerated. Such rumors are often fueled by recent advances in large language models (LLMs), whose outputs show strong benchmark performance, high fluency across domains, and, in some cases, correct solutions to open problems in mathematics. These developments are frequently taken as evidence that general intelligence has been achieved. Such interpretations rest on a fundamental confusion between performance on individual, often well-known tasks and intelligence writ large: task-level performance, even when impressive, is not sufficient evidence of general intelligence. Here, we argue that recent claims of achieving AGI rest on a conceptual error, namely conflating increasingly sophisticated statistical approximation with intelligence itself. We also show that recent claims of success on AGI hinge on redefining what AGI has historically meant.