The Trouble with GenAI: LLMs are still nowhere close to AGI, and they never will be
Abstract
The pursuit of Artificial General Intelligence (AGI) has been a prominent goal within the field of artificial intelligence. However, this paper argues that current Generative AI Large Language Models (GenAI LLMs), such as GPT-4 and o1, and later LLMs with similar architectures such as o3, are fundamentally incapable of achieving AGI. This argument is supported by examining the intrinsic limitations of LLMs, their operational paradigms, and the essential characteristics that define AGI.

We discuss a short experiment performed with the major LLMs, including the latest models released by the main AI providers: extracting and producing a list of URL links from a Word document. None of the LLMs succeeded, including the latest from OpenAI, Google, Claude, and Perplexity. Instead, they all became confused and extracted only a subset of the links; even when shown how to perform the task, they hallucinated links and never produced a complete list. We take this as a counterexample to claims made by many that, by the end of 2024, GenAI LLMs would already have reached AGI, or be close to it. In fact, we argue that LLMs will not reach AGI any time soon: without architectural changes that take them beyond being pure LLMs, they never will, and claims to the contrary are unrealistic.

The paper concludes by presenting possible directions toward AGI and, in particular, our views on how to proceed.
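To underline why the experiment is a fair test, note that the task the LLMs failed is mechanically trivial: a .docx file is a ZIP archive, and every hyperlink target is recorded in the relationships part `word/_rels/document.xml.rels`. The following is a minimal sketch (not the paper's own test harness; the function name and structure are our illustration) showing how a few lines of deterministic code produce the complete list that the LLMs could not:

```python
import zipfile
import xml.etree.ElementTree as ET

def extract_links(docx_file):
    """Return every hyperlink target stored in a .docx file.

    A .docx is a ZIP archive; hyperlink URLs are recorded in the
    relationships part word/_rels/document.xml.rels, one
    <Relationship> element per link, with the URL in its Target
    attribute and a Type attribute ending in "/hyperlink".
    """
    with zipfile.ZipFile(docx_file) as z:
        with z.open("word/_rels/document.xml.rels") as f:
            root = ET.parse(f).getroot()
    return [
        rel.get("Target")
        for rel in root
        if rel.get("Type", "").endswith("/hyperlink")
    ]
```

Because the relationships part enumerates every link exactly once, this approach cannot "extract only a subset" or hallucinate entries, which is precisely the contrast the experiment draws against the LLMs' behavior.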