Can AI Ever Become Conscious?

Abstract

Almost 70 years ago, Alan Turing predicted that within half a century, computers would possess processing capabilities sufficient to fool interrogators into believing they were communicating with a human. While his prediction materialized slightly later than anticipated, he also foresaw a critical limitation: machines might never become the subject of their own thoughts, suggesting that computers may never achieve self-awareness. Recent advancements in AI, however, have reignited interest in the concept of consciousness, particularly in discussions about the potential existential risks posed by AI. At the heart of this debate lies the question of whether computers can achieve consciousness or develop a sense of agency, and the profound implications if they do. Whether computers can currently be considered conscious or aware, even to a limited extent, depends largely on the framework used to define awareness and consciousness. For instance, Integrated Information Theory (IIT) equates consciousness with a system's capacity to integrate information, while the Higher-Order Thought (HOT) theory incorporates elements of self-awareness and intentionality into its definition. This manuscript reviews and critically compares major theories of consciousness, with a particular emphasis on awareness, attention, and the sense of self. By delineating the distinctions between artificial and natural intelligence, it explores whether advancements in AI technologies, such as machine learning and neural networks, could enable AI to achieve some degree of consciousness or develop a sense of agency.