The Hard Problems of AI
Abstract
There is currently an enlivened debate regarding the possibility of AI consciousness and/or sentience, as well as arguably more partial capacities we associate with consciousness, such as intelligence or creativity. The debate itself can be traced back to the inception of computing, but its current revitalisation is powered by recent advancements in the field of artificial intelligence, which have seen a swift increase in AI systems' capacity to act in seemingly human-like ways. I argue that the debate is methodologically flawed, as it approaches the question of AI consciousness, intelligence, etc. as a decidable question dealing with matters of fact. Those engaged in the debate are driven by a desire to find a suitable definition of, e.g., consciousness that would allow them to definitively settle the question of whether a particular AI system is conscious. However, drawing on Ludwig Wittgenstein's later philosophy, I argue that no such definition exists, because the predicates in question are inherently vague (meaning that any verdicts they yield are bound to be vague, too). Moreover, the impression that we might be dealing with directly unobservable matters of fact is itself a flawed generalisation of the practice of observation reports to the practice of sensation reports[1]. In reality, third-person attributions of consciousness (sentience, agency, etc.) are independent of a stipulated internal process happening inside those persons (or systems, in the case of AI). Therefore, the only sense in which the question of, e.g., AI consciousness can be meaningfully asked is a pragmatic one: what is it best to _think of_ such systems as? But this question is subject to sociological and psychological factors, not conceptual ones. Therefore, it cannot be decided by the aforementioned strategies.