Artificial Intelligence Like Humans; Humans Like AI: Epistemology of Analogy and Our Expectations Beyond It
Abstract
In this paper – which has in its background a semi-joking smile – I propose an optimistic image of Artificial Intelligence (AI), considered in its plausible inherent development and _future_ as a _new cognitive entity, that is, a new thinking entity_. This thesis is the result of an _epistemological_ approach that emphasises the shared role of _analogy_ in both human cognition and AI's inferential response to its environment. In turn, the stages of analogies in physics highlight the contradictory beingness of AI; yet this contradictory beingness is not specific to AI alone, even though that of humans is of a different nature. In any case, AI's efficiency is precisely the result of its larger field of data and information available for analogy, and thus of its _much better_ answers to the problems of the world. But could this larger field not also be the basis of better human knowledge and values, as reasons-to-be for actions? Of course, the scope of judgements reflects "the input", the information as the object on which they are exercised. Accordingly, and contrary to the currently banal approach to AI as a copy of the human, AI can be a model for the treatment of humans by humans. So, as in billiards, the focus in this paper on the epistemic features and role of analogy in cognition is only an indirect way to support the meanings of human access to information. However, if the critical spirit, as the result of free access to information for all humans, highlights the problem of what marvellous things they can do on this basis, the development of AI on the foundation of humans' free analogy opens questions related to its existence alongside its creators.