Predictive processing, Neurorepresentationalism and the trouble with Computational Functionalism
Abstract
Taking phenomenal consciousness as explanandum, we first circumscribe it using five hallmarks. Fundamentally, conscious experience corresponds to generating a best-guess representation of what the sensory inputs received by the wakeful brain are about. This process can be approached computationally with predictive processing (PP): the brain constantly generates inferential, predictive representations to ‘explain away’ its sensory inputs. Empirical evidence supports the brain’s use of PP, yet PP is not deemed sufficient to explain consciousness. We review two theories of consciousness that use PP as a building block: active inference theory and neurorepresentationalism. Computer simulations of PP and other computational models remain strikingly unconvincing as instances that might emulate consciousness. Considered in the context of computational functionalism, we argue that computational models suffer, by definition, from the ‘Problem of Numbers’: studying numerical operations lies within the purview of mathematics, but revealing all the qualitative properties characteristic of consciousness does not. We argue that low-level neural phenomena, e.g. spike trains, can be captured by mathematics, but that we need different descriptors and tools to study their emergent correspondence to high-level, conscious phenomena. These considerations also lead us to outline Indicators of Consciousness applicable to AIs and robots.
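To make the PP notion of ‘explaining away’ sensory inputs concrete, the following is a minimal illustrative sketch (not taken from the article, and not claimed to bear on the consciousness question): an internal estimate `mu` of a latent cause is revised by gradient descent until a top-down prediction cancels the bottom-up prediction error. The linear generative mapping `W`, the learning rate, and the variable names are all assumptions made for this toy example.

```python
import numpy as np

# Assumed generative mapping from a 2-d latent cause to a 4-d sensory input.
W = np.array([[1.0, 0.2],
              [0.3, 0.8],
              [0.5, -0.4],
              [-0.2, 0.6]])

true_latent = np.array([1.0, -0.5])
sensory_input = W @ true_latent   # the input the model must 'explain away'

mu = np.zeros(2)                  # initial best-guess latent representation
lr = 0.1                          # learning rate for inference
for _ in range(200):
    prediction = W @ mu                  # top-down prediction of the input
    error = sensory_input - prediction   # bottom-up prediction error
    mu += lr * (W.T @ error)             # revise the estimate to reduce error

residual = np.linalg.norm(sensory_input - W @ mu)
```

After the loop, `mu` has converged to the true latent cause and the residual prediction error is negligible, i.e. the input has been fully ‘explained away’ by the inferred representation. The article’s point, however, is that such numerical operations, while empirically well supported as a model of cortical processing, do not by themselves capture the qualitative properties of conscious experience.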