A Caveat Regarding the Unfolding Argument: Implications of Plasticity for Computational Theories of Consciousness


Abstract

The unfolding argument in the neuroscience of consciousness posits that causal structure cannot account for consciousness because any recurrent neural network (RNN) can be “unfolded” into a functionally equivalent feedforward neural network (FNN) with identical input-output behavior. Subsequent debate has focused on dynamical properties and on critiques from the philosophy of science. We examine a novel caveat to the unfolding argument for RNN systems with rapid plasticity in their connection weights. Through rigorous mathematical proofs, we demonstrate that plasticity negates the functional equivalence between an RNN and its unfolded FNN. Our proofs address history-dependent plasticity, dynamical systems analysis, information-theoretic considerations, perturbational stability, complexity growth, and resource limitations. We show that neuronal systems possessing properties such as plasticity, history-dependence, and complex temporal information encoding have features that cannot be captured by any static FNN. Our findings delineate limitations of the unfolding argument that apply if consciousness arises from temporally extended dynamic neural processes rather than static input-output mappings. This work provides new constraints for theories of consciousness, with broader implications for computational neuroscience and the philosophy of mind.
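
To make the caveat concrete, the following minimal Python sketch (illustrative, not taken from the paper; the network sizes, the Hebbian update rule, and the learning rate eta are assumptions) first unfolds a fixed-weight RNN into an exactly equivalent stack of feedforward layers, then shows that rapid activity-dependent plasticity makes the weights applied at each step depend on the input history, so no single static FNN reproduces the mapping for every input sequence.

import numpy as np

rng = np.random.default_rng(0)
W_fixed = 0.5 * rng.normal(size=(4, 4))  # recurrent weights (fixed case)
U = 0.5 * rng.normal(size=(4, 3))        # input weights

def rnn_fixed(xs):
    # Fixed-weight RNN: every time step applies the same W_fixed.
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_fixed @ h + U @ x)
    return h

def unfolded_fnn(xs):
    # "Unfolded" network: one feedforward layer per time step, each a frozen
    # copy of (W_fixed, U); input-output equivalent by construction.
    layers = [(W_fixed.copy(), U.copy()) for _ in xs]
    h = np.zeros(4)
    for (W_t, U_t), x in zip(layers, xs):
        h = np.tanh(W_t @ h + U_t @ x)
    return h

def rnn_plastic(xs, eta=0.1):
    # RNN with rapid Hebbian-style plasticity (assumed rule: dW ~ post x pre).
    # The weights in effect at step t now depend on the entire input history.
    h, W = np.zeros(4), W_fixed.copy()
    weights_per_step = []
    for x in xs:
        weights_per_step.append(W.copy())
        h_new = np.tanh(W @ h + U @ x)
        W = W + eta * np.outer(h_new, h)  # history-dependent weight change
        h = h_new
    return h, weights_per_step

e = np.eye(3)
xs_a = [e[0], e[1], e[2]]  # two sequences that share the same final input
xs_b = [e[1], e[0], e[2]]  # but differ in their earlier history

# Fixed weights: unfolding is exact, as the unfolding argument requires.
assert np.allclose(rnn_fixed(xs_a), unfolded_fnn(xs_a))

# Rapid plasticity: the weights in effect at the final step differ across
# histories, so no pre-specified static layer stack matches both sequences.
_, Ws_a = rnn_plastic(xs_a)
_, Ws_b = rnn_plastic(xs_b)
print(np.max(np.abs(Ws_a[2] - Ws_b[2])))  # nonzero: step-3 weights diverge

In the fixed-weight case the layer copies can be written down before any input arrives; in the plastic case the step-t weights are a function of the input prefix, which is the history-dependence that the proofs summarized above formalize.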
