How To Build Conscious Machines

Abstract

How to build a conscious machine? For that matter, what is consciousness? Why is my world made of qualia like the colour red or the smell of coffee? Are these fundamental building blocks of reality, or can I break them down into something more basic? If so, that suggests qualia are like an abstraction layer in a computer: a simplification.

Some say simplicity is the key to intelligence. Systems which prefer simpler models need fewer resources to adapt; they "generalise" better. Yet simplicity is a property of form, while generalisation is a property of function. Any correlation between the two depends on interpretation. In theory there could be no correlation, and yet in practice there is. Why?

Software depends on the hardware that interprets it. It is made of abstraction layers, each interpreted by the layer below. I argue hardware is just another layer: as software is interpreted by hardware, hardware is interpreted by physics. There is no way to know where the stack ends. Hence I formalise an infinite stack of layers to describe all possible worlds. Each layer embodies policies that constrain possible worlds. A task is the set of worlds in which it is completed. Adaptive systems are abstraction layers, and abstraction layers are polycomputers: a policy simultaneously completes more than one task. When the environment changes state, a subset of tasks are completed. This is the cosmic ought from which goal-directed behaviour emerges (e.g. natural selection).

"Simp-maxing" systems prefer simpler policies, and "w-maxing" systems choose weaker constraints on possible worlds. I show that w-maxing maximises generalisation, proving an upper bound on intelligence. I also show that all policies can take equally simple forms, so simp-maxing shouldn't work. To explain why it does, I invoke the Bekenstein bound, which implies layers can use only finite subsets of all possible forms. Processes that favour generalisation (e.g. natural selection) will then make weak constraints take simple forms. In experiments, w-maxing generalises at 110-500% the rate of simp-maxing.

I formalise how systems delegate adaptation down their stacks, and show that w-maxing will simp-max if control is infinitely delegated. Biological systems are more adaptable than artificial ones because they delegate adaptation further down the stack. They are bioelectric polycomputers. As they scale from cells to organs, they go from simple attraction and repulsion to rich tapestries of valence. These tapestries classify the objects and properties that cause valence, which I call causal-identities.

I propose the psychophysical principle of causality, arguing that qualia are tapestries of valence: a vast orchestra of cells plays a symphony of valence, classifying and judging. A system can learn 1st-, 2nd- and higher-order tapestries for itself. Phenomenal "what it is like" consciousness begins at the 1st-order-self. Conscious access for communication begins at 2nd-order-selves, making philosophical zombies impossible. This links intelligence and consciousness.

So why do we have the qualia we do? A stable environment is a layer where systems can w-max without simp-maxing. Stacks can then grow tall and complex. This may shed light on the origins of life and the Fermi paradox: diverse intelligences could be everywhere, but we cannot perceive them because they do not meet the preconditions for a causal-identity afforded by our stack. I conclude by integrating all of this to explain how to build a conscious machine, and to describe a problem I call The Temporal Gap.
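To make the abstract's central distinction concrete, here is a minimal toy sketch in Python. It is entirely my own construction under stated assumptions, not code from the paper: possible worlds are 4-bit assignments, a policy is a Boolean constraint on worlds, "weakness" is the number of worlds a policy permits, and "simplicity" is the length of its written form. The hidden rule, the candidate pool, and the sampling scheme are all illustrative.

```python
# Toy contrast of w-maxing (prefer the weakest consistent constraint)
# with simp-maxing (prefer the shortest consistent form). My own
# illustration, not code or results from the paper.
import itertools
import random

random.seed(0)
BITS = 4
WORLDS = list(itertools.product([0, 1], repeat=BITS))  # all possible worlds


def satisfies(expr, world):
    """Evaluate a constraint (a Python Boolean expression over x0..x3)."""
    env = {f"x{i}": bool(v) for i, v in enumerate(world)}
    return eval(expr, {"__builtins__": {}}, env)


# A small pool of candidate policies, written as Boolean expressions.
ATOMS = [f"x{i}" for i in range(BITS)] + [f"not x{i}" for i in range(BITS)]
POLICIES = list(ATOMS)
for a, b in itertools.combinations(ATOMS, 2):
    POLICIES += [f"({a} and {b})", f"({a} or {b})"]


def weakness(expr):
    """Number of possible worlds the policy permits (its extension)."""
    return sum(satisfies(expr, w) for w in WORLDS)


hidden = "(x0 or x2)"              # hidden task: worlds where it is completed
sample = random.sample(WORLDS, 6)  # the worlds we actually get to observe
held_out = [w for w in WORLDS if w not in sample]

# Policies consistent with everything observed so far.
consistent = [p for p in POLICIES
              if all(satisfies(p, w) == satisfies(hidden, w) for w in sample)]

w_max = max(consistent, key=weakness)  # weakest consistent constraint
simp_max = min(consistent, key=len)    # shortest consistent form


def generalisation(expr):
    """Agreement with the hidden task on unseen worlds."""
    return sum(satisfies(expr, w) == satisfies(hidden, w)
               for w in held_out) / len(held_out)


print("w-maxing chose   ", w_max, "->", generalisation(w_max))
print("simp-maxing chose", simp_max, "->", generalisation(simp_max))
```

In the paper's formal setting, policies range over declarative programs and the generalisation claim is proved in expectation; this sketch only illustrates how a preference over extensions (weakness) differs from a preference over forms (description length).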
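For reference, the Bekenstein bound invoked above is a standard physics result rather than notation introduced here. In its usual form it caps the entropy of any finite region, which is why a layer realised in physics can draw on only a finite subset of all possible forms:

```latex
% Bekenstein bound (standard statement, not the paper's notation): the
% entropy S of a system with energy E enclosed in a sphere of radius R
% is finite, so a physical layer admits only finitely many distinct forms.
S \;\le\; \frac{2\pi k_B R E}{\hbar c}
```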
