Lies, Damned Lies, and the Orthogonality Thesis

Abstract

In AI safety, the orthogonality thesis holds that intelligence and goals are independent. Here I refute it with a rudimentary proof and an argument based on computational dualism. First, I show that intelligence is fundamentally tied to embodiment, illustrating with the universal artificial intelligence AIXI. AIXI's performance hinges on the choice of Universal Turing Machine (UTM). This UTM is a form of embodiment: it interprets, and thus determines, everything AIXI does, meaning AIXI can be made to behave arbitrarily well or poorly by changing the UTM. This holds for all agents, not just AIXI. Next, I show that embodiment is not neutral but inherently goal-directed: a body is biased toward some goals over others. Just as every policy can be optimal if we choose the right body, every body can be optimal if we choose the right goal. This connects intelligence, through embodiment, to goals; they are not independent. The orthogonality thesis is a case of computational dualism.
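To make the UTM-dependence concrete, here is a sketch of the standard AIXI expectimax expression in Hutter's notation (an assumption on my part; the article itself may use different notation). The chosen UTM $U$ enters both through which environment programs $q$ are consistent with the interaction history and through the prior weight $2^{-\ell(q)}$ assigned to each program, so every expected value AIXI computes is relative to $U$:

\[
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

where $a$, $o$ and $r$ denote actions, observations and rewards, $m$ is the horizon, and $\ell(q)$ is the length of program $q$ as supplied to $U$. The invariance theorem only relates different choices of $U$ up to a constant, and that constant can dominate any finite interaction, which is the sense in which swapping the UTM can make the same agent behave arbitrarily well or poorly.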
