Existence as Ascription

Hans Moravec, 1999

Chapter 7 of Robot was my first presentation of a surprising chain of reasoning. I wanted to rewrite it, but couldn't find the energy in time for publication. Now that the pressure is off, and my visceral comfort with the ideas has risen, I'd like to present them more compellingly. This piece is a start.

A decade of net newsgroup philosophical debates on phenomenology (or subjective vs. objective reality) forced me to think hard about these issues, though in hindsight I see roots decades older.

Start with the premise (A) that properly designed minds implemented in computers can have conscious experiences just like minds implemented in flesh. Also assume (B) that experiences of rich virtual worlds can be as vivid as experiences of the physical world. Immersive video games make the second premise uncontroversial. Materialistic accounts of the evolution of life and intelligence, which provide a rough roadmap for the evolution of machine intelligence, make the first premise compelling to AI guys like me. (It is also Occamesque: it demands no mysterious new ingredient to make consciousness.)

Let AI = Artificial Intelligence and VR = Virtual Reality.
Combine the two halves of both premises into four cases:
1) a flesh human in the physical world.
2) a conscious AI controlling a physical robot.
3) a human immersed in a VR, maybe by neural interface.
4) a conscious AI linked to a VR, all inside one computer.
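
As a small, purely illustrative sketch (the code and wording are mine, not part of the original argument), the four cases are simply the cross product of two kinds of mind with two kinds of world:

    # Illustrative sketch: the four cases as a cross product of mind and world.
    from itertools import product

    minds  = ["flesh human", "conscious AI"]
    worlds = ["the physical world", "a virtual reality"]

    for n, (world, mind) in enumerate(product(worlds, minds), start=1):
        print(f"Case {n}: {mind} in {world}")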

Case 4 is a handle on the subjective/objective problem that was not available to past philosophers. Unlike flesh, dreams, stories, sensation-controlling demons or divine ideas, it is nearly free of slippery unstated assumptions about human minds or physical reality. On the outside, we have a simple objective device stepping through states. Yet, on the inside, there is a subjective mind experiencing its own existence.
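
To make the outside view concrete, here is a minimal sketch (entirely a stand-in, with an arbitrary update rule) of a "device stepping through states": a fixed transition function applied over and over. Nothing in the loop mentions minds, worlds or experiences.

    # Outside view of case 4, sketched: a state repeatedly transformed by a fixed rule.
    # The 64-bit update rule here is an arbitrary stand-in, not an actual AI/VR.
    def step(state: int) -> int:
        return (state * 6364136223846793005 + 1442695040888963407) % 2**64

    state = 42
    for tick in range(5):
        state = step(state)
        print(f"tick {tick}: state = {state:#018x}")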

What connects the internal experience to the external mechanism? As in any simulation, it is an interpretation. Storage locations can be viewed as representing bit patterns, numbers, text, pressures, temperatures, sensations, moods, beliefs, feelings or more abstract relationships. In general, different observers will have different interpretations. Someone examining the simulation in order to improve the operating system's memory management will likely put a different interpretation on the memory contents than someone who wants to view life in the simulation, or to talk with its inhabitant.
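
A small sketch (the byte values are arbitrary, chosen only for illustration) of how one block of storage supports many interpretations: the same eight bytes read equally well as a bit pattern, as integers, as floating-point "sensor readings", or as text, and nothing in the bytes themselves picks out the right reading.

    import struct

    # Eight arbitrary bytes of "memory"; the bits are the only objective fact.
    state = bytes([0x48, 0x69, 0x21, 0x00, 0x42, 0x28, 0x00, 0x00])

    print(state.hex())                  # raw bit pattern: '4869210042280000'
    print(struct.unpack('<ii', state))  # two integers: (2189640, 10306)
    print(struct.unpack('<ff', state))  # two (tiny, denormal) floats -- "pressures"?
    print(state[:3].decode('ascii'))    # text: 'Hi!'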

But does the AI cease to exist if there is no one outside who happens to have the correct interpretation to see it? Suppose an experimenter sets up an AI/VR, and builds a translating box allowing him to plug in and talk with the AI. But on the way home, the experimenter is killed and the translating box destroyed. The computer continues to run, but no one suspects it holds a living, feeling being. Does the AI cease to be? Suppose one day enough of the experimenter's notes are found and a new translating box is built and attached. The rediscovered AI then tells a long story about its life in the interval when it was unobserved.

My take on this is that there is an observer of the AI even when it goes unobserved from the outside, namely the AI itself. By interpreting some process inside the box as a conscious observer, we grant that process the power of making observations about itself. That self-interpretation exists in its own right whether or not someone outside ever appreciates it. But once you allow externally undiscovered interpretations of AIs that exist only in their own eyes, you open the door to all possible interpretations that contain self-aware observers. Which is fine by me. I think this universe is just such a self-interpretation, one self-defining subjective thread in an infinity of alternatives that are just as real to their inhabitants.

To be continued . . .