Letter re. "Hello, HAL", NYT Book Review, 3 Jan 1999

Hans Moravec, January 7, 1999

To: letters@nytimes.com
Subject: "Hello, HAL" (Book Review, 3 Jan 1999, p11)
From: Hans Moravec 
Reply-to: hpm@cmu.edu

To the Editor:

Colin McGinn's comments in the January 3 Book Review reveal a chasm
between traditional western Philosophy of Mind and the emerging
Sciences of Mind.

McGinn decrees John Searle's "Chinese Room", wherein a human follows
specially contrived rote rules to conduct an intelligent conversation
without understanding it, to be a "devastating" argument against
machine understanding.  To computer scientists the argument is
absurd. It would take a human maybe 50,000 years of rote work and
billions of scratch notes to generate each second of genuinely
intelligent conversation by this means, working as a cog in a vast
paper machine.  The understanding the room exhibits would be encoded
in the changing pattern of symbols in that paper mountain.  So what
that it is not duplicated in the usual way in the brain of the human?

Philosophers like McGinn and Searle seemingly cannot accept that real
meaning can be found in mere patterns.  But such attributions are
essential to computer scientists and mathematicians, who daily work
with mappings between different physical and symbolic structures. One
day a computer memory pattern means a number, another day it is a
string of text or a snippet of sound or a patch of picture.  When running a
weather simulation it may be a pressure or a humidity, and in a robot
program it may be a belief, a goal or a state of alertness.  Cognitive
biologists, too, think this way as they accumulate evidence that
sensations, feelings, beliefs, thoughts and other elements of
consciousness are encoded as distributed patterns of activity in the
nervous system.  Scientifically oriented philosophers like Daniel
Dennett are running successfully with this approach.
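The point about one memory pattern bearing many meanings can be made
concrete.  The sketch below (my own illustration, not from the letter)
reinterprets the same four bytes three ways; the byte value chosen is
arbitrary, as any pattern would serve.

```python
import struct

# One fixed pattern of four bytes in memory.
pattern = b"\x41\x42\x43\x44"

# Read the same bytes under three different interpretations.
as_text = pattern.decode("ascii")           # a string of text: "ABCD"
as_int = struct.unpack("<I", pattern)[0]    # an unsigned integer
as_float = struct.unpack("<f", pattern)[0]  # a floating-point number

print(as_text, as_int, as_float)
```

Nothing in the bytes themselves privileges one reading over another;
the meaning lives in the mapping the interpreter applies, which is the
attribution computer scientists make daily.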

Hilary Putnam had a more interesting objection.  If thoughts,
feelings, meaning and consciousness are found by interpreting the
activity pattern in a human or robot brain in just the right way, then
wouldn't there also be interpretations that find such things in less
traditional places, for instance, in the patterns of particle motion
of arbitrary rocks?  Surprising further consequences of this
conclusion are explored in my book's last chapter, which McGinn found
"bizarre and incomprehensible".  Having rejected step 1 of the
argument, I suppose it is not surprising that he had problems with
steps 2, 3 and 4!  Putnam, too, found his conclusion impossibly
counterintuitive, and turned his back on the whole logical chain.  But
today, when millions of 3D videogame players immerse themselves in
increasingly expansive and populated worlds found in very special
interpretations of the particle motions of a few unimpressive-looking
silicon chips, is the idea of whole worlds hidden in unexpected places
still beyond the pale?

     Hans Moravec
     Robotics Institute
     Carnegie Mellon University
     Pittsburgh, PA  15213