The Brain Makers: Genius, Ego and Greed in the Quest for Machines that Think
by Harvey P. Newquist
Sams Publishing, Indianapolis, Indiana, 1994 (488 pages)

(Possible overall title for review: The Great 1980s AI Bubble)
From a gleam in Alan Turing's eye in the 1940s, machine intelligence took root as an academic discipline in the 1950s, was cultivated in the 1960s, matured in the 1970s, and broadcast seed in the 1980s, which sprouted and bore fruit in a myriad of companies in the 1990s. Newquist is a business reporter who covered the field during the 1980s, when academic researchers went commercial in one of the decade's smaller speculative bubbles.
The book begins with a history spanning Babbage to Turing to Minsky, McCarthy, Newell, Simon, Samuel and others at the 1956 Dartmouth meeting, and on to the 1980s, where the real story begins. Good,
if glib, descriptions of people, places and events are punctuated by technical
explanations ranging from poor to inane. As I'm a little slow, it took me a quarter of the book to recognize a journalist with an attitude. Only laboratory geeks (variations on the term pepper the book) waste time on confusing technical minutiae; real people skim executive summaries, then spend quality time on the financials. The book never misses an opportunity to sneer at
academics, who always fail to grasp
this simple fact. By contrast, it fawns over "A team" executives, from the
best business schools, masters of management, finance and marketing, the core
of America's greatness. Since most of the characters encountered in the first
wave of AI businesses were researchers, sneering predominates over fawning.
The
author's aversion to places away from the executive suite distorts
the book's coverage. Information about Stanford University comes via companies
founded by Ed Feigenbaum. From that perspective, John McCarthy's AI Lab in the Stanford hills in the 1970s is a nonentity, despite the fact that it sparked a robot boom that foreshadowed the AI boom by five years, as pioneering companies making small assembly robots and industrial vision systems failed just as the robots themselves became essential to manufacturing worldwide. Allen Newell's world-leading, but unmarketed,
reasoning program research at Carnegie Mellon, conducted vigorously through the
1980s as in decades before, is
dismissed. Companies and products oriented towards technical users, for instance
Stephen Wolfram's Mathematica, are completely overlooked. Research labs,
engineering offices and factory floors are simply not on Harvey Newquist's beat.
Within its one-sided scope, however, the book is pretty interesting.
The
book's subtitle lists genius, ego and greed, but naivete and dilettantism
deserve equal billing, since most of the book's antiheroes were as unprepared
for business as Newquist is for
technical comprehension. The first AI companies rushed to market academically
interesting but underdeveloped techniques with few applications or customers.
In the speculative hysteria of the 1980s, most found enough backing to support
lavish facilities and bloated staffs, lured from academia with inflated salaries
and promises. Symbolics, maker of Lisp machines invented by Richard Greenblatt,
was the major player. It had several successful years in the mid-1980s
selling machines to research groups
and secondary AI companies and their immediate clients. The secondary companies
sold applications-oriented or generic expert systems at exorbitant prices.
The bubble developed fatal leaks by the end of the decade, when rapidly evolving, cheap Unix workstations, followed a few years later by personal computers, started to run Lisp code as well as Lisp machines did, when expert systems began to be written in C and other non-Lisp languages, and when client companies found they could implement their applications in-house at lower cost. Symbolics, Palladian, IntelliCorp, Teknowledge,
Gold Hill and smaller companies, as well as the decade-long Japanese
fifth-generation project, evaporated with the 1980s, leaving behind a few viable
puddles. Symbolics became Macsyma, Inc., marketing a symbolic mathematics
program that was once a minor product. Teknowledge was reduced to a small division.
Hundreds of traditional companies now use AI techniques in-house, for tasks ranging from geological exploration, financial decision making, medical advice and factory scheduling to mechanical troubleshooting.
The expert-system hype made its successes look like failures by comparison, giving young Turks with a competing approach an opportunity to overreact. Biologically inspired neural nets, which learn input-output relationships from examples, had lost out to symbolic AI in 1969, when Minsky and Papert's Perceptrons proved fundamental limitations in two-layer nets. In 1983 John Hopfield showed how to train three-layer nets, and soon enthusiasts were claiming neural nets would deliver AI's failed promises. Nets were better at data-intensive applications,
like detecting credit fraud or classifying sensor data, but were unsuitable
for long chains of inference, and the restrained economy of the 1990s has moderated
attempts to imitate the excesses of the AI bubble.
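(A minimal sketch, not from the book, may make the layer distinction concrete. Assuming Python with NumPy, the few lines below train a small network with one hidden layer, "three layers" in the counting used above, to learn the XOR relationship from its four input-output examples, something Minsky and Papert showed no two-layer perceptron can represent. The network size, learning rate and iteration count are arbitrary illustrative choices.)

    import numpy as np

    rng = np.random.default_rng(0)

    # The four XOR examples: inputs and target outputs.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # A 2-input, 4-hidden-unit, 1-output network; illustrative sizes.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    lr = 2.0
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)        # hidden layer
        out = sigmoid(h @ W2 + b2)      # output layer
        # Backpropagate the squared-error gradient through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]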
Meanwhile,
speech and text recognizers, vision, control, and database systems, automatic
design aids and other successful
programs infer and learn using dozens of powerful methods that are neither
expert systems nor neural nets. In coming decades, machine intelligence will become
commonplace as computers grow in power and ubiquity--a momentous transition
that will be conducted more slowly, more thoughtfully and more quietly than the
1980s' party. At least, the neighbors hope so.
_________________________________________________________
Hans
Moravec is a Principal Research
Scientist with the Robotics Institute
of Carnegie Mellon University. He has been developing spatial perception
for mobile robots for two decades, and promises practical results by the end
of the third. He is the author of Mind Children: The Future of Robot
and Human Intelligence (Harvard 1988) and the forthcoming Mind
Age: Transcendence through Robots (Bantam 1995).