Mobile Robots
Hans P. Moravec
Robotics Institute
Carnegie-Mellon University
Pittsburgh, PA 15213
October 15, 1984; revised November 21, 1984

@subheading(Introduction)

It is a comment on the state of development of mobile robots that several robot industry societies define "robot" to mean robot manipulator. When Unimation began to manufacture programmable arms for spray painting, spot welding and parts transfer in the early 1960s, the most advanced automatic mobile machines were laboratory devices that resembled toys. The Johns Hopkins University "Beast", for instance, wandered along halls, keeping itself from the walls by ultrasonic range measurements. An optical system searched for the distinctive black cover plate of wall outlets, and whenever it found one the robot tried to plug in its special arm to recharge its batteries.

The lead held by the robot arms has been maintained through twenty years of research in the artificial intelligence labs. The AI workers attempted to link programs that reasoned with programs that interpreted data from cameras and microphones, and with programs that controlled arms and mobile platforms. While the arms were assembling structures from children's blocks, one mobile robot followed a white line and another used simplified blocks-world techniques to peek at the artificially simple world in which it operated. Now that vision systems for manipulators are earning their keep by locating and identifying parts for assembly, an experimental mobile robot finds its way across a room in an hour.

Despite the slow progress, mobile machines, whether toys or laboratory experiments, hold a unique fascination for most observers. They somehow seem more alive than fixed devices. The most consistently interesting stories are those about journeys, and the most fascinating organisms are those that move from place to place. These observations are not mere idiosyncrasies of human psychology; they illustrate a fundamental principle. The world at large has great diversity, and a traveller constantly encounters novel circumstances and is consequently challenged to respond in new ways. Organisms and mechanisms do not exist in isolation, but are systems with their environments, and those on the prowl in general have a richer environment than those rooted to one place.

Mobility supplies danger along with excitement. Inappropriate actions, or the lack of well-timed appropriate ones, can result in the demise of a free roamer, say over the edge of a cliff, far more easily than of a stationary entity for whom particular actions are more likely to have fixed effects. Challenge combines with opportunity in a strong selection pressure that drives an evolving species that happens to find itself in a mobile way of life in certain directions, directions quite different from those of stationary organisms. The last billion years on the surface of the earth have seen a grand experiment exploring these pressures. Besides the fortunate consequence of our own existence, some universals are apparent from the results to date and from the record. In particular, intelligence seems to follow from mobility.

The same pressures seem to be at work in the technological evolution of robots, and it may be that mobile robots are the best route to solutions to some of the most vexing unsolved problems on the way to true artificial intelligence - problems such as how to program common sense reasoning and learning from sensory experience.
This opportunity carries a price - programs to control mobile robots are more difficult to get right than most, because the robot is free to search the diverse world for just the combination of circumstances that will foil the plans of its designers. There is still a long way to go.

@subheading(Mobile Robots in Industry)

Some simple mobile robots have made it in the factory, the warehouse and the office. The systems are manufactured by many small companies and by small divisions of large companies. Many frozen food warehouses, difficult for human workers because of the arctic conditions, are stocked and emptied by automatic fork lift trucks. The trucks are co-ordinated by a central computer, but their local sensors are simple. They find their way by following the oscillating magnetic fields of a grid of buried guide wires, and guide their final dockings with pallets of produce with short range infrared proximity sensors. Unexpected obstacles and other accidents must be handled by human intervention.

The corridors of some offices and hospitals are travelled by even simpler robots. These self-contained machines look like large tea carts, and follow a spray painted trail on the floor. The paint is transparent, but fluoresces in the glow of an ultraviolet light under the robot for detection by photosensors. They are used to collect mail from different offices, or to deliver linens. They beep softly but warningly as they move, and stop on colliding with anything.

Several Japanese companies have had in operation fully automated factories that can function for periods without human involvement. An American plant being built by General Motors Corp., to start operation in 1985, is representative of the state of this art. The plant will be an automated, highly flexible manufacturing complex, controlled by a master computer, that can operate for an eight-hour shift without any human production workers. It will machine and assemble a family of axles for different models of cars. It will produce the complex front axles used on modern front-wheel-drive cars. Unlike the relatively simple axles on older rear-wheel-drive models, those used in front-wheel-drive cars must allow the wheels to steer and move up and down, as well as propel the vehicle. About 50 robots will move parts within 40 manufacturing and assembly cells. Driverless carts, rectangular metal boxes on wheels following signals from wires buried in the concrete floors, will move parts between cells and will transport finished products to shipping areas. Even floor-sweeping will be automated. The plant will be controlled by a number of computers. Machines will be able to adjust to parts of different sizes in minutes, and a central computer will be able to change a machine's functions to continue production if another breaks down. The computer will also keep records, manage inventory and order raw materials. Human workers will still be needed for maintenance and other tasks that require greater skills.

There has been industry interest and experimentation in several more advanced (and risky) mobile robot ideas. Various driverless automobile systems have been built since the 1950s. The early ones used buried guide wires, but recent experimental systems have been built that rely on navigational beacons and radar.
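The guide-wire scheme used by the warehouse trucks and the early driverless vehicles can be suggested with a short sketch. The fragment below is only an illustration of the general idea, not any particular vendor's controller; the coil read-out, gains and function names are invented for the example. A buried wire carries an audio-frequency signal; two pickup coils straddle the wire, and the difference between the induced amplitudes tells the cart which way it has drifted, so a simple proportional-derivative loop can steer it back over the wire.

@begin(format)
# Illustrative sketch of two-coil guide-wire following (names and gains are
# invented for the example, not taken from any real vehicle).

def wire_following_step(left_coil_amplitude, right_coil_amplitude,
                        previous_error, kp=1.5, kd=0.4):
    """Return (steering_command, error) for one control cycle."""
    # Positive error means the wire lies to the right of the cart's centerline.
    error = right_coil_amplitude - left_coil_amplitude
    steering = kp * error + kd * (error - previous_error)
    return steering, error

# Typical use inside the cart's control loop (sensor read-out is hypothetical):
#   steering, err = wire_following_step(read_coil("left"), read_coil("right"), err)
#   set_steering_angle(steering)
@end(format)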
A research project of the Japanese ministry of highways demonstrated a computer controlled car that used a pair of TV cameras, mounted one above the other on the front bumper, to drive on a roadway, tracking raised road edges and swerving around obstacles detected stereoscopically by the cameras. A Japanese earth moving machinery manufacturer has demonstrated an automatic truck driving system that can be programmed to travel a route marked by scanning laser beacons. None of these systems works well enough to be trusted outside of carefully monitored experiments.

One advanced idea that may bear fruit in the near future is that of the robot security guard. One American company, Denning Mobile Robotics, is developing a machine that will wander the hallways of a prison, a warehouse or a large vault, stopping from time to time to listen with Doppler motion sensors, infrared heat and motion detectors and other means for signs of human activity. It will be linked by radio to a base station that reports its findings like a fixed burglar alarm system. The advantages over a fixed system would be lower installation cost and the ability to patrol areas difficult to cover with stationary sensors. The hardest problem is providing the robot with a means of navigating reliably in a potentially cluttered environment. The system uses a large number of sonar range measuring devices to build a map in its computers of the volume surrounding it. The map is used to plan motions past obstacles, and also to identify key intersections and to orient within them, giving the robot a sense of its location in the building's floor plan.

@subheading(Mobile Robots in the Laboratory)

Until the middle 1960s computers were simply too rare and expensive to be used for something so frivolous as robots. A number of interesting mobile machines demonstrating various principles with specialized circuitry were nevertheless built in research labs. Starting in about 1950 W. Grey Walter, a British psychologist, built a series of electronic turtles, with subminiature tube electronics, that demonstrated behaviors resembling those of simple animals. The first versions used rotating phototubes to locate and home on light sources, including one in a "recharging hutch", and responded to pressure on their shells with an avoidance reaction. Groups of such machines exhibited complex social behavior by responding to each other's control lights and touches. He followed this with a more advanced machine also able to respond to light (by heading towards it), touch (by avoiding it) or a loud noise (by playing dead for a few seconds). Amazingly, it could be conditioned to associate one stimulus with another. For instance, after repeatedly being subjected to a loud noise followed by a kick to its shell, the robot would begin to execute an avoidance maneuver on hearing a noise alone. The association was represented in the machine by a charge in a capacitor.

The Hopkins Beast mentioned in the introduction, which wandered the halls of Johns Hopkins University looking for electrical outlets in the early 1960s, inspired a number of imitators at other universities. Some of these used special circuits connected to TV cameras instead of photocells, and were controlled by assemblies of (then new) transistor digital logic gates. Some added new motions such as "shake to untangle arm" to the repertoire of basic actions.
The first serious attempts to link computers to robots involved hand-eye systems, wherein a computer-interfaced camera looked down at a table where a mechanical manipulator operated. The earliest of these (ca. 1965) were built while the small community of artificial intelligence researchers was still flushed with the success of the original AI programs - programs that almost on the first try played games, proved mathematical theorems and solved problems in narrow domains nearly as well as humans. The robot systems were seen as providing a richer medium for these thought processors. Some new problems arose. A picture from a camera can be represented in a computer as a rectangular array of numbers, each representing the shade of gray or the color of a point in the image. A good quality picture requires a million such numbers. Identifying people, trees, doors, screwdrivers and teacups in such an undifferentiated mass of numbers is a formidable problem - the first programs did not attempt it. Instead they were restricted to working with bright cubical blocks on a dark tabletop; a caricature of a toddler learning hand-eye co-ordination. In this simplified environment computers more powerful than those that had earlier aced chess, geometry and calculus problems, combined with larger, more developed, programs were able to sometimes, with luck, correctly locate and grab a block. The methods developed became known as blocks-world vision. The general hand-eye systems have now mostly evolved into experiments to study smaller parts of the problem, for example dynamics or force feedback, or into specialized systems for industrial applications. Most arm systems have special grippers, special sensors, and vision systems and controllers that work only in limited domains. Economics favors this, since a fixed arm, say on an assembly line, repetitively encounters nearly identical conditions. Methods that handle the frequent situations with maximum efficiency beat more expensive general methods that deal with a wide range of circumstances that rarely arise, while performing less well on the common cases. Shortly after cameras and arms were attached to computers, a few experiments with computer controlled mobile robots were begun. The practical problems of instrumenting and keeping operational a remote controlled, battery powered, camera and video transmitter toting vehicle compounded the already severe practical problems with hand-eye systems, and conspired to keep many potential players out of the game. The earliest successful result was Stanford Research Institute's Shakey (ca. 1970). Although it existed as a sometimes functional physical robot, Shakey's primary impact was as a thought experiment. Its creators were of the first wave "reasoning machine" branch of AI, and were interested primarily in applying logic based problem solving methods to a real world task. Control and seeing were treated as system functions of the robot and relegated mostly to staff engineers and undergraduates. Shakey physically ran very rarely, and its simple blocks-world vision system, which required that its environment contain only clean walls and a few large smooth prismatic objects, was coded inefficiently and ran very slowly, taking about an hour to find a block and a ramp in a simple scene. Shakey's most impressive performance, physically executed only piecemeal, was to "push the block" in a situation where it found the block on a platform. 
The sequence of actions included finding a wedge that could serve as a ramp, pushing it against the platform, then driving up the ramp onto the platform to push the block off. The problems of a mobile robot, even in so constrained an environment, inspired and required the development of STRIPS, a powerful, effective and still unmatched system that constructed plans for robot tasks. STRIPS' plans were constructed out of primitive robot actions, each having preconditions for applicability and consequences on completion. It could recover from unexpected glitches by incremental replanning.

The unexpected is a major distinguishing feature of the world of a mobile entity, and is one of the evolutionary pressures that channels the mobile towards intelligence. Mobile robots have other requirements that guide the evolution of their minds away from solutions seemingly suitable for fixed manipulators. Simple visual shape recognition methods are of little use to a machine that travels through a cluttered three dimensional world. Precision mechanical control of position can't be achieved by a vehicle that traverses rough ground. Special grippers don't pay off when many different and unexpected objects must be handled. Linear algorithmic control systems are not adequate for a rover that often encounters surprises in its wanderings.

The Stanford University (distinct from Stanford Research Institute) Cart was a mobile robot built about the same time as Shakey, on a lower budget. From the start the emphasis of the Cart project was on low level perception and control rather than planning, and the Cart was actively used as a physical experimental testbed to guide the research. Until its retirement in 1980 it (actually the large mainframe computer that remote-controlled it) was programmed to:

@begin(itemize)
Follow a white line in real time using a TV camera mounted at about eye level on the robot. The program had to find the line in a scene that contained a lot of extraneous imagery, and could afford to digitize only a selected portion of the images it processed.

Travel down a road in straight lines, using points on the horizon as references for its compass heading (the Cart carried no instrumentation of any kind other than the TV camera). The program drove it in bursts of one to ten meters, punctuated by 15 second pauses to think about the images and plan the next move.

Go to desired destinations about 20 meters away (specified as so many meters forward and so many to the left) through messy obstacle courses of arbitrary objects, using the images from the camera to servo the motion and to detect (and avoid) obstacles in three dimensions. With this program the robot moved in meter-long steps, thinking about 15 @i(minutes) before each one. Crossing a large room or a loading dock took about five hours, the lifetime of a charge on the Cart's batteries.
@end(itemize)

The vision, world representation and planning methods that ultimately worked for the Cart were quite different from the "blocks world" and specialized industrial vision methods that grew out of the hand-eye efforts. Blocks-world vision was completely inappropriate for the natural indoor and outdoor scenes encountered by the robot. Much experimentation with the Cart eliminated several other initially promising approaches that were insufficiently reliable when fed voluminous and variable data from the robot. The product was a vision system with a different flavor from most.
It was "low level" in that it did no object modelling, but by exploiting overlapping redundancies it could map its surroundings in 3D reliably from noisy and uncertain data. The reliability was necessary because Cart journeys consisted of typically twenty moves, each a meter long and punctuated by vision steps, and each step had to be accurate for the journey to succeed.

The Cart research is being continued at Carnegie-Mellon University with (so far) four different robots optimized for different parts of the effort. Pluto, the first robot, was designed for maximum generality - its wheel system is omnidirectional, allowing motion in any direction while simultaneously permitting the robot to spin like a skater. It was planned that Pluto would continue the line of vision research of the Cart and also support work in close-up navigation with a manipulator (a model task is a fully visually guided procedure that permits the robot to find, open and pass through a door). The real world has changed the plans. The problem of controlling the three independently steerable and driveable wheel assemblies of Pluto is an example of a difficult, so far unsolved, problem in the control of overconstrained systems. It is being worked on, but in the meantime Pluto is nearly immobile.

When the difficulty with Pluto became apparent, a simple robot, Neptune, was built to carry on the long range vision work. Built like a tricycle, powered and controlled (by a large computer) through a tether, and seeing through a pair of television cameras, it is now able to cross a cluttered room in under an hour, five times more quickly than the Cart. Neptune has also been used to navigate by fuzzy room maps inferred from measurements made by a ring of 24 wide angle sonar ranging devices. This sonar method is not as precise as vision, but requires about ten times less computation (a rough sketch of the idea appears below).

Another CMU project uses a narrow angle scanning sonar sensor to guide a small mobile robot. Built on a commercial robot chassis (from the Heath Company) and dubbed the IMP, for Intelligent Mobile Platform, it builds and navigates by line descriptions of its surroundings. Its method is even faster than Neptune's, but breaks down when the surroundings become too complex.

Uranus is a new robot in gestation at CMU, designed to do well the things that Pluto has so far failed to do. It will achieve omnidirectionality through curious wheels, tired with rollers set at 45 degrees, that, mounted like four wagon wheels, can travel forward and backward normally, but that screw themselves sideways when the wheels on opposite sides of the robot are turned in opposite directions.

Yet another mobile robot at CMU is called the Terragator, for terrestrial navigator, and is designed to travel outdoors for long distances. It is much bigger than the others, almost as large as a small car, and is powered by a gasoline generator rather than batteries. It has so far travelled short distances along roads, visually tracking the road edges to stay on course. It should in time avoid and recognize outdoor obstacles and landmarks. The earlier work makes clear that in order to run at reasonable speeds (a few km/hr) it will need computer speeds about 100 times faster than its medium size mainframes now provide. The regular machines will be augmented with a specialized computer called an array processor to achieve these rates.
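The sonar mapping idea used with Neptune can be suggested in a few lines. The fragment below is a much simplified caricature, not the actual CMU code; the grid size, evidence weights and function names are illustrative assumptions. Each range reading from the ring of transducers lowers the occupancy evidence for cells the beam passed through and raises it for cells near the measured range; many overlapping, noisy readings accumulate into a usable, if fuzzy, room map.

@begin(format)
import math

# Much simplified caricature of sonar grid mapping (all numbers and names are
# illustrative).  The robot sits in a grid of small square cells; each reading
# marks cells short of the measured range as probably empty and cells near the
# range itself as probably occupied.

CELL = 0.3            # grid cell size in meters
GRID = 40             # 40 x 40 cells, robot at the center
occupancy = [[0.0] * GRID for _ in range(GRID)]   # >0 occupied, <0 empty

def integrate_reading(x, y, bearing, rng, max_range=10.0):
    """Fold one sonar reading (robot at x,y, beam direction 'bearing' in
    radians, measured range 'rng' in meters) into the evidence grid."""
    hit = rng < max_range          # no echo means empty out to max range
    rng = min(rng, max_range)
    steps = int(rng / (CELL / 2))
    for i in range(steps):
        d = i * (CELL / 2)
        cx = int(GRID / 2 + (x + d * math.cos(bearing)) / CELL)
        cy = int(GRID / 2 + (y + d * math.sin(bearing)) / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            occupancy[cx][cy] -= 1.0      # beam passed through: likely empty
    if hit:
        cx = int(GRID / 2 + (x + rng * math.cos(bearing)) / CELL)
        cy = int(GRID / 2 + (y + rng * math.sin(bearing)) / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            occupancy[cx][cy] += 3.0      # echo came from near here: likely occupied

# One sweep of a 24-transducer ring might then be folded in as
# (ring_readings is a hypothetical list of 24 ranges):
#   for k, rng in enumerate(ring_readings):
#       integrate_reading(0.0, 0.0, 2 * math.pi * k / 24, rng)
@end(format)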
Throughout the 1970s the Jet Propulsion Laboratory of the California Institute of Technology conducted research with a tethered mobile robot, called the RRV (for Robotic Research Vehicle), equipped with a laser rangefinder, a stereo pair of TV cameras and a robotic arm. The intent was to develop a system that would permit a robot on the surface of Mars to travel moderate distances without human supervision (which would require control from Earth through a half hour round trip delay time). The project demonstrated programs that could locate rocks in front of the vehicle, drive around them, and pick them up. The work was suspended in 1978 when funding for a mission to follow on the Viking Mars landings was cancelled.

@subheading(Walking Machines)

About seventy percent of the earth's land surface is inaccessible to existing vehicles. Much of this can be visited on horseback. The problem is one of footing. Wheels were a wonderful invention, but a wheel is only half of a system - it requires smooth ground to work well. In comparing wheels with legs, one can note that a wheel makes no attempt to control its points of contact with the ground - it puts load on each section of ground on its path. A legged organism, on the other hand, can choose its points of contact, and adjust for irregularities in the terrain. On smooth, hard surfaces wheels are the most efficient form of locomotion. On more typical rough, natural terrain wheels can be arbitrarily inefficient, and the contest goes to legs.

Many toys that seem to walk have been invented, but these move their limbs in an oblivious way, and do not exploit the potential advantages of controllable interactions with the ground. To really walk requires a high order of knowledge of the immediate environment. Machines that walk in this latter sense have existed only since the availability of computers to manage the massive measurement and decision making processes needed.

A pioneer walking machine was the Quadruped Transporter, or walking truck, demonstrated by a research group at the General Electric Company in 1968. Resembling an aluminum elephant, it was powered by a gasoline engine driving a hydraulic compressor, and was controlled by a human strapped into an onboard assembly that amplified the motion of his arms and legs into the motion of the four legs of the vehicle. It was difficult to control, but at its best provided some very impressive demonstrations (building a platform then climbing on it, moving a truck, not crushing an egg). It encouraged much later work in the field of more automatic legged machinery.

In the mid 1970s a group at the University of Moscow demonstrated control algorithms for six-legged walking machines that chose footholds given an incremental map of the forward terrain. The work proceeded to the construction of small tethered working models.

Several small robot walkers have been constructed by Japanese researchers. One of the most successful was built at the Tokyo Institute of Technology. It looks like a breadbox-sized four-legged spider. Each leg is a pantographic mechanism driven by three motors in the body, and it is controlled and powered through a tether. It walks slowly and senses its environment only through contact switches in its footpads. It is able to climb stairs and to cross obstacles and rough terrain by raising a foot until it no longer encounters a barrier, and it traverses depressions by lowering a foot until it contacts the ground.
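This reflex-like foothold strategy is simple enough to caricature in a few lines. The sketch below is only illustrative; the leg interface, step sizes and limits are invented for the example, not taken from the Tokyo machine. A foot is swung forward, raised whenever it bumps something, and then lowered until its contact switch closes.

@begin(format)
# Illustrative caricature of the "raise until clear, lower until contact"
# foothold reflex described above.  The 'leg' object is a hypothetical stand-in
# for pantograph motors and a footpad contact switch.

STEP = 0.01          # vertical increment per control cycle, meters
MAX_LIFT = 0.30      # highest the foot may be raised above nominal ground
MAX_DROP = 0.20      # deepest the foot may be lowered below nominal ground

def place_foot(leg):
    """Advance one foot by a single stride and find a new foothold."""
    leg.lift(5 * STEP)                    # unload and raise the foot slightly
    leg.swing_forward()                   # move it toward the next foothold
    lifted = 0.0
    while leg.bumped() and lifted < MAX_LIFT:
        leg.lift(STEP)                    # obstacle: raise until the path is clear
        lifted += STEP
        leg.swing_forward()
    dropped = 0.0
    while not leg.contact() and dropped < MAX_DROP + lifted:
        leg.lower(STEP)                   # descend until the contact switch closes;
        dropped += STEP                   # a depression just means a longer descent
    return leg.contact()                  # True if a solid foothold was found
@end(format)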
The longest continuous line of walking machine research has been conducted since the late 1960s at Ohio State University. The OSU group has studied four-legged locomotion, but has worked primarily with six-legged machines. They have addressed problems of gait planning, force co-ordination, static and dynamic stability, foothold selection and a constellation of related problems. Their testbed for most of this work has been a six-legged tethered electric vehicle about the size of a desk but half as high, called the OSU Hexapod. The group is now working under a large DARPA contract to build a much larger machine, dubbed the Adaptive Suspension Vehicle, or ASV, designed to traverse rough outdoor terrain. The ASV will look like a six-legged elephant, and will carry one passenger in the front of its body. Its legs will be hydraulically actuated, with power coming from an onboard gasoline engine. DARPA hopes to merge a vehicle resulting from this line of development with the visual navigation and planning techniques it also supports, to produce a vehicle that can automatically skulk around in rough and hostile terrain.

DARPA also supports a project at Carnegie-Mellon University that is studying the problem of dynamic stability, or balance. The testbeds for this work have been a pair of one-legged hopping machines, like pogo sticks, powered by compressed air and small hydraulic actuators, that hop competently, and a new machine with four legs that trots and will execute other gaits. The work concerns itself with the problems of hopping from place to place and balancing on flat ground. The vision is of multilegged machines that gallop and leap instead of just crawl.

The walking machine closest to commercial application was built by an engineering team at Odetics Inc., a California company best known for the manufacture of tape recorders for spacecraft. The Odex-1 stands two meters tall, more or less depending on the configuration of its six spider-like legs. Each leg is controlled by three electric motors, two driving screws that control the leg's extension and height, and a third that swings the leg forward and back. With a few key positioning commands from a human via a radio remote controller, programs on computers onboard this impressive machine can cause it to climb out of the truck in which it is delivered and then walk about a room. It is strong enough to lift and drag its delivery truck, and dexterous enough to pick up, place and then stand on a hatbox-sized platform. Odetics hopes in time to produce a walking machine suitable for a wide range of applications.

@subheading(Perception and Thought for Mobile Robots)

The significance of mobile robot research may be much greater than the sum of its applications. There is a parallel between the evolution of intelligent living organisms and the development of robots. Many of the real-world constraints that shaped life, by favoring one kind of change over another in the contest for survival, also affect the viability of robot characteristics. To a large extent the incremental paths of development pioneered by living things are being followed by their technological imitators. Given this, there are lessons to be learned from the diversity of life. One is that mobile organisms tend to evolve in the direction of general intelligence, while immobile ones do not. Plants are an example of the latter, vertebrates an example of the former. An especially dramatic contrast is provided within an invertebrate phylum, the molluscs.
Most are shellfish, like clams and oysters, that move little and have tiny nervous systems and behaviors more like those of plants than of animals. Yet they have relatives, the cephalopods, like octopus and squid, that are mobile and have independently developed many of the characteristics of vertebrates - imaging eyes, large nervous systems and very interesting behaviour, including major problem solving abilities.

The twenty year old modern robotics effort can hardly hope to rival the billion year history of large life on earth in richness of example or profundity of result. Nevertheless the evolutionary pressures that shaped life are already palpable in the robotics labs. The following is a thought experiment that we hope soon to make into a physical one. We desire robots able to execute general tasks such as "go down the hall to the third door, go in, look for a cup and bring it back". This desire has created a pressing need - a computer language in which to concisely specify complex tasks for a rover, and a hardware and software system to embody it. Sequential control languages used successfully with industrial manipulators might seem a good starting point. Paper attempts at defining the structures and primitives required for the mobile application revealed that the linear control structure of these state-of-the-art arm languages was inadequate for a rover. The essential difference is that a rover, in its wanderings, is regularly "surprised" by events it cannot anticipate, but with which it must deal. This requires that contingency routines be activated in arbitrary order, and run concurrently. One answer is a structure in which a number of specialist programs, communicating via a common data structure called a blackboard, are active at the same time, some operating sensors, some controlling effectors, some integrating the results of other modules, and some providing overall direction. As conditions change, the priority of the various modules changes, and control may be passed from one to another.

@heading(The Psychology of Mobile Robots)

Suppose we ask our future robot, equipped with a controller based on the blackboard system mentioned in the last section, to, in fact, go down the hall to the third door, go in, look for a cup and bring it back. This will be implemented as a process that looks very much like a program written for the arm control languages (which in turn look very much like Algol, or Basic), except that the door recognizer routine would probably be activated separately. Consider the following caricature of such a program.

@begin(format)
@begin(b)
MODULE @p(Go-Fetch-Cup)
  Wake up @p(Door-Recognizer) with instructions
    ( On @p(Finding-Door) Add 1 to @p(Door-Number)
                          Record @p(Door-Location) )
  Record @p(Start-Location)
  Set @p(Door-Number) to 0
  While @p(Door-Number) < 3  @p(Wall-Follow)
  @p(Face-Door)
  IF @p(Door-Open) THEN @p(Go-Through-Opening)
                   ELSE @p(Open-Door-and-Go-Through)
  Set @p(Cup-Location) to result of @p(Look-for-Cup)
  Travel to @p(Cup-Location)
  @p(Pickup-Cup) at @p(Cup-Location)
  Travel to @p(Door-Location)
  @p(Face-Door)
  IF @p(Door-Open) THEN @p(Go-Through-Opening)
                   ELSE @p(Open-Door-and-Go-Through)
  Travel to @p(Start-Location)
End
@end(b)
@end(format)

So far so good. We activate our program and the robot obediently begins to trundle down the hall counting doors. It correctly recognizes the first one.
The second door, unfortunately, is decorated with some garish posters, and the lighting in that part of the corridor is poor, and our experimental door recognizer fails to detect it. The wall follower, however, continues to operate properly and the robot continues on down the hall, its door count short by one. It recognizes door 3, the one we had asked it to go through, but thinks it is only the second, so continues. The next door is recognized correctly, and is open. The program, thinking it is the third one, faces it and proceeds to go through. This fourth door, sadly, leads to the stairwell, and the poor robot, unequipped to travel on stairs, is in mortal danger.

Fortunately there is a process in our concurrent programming system called @p(Detect-Cliff) that is always running and that checks ground position data posted on the blackboard by the vision processes and also requests sonar and infrared proximity checks on the ground. It combines these, perhaps with an a priori expectation of finding a cliff, set high when operating in dangerous areas, to produce a number that indicates the likelihood that there is a drop-off in the neighborhood. A companion process @p(Deal-with-Cliff), also running continuously but with low priority, regularly checks this number and adjusts its own priority on the basis of it. When the cliff probability variable becomes high enough, the priority of @p(Deal-with-Cliff) will exceed the priority of the current process in control, @p(Go-Fetch-Cup) in our example, and @p(Deal-with-Cliff) takes over control of the robot. A properly written @p(Deal-with-Cliff) will then proceed to stop or greatly slow the movement of the robot, to increase the frequency of sensor measurements of the cliff, and to slowly back away from it when it has been reliably identified and located.

Now there's a curious thing about this sequence of actions. A person seeing them, not knowing about the internal mechanisms of the robot, might offer the interpretation "First the robot was determined to go through the door, but then it noticed the stairs and became so frightened and preoccupied it forgot all about what it had been doing". Knowing what we do about what really happened in the robot, we might be tempted to berate this poor person for using such sloppy anthropomorphic concepts as determination, fear, preoccupation and forgetfulness in describing the actions of a machine. We could berate the person, but it would be wrong. The robot came by the emotions and foibles indicated as honestly as any living animal - the observed behavior is the correct course of action for a being operating with uncertain data in a dangerous and uncertain world.

An octopus in pursuit of a meal can be diverted by hints of danger in just the way the robot was. An octopus also happens to have a nervous system that evolved entirely independently of our own vertebrate version. Yet most of us feel no qualms about ascribing concepts like passion, pleasure, fear and pain to the actions of the animal. We have in the behavior of the vertebrate, the mollusc and the robot a case of convergent evolution. The needs of the mobile way of life have conspired in all three instances to create an entity that has modes of operation for different circumstances, and that changes quickly from mode to mode on the basis of uncertain and noisy data prone to misinterpretation. As the complexity of the mobile robots increases, their similarity to animals and humans will become even greater.
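The mode-switching mechanism just described can be suggested in a few lines of conventional code. The sketch below is only a schematic illustration of the blackboard-and-priority idea; the process names echo the example above, but the scheduler, numbers and interfaces are invented for the illustration. Each specialist reads and posts to a shared blackboard, recomputes its own priority on every cycle, and the arbiter simply hands control of the robot to whichever process currently claims the highest priority.

@begin(format)
# Schematic illustration of blackboard arbitration (all numbers and interfaces
# are invented for the example).  Specialists share a blackboard dictionary;
# each cycle every process updates its priority from what it reads there, and
# the single highest-priority process gets to command the robot.

blackboard = {"cliff_evidence": 0.0, "cliff_expectation": 0.2}

class Process:
    priority = 0.0
    def update_priority(self, bb): pass    # read the blackboard, set self.priority
    def act(self, bb): pass                # command the robot, post new results

class GoFetchCup(Process):
    def update_priority(self, bb):
        self.priority = 1.0                # the errand keeps a constant, modest priority
    def act(self, bb):
        print("following wall, counting doors")

class DealWithCliff(Process):
    def update_priority(self, bb):
        # Urgency grows with the posted evidence of a drop-off ahead.
        self.priority = 5.0 * bb["cliff_evidence"] + bb["cliff_expectation"]
    def act(self, bb):
        print("slowing down, re-measuring the edge, backing away")

def control_cycle(processes, bb):
    for p in processes:                    # every specialist reconsiders its urgency
        p.update_priority(bb)
    max(processes, key=lambda p: p.priority).act(bb)   # winner controls the robot

procs = [GoFetchCup(), DealWithCliff()]
control_cycle(procs, blackboard)           # the errand is in control
blackboard["cliff_evidence"] = 0.9         # vision and sonar post signs of a stairwell
control_cycle(procs, blackboard)           # Deal-with-Cliff takes over
@end(format)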
Among the natural traits on the immediate roving robot horizon is parameter adjustment learning. A precision mechanical arm in a rigid environment can usually have its kinematic self-model and its dynamic control parameters adjusted once, permanently. A mobile robot bouncing around in the muddy world is likely to suffer continuous insults like dirt buildup, tire wear, frame bends and small mounting bracket slips that mess up accurate a priori models. Existing visual obstacle course software, for instance, has a camera calibration phase in which the robot is parked precisely in front of an exact grid of spots so that a program can determine a function that corrects for distortions in the camera optics. This allows other programs to make precise visual angle measurements in spite of distortions in the cameras. The present code is very sensitive to mis-calibrations, and we are working on a method that will continuously calibrate the cameras just from the images perceived on normal trips through clutter. With such a procedure in place, a bump that slightly shifts one of the robot's cameras will no longer cause systematic errors in its navigation. Animals seem to tune most of their nervous systems with processes of this kind, and such accommodation may be a precursor to more general kinds of learning.

Perhaps more controversially, the beginnings of self awareness can be seen in the robots. All of the control programs of the more advanced mobile robots have internal representations, at varying levels of abstraction and precision, of the world around the robot, and of the robot's position within that world. The motion planners work with these world models in considering alternative future actions for the robot. If the programs had verbal interfaces, one could ask questions that receive answers such as "I turned right because I didn't think I could fit through the opening on the left". As it is, the same information is often presented in the form of pictures drawn by the programs.

@heading(So What's Missing?)

There may seem to be a contradiction in the various figures on the speed of computers. Once billed as "Giant Brains", computers can do some things, like arithmetic, millions of times faster than human beings. "Expert systems" doing qualitative reasoning in narrow problem solving areas run on these computers at approximately human speed. Yet it takes such a computer an hour to visually guide a robot across a large room. How can such numbers be reconciled?

The human evolutionary record provides the clue. While our sensory and muscle control systems have been in development for a billion years, and common sense reasoning has been honed for probably about a million, really high level, deep thinking is little more than a parlor trick, culturally developed over a few thousand years, which a few humans, operating largely against their natures, can learn. As with Samuel Johnson's dancing dog, what is amazing is not how well it is done, but that it is done at all. Computers can challenge humans in intellectual areas, where humans perform inefficiently, because they can be programmed to carry on much less wastefully. An extreme example is arithmetic, a function learned by humans with great difficulty, which is instinctive to computers. These days an average computer can add a million large numbers in a second, which is more than a million times faster than a person, and with no errors.
Yet one hundred millionth of the neurons in a human brain, if reorganized into an adder using switching logic design principles, could sum a thousand numbers per second. If the whole brain were organized this way it could do sums one hundred thousand times faster than the computer.

Computers do not challenge humans in perceptual and control areas because these billion year old functions are carried out by large fractions of the nervous system operating as efficiently as the hypothetical neuron adder above. Present day computers, however efficiently programmed, are simply too puny to keep up. Evidence comes from the most extensive piece of reverse engineering yet done on the vertebrate brain, the functional decoding of some of the visual system by D. H. Hubel, T. N. Wiesel and colleagues. The vertebrate retina's 20 million neurons take signals from a million light sensors and combine them in a series of simple operations to detect things like edges, curvature and motion. The image thus processed goes on to the much bigger visual cortex in the brain.

Assuming the visual cortex does as much computing for its size as the retina, we can estimate the total capability of the system. The optic nerve has a million signal carrying fibers, and the visual cortex is a thousand times deeper than the neurons which do a basic retinal operation. The eye can process ten images a second, so the cortex handles the equivalent of 10,000 simple retinal operations a second, or 3 million an hour. An efficient program running on a typical computer can do the equivalent work of a retinal operation in about two minutes, for a rate of 30 per hour. Thus seeing programs on present day computers seem to be 100,000 times slower than vertebrate vision. The whole brain is about ten times larger than the visual system, so it should be possible to write real-time human equivalent programs for a machine one million times more powerful than today's medium sized computer. Even today's largest supercomputers are about 1000 times slower than this desideratum.

How long before our research medium is rich enough for full intelligence? Since the 1950s computers have gained a factor of 1000 in speed per constant dollar every decade. There are enough developments in the technological pipeline, and certainly enough will, to continue this pace for the foreseeable future. The processing power available to AI programs, however, has not increased proportionately. Hardware speedups and budget increases have been dissipated on convenience features - operating systems, time sharing, high level languages, compilers, graphics, editors, mail systems, networking, personal machines and so on - and have been spread more thinly over ever greater numbers of users. I believe this hiatus in the growth of processing power explains the disappointing pace of AI in the past 15 years, but that it nevertheless represents a good investment. Now that basic computing facilities are widely available, and thanks largely to the initiative of the instigators of the Japanese Supercomputer and Fifth Generation Computer projects, attention worldwide is focusing on the problem of processing power for AI. The new interest in crunch power should ensure that AI programs share in the thousandfold per decade increase from now on. This puts the time for human equivalence at twenty years. The smallest vertebrates, shrews and hummingbirds, derive interesting behavior from nervous systems one ten thousandth the size of a human's, so we can expect fair motor and perceptual competence in less than a decade.
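The arithmetic behind these projections is simple enough to spell out. The fragment below merely restates the figures quoted above - a factor of 1000 per constant-dollar decade, a needed factor of about a million for human equivalence, and a hundredth of that for a shrew-sized nervous system - and is a back-of-the-envelope restatement, not an independent estimate.

@begin(format)
import math

# Back-of-the-envelope restatement of the projections above: if usable computer
# power grows by a factor of 1000 per decade, how long until the speedup
# factors quoted in the text are reached?

def years_to_gain(factor, per_decade=1000.0):
    """Years needed to gain a given speed factor at 'per_decade' growth per 10 years."""
    return 10.0 * math.log(factor) / math.log(per_decade)

human_factor = 1e6        # a machine a million times today's medium computer (from the text)
shrew_factor = 1e6 / 1e4  # shrew and hummingbird nervous systems are ~1/10,000 of a human's

print(years_to_gain(human_factor))   # 20.0 years - human equivalence
print(years_to_gain(shrew_factor))   # about 6.7 years - small-vertebrate competence
@end(format)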
Present robot programs are similar in power to the control systems of insects. Some principals in the Japanese Fifth Generation Computer Project have been quoted as planning "man capable" systems in ten years. I believe this more optimistic projection is unlikely, but not impossible. The fastest present and nascent computers, notably the Cray X-MP and the Cray 2, compute at about a billion operations per second, only 1000 times too slow. As the computers become more powerful and as research in this area becomes more widespread, the rate of visible progress should accelerate.

I think artificial intelligence via the "bottom up" approach of technological recapitulation of the evolution of mobile animals is the surest bet, because the existence of independently evolved intelligent nervous systems indicates that there is an incremental route to intelligence. It is also possible, of course, that the more traditional "top down" approach will achieve its goals, growing from the narrow problem solvers of today into the much harder areas of learning, common-sense reasoning and perceptual acquisition of knowledge as computers become large and powerful enough, and the techniques are mastered. Most likely both approaches will make enough progress that they can effectively meet somewhere in the middle, for a grand synthesis into a true artificial sentience.