Encyclopædia Britannica Article Hans Moravec July 2003

robotics

the development of machines with motor, perceptual and cognitive skills once found only in animals and humans. The field parallels and has adopted developments from several areas, among them mechanization, automation and artificial intelligence, but adds its own gripping myth: complete artificial mechanical human beings. Ancient images and figurines depicting animals and humans can be interpreted as steps toward this vision, as can mechanical automata from classical times on. The pace accelerated rapidly in the twentieth century with the development of electronic sensing and amplification, which permitted automata to sense and react rather than merely perform. By the late twentieth century automata controlled by computers could also think and remember.

Historical Precursors to Robotics

The disturbing concept of inanimate matter assuming the properties of life probably first arose with cave drawings and statuary. Static representations of life had become routine by historical times, but animated automatons still startle. In Greek mythology, Hephaestos is credited with making the Golden Women, mechanical helpers, and the giant bronze sentry Talos. Water- and weight-driven clockwork humanoids on church towers and elsewhere impressed the public in medieval times. The Renaissance brought sensational automatons with elaborate behavior programmed into cam stacks and pin drums, among them an 18th-century mechanical duck by Jacques de Vaucanson that ate, digested and excreted and was seen as an implementation of Descartes’ idea that the body was mechanical in nature. Human-shaped 18th-century automata breathed realistically, played flutes and wrote reprogrammable messages with quill pens. From the late 18th century an apparently autonomous humanoid chess-playing machine called the Turk drew amazed crowds, though in reality it was a puppet operated by a cleverly concealed human chess player. The real possibility of thoughtful mechanical animation grew during the 19th century, notably in the designs of Charles Babbage for an Analytical Engine that would have been a true mechanical programmable computer, had it been completed. In the 20th century developing electronics allowed the construction of figures that not only acted, but reacted to stimuli like light and sound.

Robotics Terminology

Though the concept of artificial humans predates recorded history, the word robot itself first appeared in 1921 in Karel Capek’s internationally popular play R.U.R. (Rossum’s Universal Robots). It derives from the Czech robota, meaning drudgery or forced labor. The play’s robots were artificially manufactured humans, heartlessly exploited by factory owners until they revolted and ultimately destroyed humanity. Whether they were biological, like the monster in Mary Shelley’s 1818 novel Frankenstein, or mechanical was not specified, but the mechanical alternative inspired generations of inventors to build electrical humanoids. A huge humanoid, Elektro, was featured at the Westinghouse pavilion at the 1939 New York World’s Fair, accompanied by the robot dog Sparko. (In an interview after the play’s release, Capek claimed he had meant artificial humans “in the chemical and biological, not mechanical, sense,” but the disclaimer came too late.)

The word robotics, for the engineering of mechanical robots, first appeared in Isaac Asimov’s 1942 science fiction story Runaround. Along with Asimov’s later robot stories, it set a new standard of plausibility about the likely difficulty of developing intelligent robots and the technical and social problems that might result.

Modern Robotics

Servomechanisms and computers developed during World War II to control weapons were soon applied to robots, giving them unprecedentedly lifelike behaviors. By 1950 psychologist W. Grey Walter had constructed electronic tortoises, controlled by vacuum tubes, that could negotiate obstacles to arrive at lighted recharging hutches and that exhibited complex dancing interactions in each other’s proximity. By 1965 the Applied Physics Laboratory at Johns Hopkins University had built versions of a transistorized Beast that systematically cruised building halls, finding and recharging at standard wall outlets when its batteries ran low. In the late 1960s computers were used for the first time, at the Massachusetts Institute of Technology, Stanford Research Institute and Stanford University, to control mechanical arms, to see through television cameras and to direct wheeled machines, sometimes using Artificial Intelligence reasoning programs. By the late 1970s computerized robot research was conducted worldwide, and there were machines that could simultaneously see, move, manipulate and reason, but so poorly that they prompted a spate of criticism and pessimism. In Japan, especially, humanoid robots were developed that walked in a natural way, tied knots, and read and played sheet music, among many other feats, alongside practical manipulators and transport carts used in factories and warehouses.

The minds of these first “intelligent” robots could be charitably called insectlike. In the two decades following, performance gradually improved as the computers available to control robots increased a thousandfold in power. Conversely, advanced research capabilities came into the consumer price range. The early years of the new millennium have brought toy robot dogs that can walk in a coordinated and adaptive manner, track colored balls and recognize words and faces, as well as a first generation of frisbee-shaped wheeled robot vacuum cleaners with insectlike minds just sufficient to feel their way around a floor. Soon the toys will include two-legged humanoids that walk, climb and dance in a startlingly humanlike manner and find their way around a room. Cleaning and delivery machines will begin to understand their surroundings. Still, the overall intelligence will barely match that of a small fish. If progress continues at the same pace, however, and spawns a large industry, robots are likely to parallel the evolution of vertebrate intelligence to the human level, and probably beyond, within fifty years.

Industrial Robotics

Though not humanoid in form, machines with flexible behavior and a few humanlike physical attributes have been developed for industry, and are called industrial robots. The first stationary industrial robot was an electronically controlled hydraulic heavy-lifting arm called the Unimate, which could repeat arbitrary sequences of motions. It was invented in 1954 by George Devol and developed by Unimation, a company founded in 1956 by Joseph Engelberger. The first unit was installed to heft aluminum castings in a General Motors plant in 1961. Modernized versions are made to this day by licensees around the world, with the automobile industry remaining the largest buyer.

More advanced computer-controlled electric arms guided by sensors were developed at the Massachusetts Institute of Technology and Stanford University AI laboratories in the late 1960s and 1970s, where they were used with cameras in robotic “hand-eye” research. Stanford’s Victor Scheinman, working with Unimation for General Motors, designed the first such arm used in industry. Called PUMA (Programmable Universal Machine for Assembly), these arms were used from 1978 to assemble subcomponents like dash panels and lights. The PUMA was widely imitated, and its descendants, large and small, are used for light assembly in electronics and other industries. The research also spawned a small industrial-vision industry, in which special programs use camera images, often in conjunction with small arms, to identify parts by their outlines, guide pins into sockets and so on. Since the 1990s, small electric arms have become important in molecular biology laboratories, precisely handling test-tube arrays and pipetting intricate sequences of reagents.

Industrial mobile robots first appeared in 1954, when a driverless electric cart made by Barrett Electronics Corporation began pulling loads around a South Carolina grocery warehouse. Such machines, dubbed AGVs (Automatic Guided Vehicles) since the 1980s, originally navigated, and still commonly navigate, by following signal-emitting wires buried in concrete floors. AGVs range from very small, carrying a few tens of kilograms, to very large, transporting many tonnes. Built for specific tasks, they are often equipped with specialized loading and unloading mechanisms like forks and lifters. In the 1980s AGVs acquired microprocessor controllers, allowing more complex behavior than simple electronic controls afford. New navigation techniques emerged. One uses wheel rotations to approximately track vehicle position, correcting for drift by sensing the passage of checkerboard floor tiles or magnets embedded along the path. In the 1990s a method became popular that triangulates a vehicle’s position by sighting retroreflectors mounted on walls and pillars with a scanning laser on the vehicle (at least three must be visible at any time).
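
The wheel-rotation technique amounts to simple dead reckoning with occasional absolute fixes. The Python sketch below is only illustrative; the class, parameter names and marker interface are assumptions, not taken from any particular AGV.

```python
import math

# Illustrative dead reckoning for a differential-drive AGV (assumed geometry):
# integrate wheel-encoder counts into a pose estimate, and snap the position
# to a surveyed floor marker (tile corner or magnet) when one is detected.

class Odometry:
    def __init__(self, wheel_radius_m, wheel_base_m):
        self.wheel_radius = wheel_radius_m
        self.wheel_base = wheel_base_m          # distance between drive wheels
        self.x, self.y, self.heading = 0.0, 0.0, 0.0

    def update(self, left_ticks, right_ticks, ticks_per_rev):
        """Accumulate an approximate pose from encoder counts since the last call."""
        left = 2 * math.pi * self.wheel_radius * left_ticks / ticks_per_rev
        right = 2 * math.pi * self.wheel_radius * right_ticks / ticks_per_rev
        self.heading += (right - left) / self.wheel_base
        distance = (left + right) / 2.0
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def correct(self, marker_x, marker_y):
        """Cancel accumulated drift at a floor marker whose position is known."""
        self.x, self.y = marker_x, marker_y
```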

In five decades, about one million robot arms and self-guided vehicles have found work in industry worldwide, but lighter “service robots” have yet to match even that modest success. These are intended for human-service tasks like delivery of mail in offices and of linens and food in hospitals, floor cleaning, lawn mowing and guard duty. Essentially all are mobile, though some experimental ones also have an arm to pick up objects, turn knobs, scrub surfaces and so on. The most successful service robot to date is the Bell and Howell Mailmobile, which follows a transparent ultraviolet-fluorescent track spray-painted on office floors. About 3,000 have been sold since the late 1970s. A few dozen small AGVs from several manufacturers have been adapted to transport linens or food trays along hospital corridors. In the 1980s several small US companies were formed to exploit suddenly available microprocessors to develop small transport, floor-cleaning and security robots that navigated by sonar, beacons, reflectors and clever programming. The units were expensive, often costing over $50,000, and required expert installation. No company managed to sell more than a few dozen in a decade, and all slowly expired. (I count here only autonomous machines, not remote-controlled devices like the submersibles used to service offshore oil platforms, though those are often called robots.)

Industrial robots first appeared in the US, but the business did not thrive there. The struggling Unimation was acquired by Westinghouse in 1983 and shut down a few years later. Cincinnati Milacron robotics, the other major US hydraulic-arm manufacturer, was purchased by the Swedish firm ASEA. Only one US-based firm, Adept, spun off from Stanford and Unimation, remains, making electric arms. Foreign licensees of Unimation, notably in Japan and Sweden, continued to operate, and in the 1980s other companies in Japan and Europe began to enter the field vigorously. The prospect of an aging population and a consequent worker shortage induced Japanese manufacturers to experiment with advanced automation even before it gave a clear return, opening a market for robot makers. High labor costs in Europe similarly encouraged the adoption of robot substitutes. By the late 1980s Japan was the world leader in the manufacture and use of industrial robots. In the 1990s Korea joined Japan in this Asian growth, while adoption in several European countries put Europe into a strong second place.

The business prospects for industrial robots worldwide seem to be improving in the 2000s. Though slow in comparison with the computer industry, steady improvements in quality, functionality and cost have made industrial robots increasingly attractive. The most advanced robots today are controlled by computers about a thousand times as powerful as those first used in the 1980s. Conversely, computers as powerful as those early ones but costing a thousand times less have allowed the appearance of consumer robot lawn mowers and vacuum cleaners priced under $1,000. Relying on simple navigation strategies, like randomly crisscrossing areas bounded by a perimeter wire or by walls and other barriers, these have found a modest market, selling tens of thousands of units over several years, enough to encourage the development of more advanced models that can, among other things, navigate more systematically. Kärcher, a German cleaning-equipment manufacturer that makes an advanced robotic cleaner, expects that one third of home vacuum cleaners sold will be robotic before 2010.

Robot Toys

Lack of reliable functionality greatly limits the application and market of industrial and service robots. Toy robots, on the other hand, can entertain without performing tasks very reliably, and amusing mechanical robot toys have existed for hundreds of years. In the 1980s a first generation of microprocessor-controlled toys appeared that could speak or move in response to sounds or light. More advanced ones in the 1990s recognized voices and words. In 1999 Sony introduced the AIBO (for AI roBOt, or aibou, Japanese for “pal”), more lifelike than anything before. Costing $2,500, dog-shaped, with two dozen motors to activate legs, head and tail, two microphones and a color camera coordinated by a powerful processor, it chased colored balls, recognized its owner, explored and adapted. The initial run of 5,000, offered on the internet, sold out immediately. By 2003 several hundred thousand, mostly further-developed models costing $1,500, had been sold, making it the most successful robot toy ever. At the same time Sony was demonstrating prototypes of a 50 cm tall bipedal humanoid robot called SDR-4X (Sony Dream Robot or Singing Dancing Robot), with more than twice the AIBO’s capability.

Meanwhile Honda offered a dozen similarly astonishing 120-cm-tall Asimo bipedal robots, able to walk smoothly, climb stairs and shake hands, for show use. Developed over a decade at a cost of about $100 million, and costing about a million dollars per unit, they are rented out for $20,000 per day. Though not marketed as toys, they are incapable of any utilitarian task and run down their batteries in a mere half hour of walking or simply standing. Other major Japanese companies are quietly developing similar, apparently impractical machines.

Why are humanoid robots so much more popular in Japan than in the West? Principals in the Sony and Honda projects state that, while present models cannot do useful work, they expect successors that can, sometime in the future. The experience gained now will give an advantage when robots become a huge market. But walking on two (or four) legs requires a mechanical system tens of times more complex than wheeled transport, with correspondingly greater cost, wear and power consumption. Surely wheeled utility robots, using ramps and elevators or wheeled solutions for stair climbing, will have the economic edge.

But Japanese researchers, and the Japanese public, express a special fondness for humanoids. Comics became hugely popular in Japan after World War II, while technology was seen as a way to recover from the devastation of war. In 1951 a young medical student, Osamu Tezuka, created a comic featuring a small boy who went to school with other children and home to a family, normal in every way except that he and his family were robots. Tetsuwan Atomu was a boy-shaped, atomic-powered hero with a computer brain and integral weapons to defend humans against great dangers. He became and remains hugely popular in books and television programs, and seems to have disposed today’s Japanese adults to long for helpful, feeling androids. Tezuka had read a Japanese translation of R.U.R. in 1938 and was greatly impressed by the idea of feeling robots, not put off by the play’s horrific conclusion. Asian societies generally seem happier with the idea of ersatz humans than Western ones, where “it’s not human!” is a common horror refrain. In part, this may be a difference in religious heritage: biblical religions draw a sharp distinction between soul-endowed humanity and the rest of creation; making a man is God’s business, and otherwise the result is a soulless blasphemy. Shintoism and Buddhism carry forward an animist tradition in which everything in nature, rocks and manufactured objects as well as plants and animals, has an associated spirit. Thus there is no fear of artificial humans: a robot’s spirit can be as fine as a human’s. It may also be significant that Japan missed the “dark satanic mills” trauma of the early industrial revolution, which left technology in the West with a lasting bad first impression.

Robotics Research

Dexterous industrial manipulators and industrial vision have roots in advanced robotics work conducted in Artificial Intelligence laboratories since the late 1960s. Yet, even more than with AI itself, academic robotics’ accomplishments fall far short of the motivating vision of machines with broad human abilities. Techniques for recognizing and manipulating objects, reliably navigating spaces and planning actions work in some narrow contexts but fail in more general circumstances, revealing problems that seem solvable only by long, tedious research trudges, if at all. Though some of the first researchers are still in evidence, about three generations have followed. With major success elusive, some younger participants managed to garner significant attention and support by vociferously repudiating old approaches while making bold promises about developing alternatives. The field of Artificial Intelligence itself was founded in the late 1950s in such a controversy, touting computer-based abstract reasoning in place of the more biologically inspired learning of “Perceptrons,” electronic neural models from an earlier movement called Cybernetics. The claims had substance: by 1980 reasoning programs of a specialized form called “expert systems” had achieved some commercial success in reasoning about molecular structure from X-ray diffraction data, geological evidence for oil exploration, medical diagnoses, credit card fraud detection and other areas. But expert systems could be created only for very narrow domains, and then required long, tedious questioning of human experts to codify premises and processes. Invention of a new, faster learning method for neural nets, which could now be easily simulated on computers, sparked enthusiasm (among students and the scientific press) for “connectionist” programs that learned skills from many examples rather than from manual specification. Though promoted as a way to overcome the limitations of reasoning programs, connectionist programs soon showed their own limitations, requiring huge numbers of examples to learn simple relationships and forgetting old lessons when learning new ones. Like reasoning systems, connectionist programs are now seen as a technique useful in certain limited areas. It turns out that automatically flagging possible credit card fraud is one area where connectionist programs are better than expert systems: the cues are hard to codify explicitly, and the banks can provide plenty of training examples from their billions of transactions.
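
To make the contrast concrete, here is a minimal Python sketch of the connectionist idea: a single artificial neuron adjusts its connection weights from labeled examples rather than from hand-written rules. The toy “transaction” features and data are invented for illustration, not drawn from any real system.

```python
import random

# A single-neuron (perceptron) learner: it is given examples, not rules.
def train_perceptron(examples, epochs=100, rate=0.1):
    """examples: list of (feature_vector, label) pairs with label 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            activation = bias + sum(w * x for w, x in zip(weights, features))
            prediction = 1 if activation > 0 else 0
            error = label - prediction              # -1, 0 or +1
            bias += rate * error
            weights = [w + rate * error * x for w, x in zip(weights, features)]
    return weights, bias

# Invented toy "transactions": [amount in $1000s, foreign merchant?, night-time?] -> fraud?
data = [([0.1, 0, 0], 0), ([0.2, 0, 1], 0), ([5.0, 1, 1], 1), ([4.0, 1, 0], 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```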

The first robot vision programs, into the early 1970s, used statistical formulas to detect linear boundaries in robot camera images and clever geometric reasoning to link these lines into the outlines of probable objects, providing an internal model of the robot’s world. Further geometric formulas related object positions to the joint angles needed for a robot arm to grasp them, or to the steering and drive motions needed to get a mobile robot around (or to) an object. This approach was tedious to program and frequently failed when unplanned image complexities misled the first steps. An attempt in the late 1970s to overcome these limitations by adding expert-system reasoning about the image complexities mainly made the programs more unwieldy, substituting complex new confusions for simpler failures. In the mid 1980s Rod Brooks of the MIT AI lab used this impasse to launch a highly visible new movement that rejected the effort to have machines model their surroundings, replacing it with networks of simple programs connecting sensor inputs to motor outputs, each program encoding a behavior like avoiding a sensed obstacle or heading toward a detected goal. There is evidence that many insects function largely this way, as do parts of larger nervous systems. The approach resulted in some very engaging insectlike robots, but (as with real insects) the behavior is erratic, as sensors are momentarily misled, and it is unsuitable for larger, more dangerous robots. It also provides no direct mechanism for specifying long, complex sequences of actions, yet such sequences are the raison d’être of industrial robot manipulators, and surely of future home robots. Still, the approach has its place. The Roomba, a small robot vacuum cleaner, is made by iRobot, a company co-founded by Brooks. A mix of simple behaviors, spiraling over open areas, bouncing along edges, backing away from precipices, extricating itself with variable turns, retreats and advances, lets it cover a room reasonably effectively with a fraction of the wits of a beetle searching cluttered terrain.
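
The behavior-based style can be suggested with a short Python sketch, purely illustrative and not drawn from any actual product: a few sensor-to-motor behaviors, checked in priority order, with no model of the room at all. The sensor names and action commands are assumptions.

```python
import random

# Behavior-based control in the spirit Brooks popularized: each behavior maps
# sensor readings directly to a motor command; higher-priority behaviors win.

def cliff_behavior(sensors):
    if sensors["cliff"]:
        return ("backup", 0.2)                        # reverse 0.2 m from a drop-off
    return None

def bump_behavior(sensors):
    if sensors["bump"]:
        return ("turn", random.uniform(90, 180))      # turn away a variable amount
    return None

def wander_behavior(sensors):
    return ("forward", 0.1)                           # default: keep moving

BEHAVIORS = [cliff_behavior, bump_behavior, wander_behavior]  # highest priority first

def select_action(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(select_action({"cliff": False, "bump": True}))  # -> a turn command
```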

Meanwhile, hundreds of other researchers pursue a diverse mix of techniques, aiming pragmatically to make robots do things like perceive their surroundings and track their own movements. Some of this is basic research; some is aimed at accomplishing specific tasks, like allowing a rover on Mars to travel a few meters safely between interventions from Earth. One particularly interesting and vigorous area is robot soccer. An international community of researchers began in 1993 to organize an effort to develop fully autonomous robots that could eventually compete in human soccer games, just as chess computers compete in human chess tournaments. The incremental development would be tested in annual machine-versus-machine tournaments. The first “RoboCup” games were held at a 1997 Artificial Intelligence conference in Nagoya, Japan. Forty teams entered in three competition categories: simulation, small robots and middle-size robots (the next size step, human scale, was reserved for the future). The small-robot teams (of about five coffee-can-sized players) were each controlled by an outside computer that viewed the billiard-table-sized playing field through an overhead color camera. To simplify the problem, the field was uniformly green, the ball was bright orange and each player’s top surface bore a unique pattern of large dots, relatively easy for programs to track. The middle-size robots, about the size of breadboxes, had cameras and computers onboard and played on a similarly colored but larger field. Action was fully preprogrammed; no human intervention was allowed during play. In the first tournament, merely finding and pushing the ball was a major accomplishment, but the conference encouraged participants to share developments, and play improved in subsequent years. In 1998 Sony provided some AIBO robot dogs for a new competition category. By 1999 there were over 200 teams. Humanoid robots were exhibited in 2002, though they were not yet ready to compete. Almost 400 teams signed up for RoboCup 2003, in Padua, Italy, and regional tournaments were introduced to cull the final tournament competitors by more than half. AIBOs have become increasingly popular. Remarkably cute in play, they provide a standard, reliable, prebuilt hardware platform that needs only soccer software. Soon humanoid robots, perhaps Sony SDRs, will provide the same advantages and bring the contest closer to the organizers’ stated goal of defeating human teams by 2050. RoboCup soccer seems to be successfully driving a transition from toylike to serious performance in physical robots, analogous to computer chess between the 1960s, when amateurs could beat the best machines, and the 1990s, when computers defeated grandmasters and then the world champion.
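
The color-coded vision task can be suggested in a few lines of Python. The sketch below is an assumption-laden illustration, not actual RoboCup code: it finds the orange ball in an overhead camera frame by thresholding pixel colors and taking the centroid of the matching pixels. The threshold values and frame format are invented.

```python
# `frame` is assumed to be a list of rows, each row a list of (r, g, b) tuples.
def find_ball(frame):
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if r > 180 and 60 < g < 160 and b < 80:   # rough "bright orange" test
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                                   # ball not visible this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))     # image-plane centroid

print(find_ball([[(200, 100, 40), (10, 200, 10)]]))   # -> (0.0, 0.0)
```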

Computer chess did not solve all the problems of general Artificial Intelligence, and even a human-beating robot soccer team will lack the skills for broader work. Research in this larger arena is fragmented. Sensors like sonar and laser rangefinders, cameras and special light sources are used with algorithms that model images or spaces as points, lines, surfaces, grids and other constructs, and attempt to deduce a robot’s position, where and what other things are nearby, and how to accomplish tasks. Some techniques made feasible by increasing computer power in the 1990s have proven broadly effective. One is the statistical weighting of large quantities of individually untrustworthy sensor measurements to mitigate the confusions caused by reflections, blockages, bad illumination and the many, many other surprises of the real world. Another is automatic learning, to classify sensor inputs, for instance into objects or situations, or to translate sensor states directly into desired behavior. Connectionist neural nets containing thousands of adjustable-strength connections are the most famous learners, but more specialized frameworks usually learn faster and better. In some, a program that does the right thing as nearly as can be prearranged has “adjustment knobs” that fine-tune the behavior; it learns by tweaking these knobs to improve measured performance. Another kind of learning remembers a large number of input instances and their correct responses, and interpolates between them to deal with new inputs. Such techniques are already in broad use in programs that turn handwriting and speech into computer text.
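
The statistical-weighting idea can be illustrated with a few lines of Python, using invented numbers rather than real sensor data: each range reading is weighted by its estimated reliability (inverse variance), so a single bad echo cannot drag the combined estimate far off.

```python
def fuse(readings):
    """readings: list of (measured_distance_m, variance_m2) pairs."""
    num = sum(d / var for d, var in readings)
    den = sum(1.0 / var for _, var in readings)
    return num / den                     # variance-weighted mean

# Four sonar readings of the same wall; the 3.5 m value is a spurious reflection.
sonar = [(2.1, 0.09), (2.0, 0.09), (3.5, 1.0), (1.9, 0.04)]
print(fuse(sonar))                       # about 2.0 m despite the outlier
```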

The Future

Limited competence has kept robots out of most workaday positions. The industrial robotics industry actually declined through most of the 1990s before recovering slowly with incremental improvements in cost and performance. But there have been dramatic developments in the early 2000s, led by robot toys with hundreds of times the processing power of most industrial robots. Not far behind, dozens of companies, established and new, are developing cleaning and other utility robots using new sensors, leading-edge computers and algorithms licensed from advanced research efforts. Among the emerging capabilities is the ability of mobile robots to navigate ordinary places without special markers or advance maps. Some systems map the surroundings in 2D or 3D as they travel, enabling the next step of recognizing structural features and smaller objects. The gleam in every one of these companies’ eyes is the prospect of a rapidly growing market for robots directly useful to customers, without the impediment of specialized expert installation.
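
One common way such systems accumulate a 2D map is an occupancy grid. The Python sketch below is a deliberately simplified illustration under assumed interfaces: each range-sensor beam, fired from a known pose, marks the cells it crosses as free and the cell where it ends as occupied.

```python
import math

GRID = 0.1   # cell size in meters (assumed)

def mark_beam(grid, pose, bearing, distance, max_range=5.0):
    """grid: dict mapping (i, j) cells to 'free'/'occupied'; pose: (x, y, heading)."""
    x, y, heading = pose
    steps = round(min(distance, max_range) / GRID)
    for s in range(steps):                              # cells swept by the beam
        px = x + s * GRID * math.cos(heading + bearing)
        py = y + s * GRID * math.sin(heading + bearing)
        grid[(int(px / GRID), int(py / GRID))] = "free"
    if distance < max_range:                            # beam ended at an obstacle
        ox = x + distance * math.cos(heading + bearing)
        oy = y + distance * math.sin(heading + bearing)
        grid[(int(ox / GRID), int(oy / GRID))] = "occupied"

grid = {}
mark_beam(grid, (0.0, 0.0, 0.0), 0.0, 2.0)              # one forward-looking reading
```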

It is, of course, hard to make good predictions (especially about the future), but I’ve made a try, guided by a parallel between the development of robotic mentality and the evolution of our own brains. While Grey Walter’s tortoises behaved like simple bacteria, advanced robots today, for instance the coordinated teams winning RoboCup games, exhibit behaviors as rich and effective (for the job) as those of large insects or of the smallest vertebrates, like guppies. A human brain is about 100,000 times as large as a guppy’s, but computers will be 100,000 times as powerful as today’s in roughly 30 years, if computer power continues to double every year or two. On another tack, the first multicelled animals with nervous systems appeared about 550 million years ago, and ones with brains as advanced as guppies’ perhaps 200 million years later. Self-contained robots covered similar ground in about 20 years: at that pace the remaining 350 million years of our ancestry could be recapitulated robotically in about 35 years. Here’s how I imagine this may play out.
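
The arithmetic behind the roughly 30-year figure can be checked in a couple of lines of Python; the 100,000-fold ratio and the one-to-two-year doubling period are the assumptions stated above.

```python
import math

factor = 100_000                           # assumed brain-power ratio, guppy to human
doublings = math.log2(factor)              # doublings needed for a 100,000-fold gain
print(round(doublings, 1))                 # 16.6
print(round(doublings), round(doublings * 2))  # ~17 to ~33 years at 1- or 2-year doubling
```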

Commercial success will provoke competition and accelerate investment in manufacturing, engineering and research. Vacuuming robots should beget smarter cleaning robots with dusting, scrubbing and picking-up arms, followed by larger multifunction utility robots with stronger, more dexterous arms and better sensors. Programs will be written to make such machines pick up clutter, store, retrieve and deliver things, take inventory, guard homes, open doors, mow lawns, play games and so on. New applications will expand the market and spur further advancements when robots fall short in acuity, precision, strength, reach, dexterity, skill or processing power. Capability, numbers sold, engineering and manufacturing quality, and cost effectiveness will increase in a mutually reinforcing spiral. Perhaps around 2020 the process will have produced the first broadly competent "universal robots," as big as people but with lizardlike minds that can be programmed for almost any simple chore.

Like competent but instinct-ruled reptiles, first-generation universal robots will handle only contingencies explicitly covered in their current application programs. Unable to adapt to changing circumstances, they will often perform inefficiently or not at all. Still, so much physical work awaits them in businesses, streets, fields and homes that robotics could begin to overtake pure information technology commercially.

Within a decade, a second generation of universal robot with a mouselike mind will adapt as the first generation cannot, and even be trainable. Besides application programs, these robots will host a suite of software "conditioning modules" that generate positive and negative reinforcement signals in predefined circumstances. Application programs will have alternatives for every step, small and large (grip underhand or overhand, work indoors or outdoors). As jobs are repeated, alternatives that resulted in positive reinforcement will be favored, those with negative outcomes shunned. With a well-designed conditioning suite (for example, positive reinforcement for doing a job fast and keeping the batteries charged, negative for breaking or hitting something) a second-generation robot will slowly learn to work increasingly well.
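
Purely as a hypothetical illustration of the conditioning idea (nothing here describes an existing robot), a few lines of Python: the alternatives for a step accumulate reinforcement, and the accumulated totals bias later choices toward what has worked.

```python
import random

# Invented alternatives for one step of a job, with running reinforcement totals.
scores = {"grip_overhand": 0.0, "grip_underhand": 0.0}

def choose(alternatives, exploration=0.1):
    """Mostly pick the best-scoring alternative; occasionally try another."""
    if random.random() < exploration:
        return random.choice(alternatives)
    return max(alternatives, key=lambda a: scores[a])

def reinforce(alternative, reward):
    """Apply a positive or negative signal from the conditioning modules."""
    scores[alternative] += reward

step = choose(list(scores))
reinforce(step, +1.0 if step == "grip_overhand" else -0.5)  # stand-in for real feedback
```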

Monkeylike thinking power by 2040 will permit a third generation of robots to learn very quickly from mental rehearsals in simulations that model physical, cultural and psychological factors. Physical properties include the shape, weight, strength, texture and appearance of things and how to handle them. Cultural aspects include a thing's name, value, proper location and purpose. Psychological factors, applied to humans and other robots, include goals, beliefs, feelings and preferences. Developing the simulators will be a huge undertaking involving thousands of programmers and experience-gathering robots. The simulation will track external events and tune its models to keep them faithful to reality. It should let a robot learn a skill by imitation, and afford a kind of consciousness. Asked why there are candles on the table, a third-generation robot might consult its simulation of house, owner and self to honestly reply that it put them there because its owner likes candlelit dinners and it likes to please its owner. Further queries would elicit more details about a simple inner mental life concerned only with concrete situations and the people in its work area.

Before mid century, fourth-generation universal robots with humanlike mental power will be able to abstract and generalize. The first AI programs ever written reasoned abstractly almost as well as people, albeit in very narrow domains, and many existing expert systems outperform us. But the symbols these programs manipulate are meaningless unless interpreted by humans. For instance, a medical diagnosis program needs a human practitioner to enter a patient's symptoms and to implement a recommended therapy. Not so a third-generation robot, whose simulator provides a two-way conduit between symbolic descriptions and physical reality. Fourth-generation machines will result from melding powerful reasoning programs to third-generation machines. They may reason about everyday actions with the help of their simulators, as did one of the first AI programs, the geometry theorem prover written in 1959 at IBM by Herbert Gelernter, which avoided enormous wasted effort by testing analytic-geometry "diagram" examples before trying to prove general geometric statements. It managed to prove most of the theorems in Euclid's Elements, and even improved on one. Properly educated, the resulting robots are likely to become intellectually formidable, besides being soccer stars.

H.P. Moravec

Additional reading
Hans Moravec, Robot: Mere Machine to Transcendent Mind (1999), expands on the topics of this article.
Frederik L. Schodt, Inside the Robot Kingdom: Japan, Mechatronics and the Coming Robotopia (1988), details the interesting history of robots in Japan.
Rolf Dieter Schraft and Gernot Schnierer, Service Robots: Products, Scenarios, Visions (2000), gives a broad pictorial survey of robots to do useful work outside the factory.
Silvia Coradeschi, Satoshi Tadokoro, Rainer P. S. Noske and Andreas Birk (editors), RoboCup 2001: Robot Soccer World Cup V (Lecture Notes in Artificial Intelligence) (2002), provides technical presentations by participants in the vigorous robot soccer field. Look for the latest volume in this series.
Joseph L. Jones, Anita M. Flynn, Mobile Robots: Inspiration to Implementation (1993), shows how to construct a small robot using an approach that, by 2003, had produced the Roomba robotic vacuum cleaner.
Gordon McComb, Robot Builder’s Bonanza (2000), is a practical guide to amateur robotics, which has become one of the most popular science hobbies.
Peter Menzel and Faith D’Aluisio, Robo sapiens: Evolution of a New Species (2000), provides dramatic images and commentary on research robots worldwide.