Principal Investigator: Hans P. Moravec
Institution: Carnegie Mellon University
Phone Number: (412) 268-3829
E-Mail: hpm@rover.ri.cmu.edu
Contract Title: Autonomous Mobile Robots
Contract Number: N00014-81-K-0503
Reporting Period: 1 Oct 89 - 30 Sep 90

Section 2 - Technical Summary

In the last year we continued to develop, extend, and experiment with our Certainty Grid approach to interpreting mobile robot sensor data. We examined new approaches to learning sensor models, and we began work on a major extension that will allow the approach to accommodate more effectively readings from sonar sensors operating in highly specular environments. We also spent some effort improving the Uranus mobile robot, which has been the experimental platform for the sensor work.

Background

Since 1981 we have conducted basic research on autonomous robots. We have built three mobile robots and worked with two others, and with them developed and improved new concepts in vision- and sonar-guided navigation and in robot control. These programs used a variety of approaches, some more successful than others. In the last three years we have concentrated on the most successful: a probabilistic representation of spatial knowledge we call Certainty Grids. Certainty Grids are a powerful and efficient unifying solution for the interpretation of vision, sonar, laser, touch, and other sensor data, for sensor fusion, motion planning, extended landmark identification, and many other central mobile robot problems. They provide a way of accumulating partial information from multiple noisy sources into clear images of the robot's surroundings.

Our early experimentation was with descendants of a system we originally developed at Stanford University during the 1970s that drove a mobile robot known as the Stanford Cart [Moravec80]. This system used a spatial point-list representation of its surroundings derived from stereo vision, and with it the robot was able to navigate and map complex, cluttered rooms. However, it suffered from brittleness: even in its most polished incarnation it would become confused about its position in one room crossing in four.

With this experience behind us, we were delighted in 1984 when we achieved near 100% success in room crossings with a new approach we had invented to handle broad-beam sonar data [Moravec85, Elfes86, Kadonoff86]. In it, the space around the robot is represented as a grid of cells, each containing a number giving the estimated probability that the corresponding area is occupied. Each sonar reading made by the robot lowers the probability values of an angular swath of space in front of the sensor's position, and raises the probabilities of an arc (corresponding to the range surface) at the outer bound of the swath. By itself, a single reading gives an unusably fuzzy image of the world. But several hundred readings taken from different points of view (both from different sensors on the robot and from different locations as the robot moves), whose swaths overlap, produce an increasingly detailed map of the surroundings when they are combined in the grid. In past years we have successfully used this method on data from Polaroid sonar transducers, on range data extracted from binocular television images, and on a combination of the two. Recently we replaced our initial ad hoc statistical formulas with a comprehensive Bayesian formulation that produces better results.
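The essence of the update can be sketched in a few lines of Python. The constants, the cell indexing, and the log-odds encoding below are illustrative assumptions for this sketch, not a transcription of our actual programs:

    import numpy as np

    # Illustrative log-odds occupancy-grid update for one wide-beam
    # sonar reading. Cell values hold log-odds of occupancy; 0 means
    # "unknown" (probability 0.5). All constants are invented.

    HALF_BEAM = np.radians(15.0)   # half-angle of the sonar beam
    RIDGE     = 0.10               # range-ridge half-thickness (meters)
    L_EMPTY   = -0.4               # log-odds decrement inside the swath
    L_OCC     = +1.2               # log-odds increment on the range arc

    def update_grid(grid, cell_size, sx, sy, heading, measured_range):
        """Fold one sonar reading into a log-odds certainty grid.

        grid       -- 2-D numpy array of log-odds values
        cell_size  -- meters per cell
        sx, sy     -- sensor position in meters
        heading    -- beam axis direction in radians
        """
        rows, cols = grid.shape
        for r in range(rows):
            for c in range(cols):
                dx = c * cell_size - sx
                dy = r * cell_size - sy
                dist = np.hypot(dx, dy)
                if dist == 0.0 or dist > measured_range + RIDGE:
                    continue
                # Angle between the cell and the beam axis, wrapped.
                off = np.arctan2(dy, dx) - heading
                off = np.arctan2(np.sin(off), np.cos(off))
                if abs(off) > HALF_BEAM:
                    continue
                if dist < measured_range - RIDGE:
                    grid[r, c] += L_EMPTY    # swath: probably empty
                else:
                    grid[r, c] += L_OCC      # arc: probably occupied

    def occupancy(grid):
        """Convert log-odds back to probabilities."""
        return 1.0 / (1.0 + np.exp(-grid))

    # Example: a 10 m x 10 m area at 10 cm resolution, one 3 m reading
    # taken from the center, looking along +x.
    grid = np.zeros((100, 100))
    update_grid(grid, 0.1, sx=5.0, sy=5.0, heading=0.0, measured_range=3.0)

Note that accumulating evidence by adding log-odds in this way embodies exactly the assumption that readings are statistically independent, the assumption that the specular-surface extension described below is designed to relax.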
In the past year we experimented with learning methods for deriving the probability distributions that describe individual sensor readings. We have also begun to investigate a new approach that maintains several probabilities at each cell to better model situations where sonar is used in smooth-walled environments. In such environments the sound usually fails to return to the transducer when it encounters a wall at an oblique angle, instead bouncing away from the transducer as from a mirror.

Sensor Model Learning

In our early Certainty Grid work the probability distributions that describe individual sensor readings (called the sensor model) were constructed by hand, from ad hoc interpretations of our knowledge of the sensor function. This risky approach succeeded when the data came from sonar readings made in our laboratory, whose walls were lined with shelves and other clutter, so that almost every reading was an accurate indication of distance to a surface. It also worked well with range data from a stereo vision system, which contained only a small percentage of misleading ranges. In both these cases the resulting map was insensitive to the details of the sensor model.

This was emphatically not true when we ran our robot with sonar transducers in an environment of smooth walls, where many measurements failed to give correct distances because the sound was bounced away specularly. We found that it was possible to produce better maps in those difficult conditions (as judged by a measure giving the log probability that the reconstructed map approximates a manually entered perfect map) by carefully tuning the parameters of the function that generated our sensor model distributions. Such parameter-adjustment learning can be done with a modest amount of data, for instance the few hundred readings obtained in a single room crossing.

We also investigated another learning technique that requires no a priori sensor model function. In this approach, a very extensive set of readings (typically tens of thousands of range measurements) is taken at various known points in a known environment. Statistics are derived from this data set about the average occupancy of positions within the sensor's field of view for various (suitably quantized) range values, and the resulting distributions are used directly as a sensor model. The method has the advantage that it makes no assumptions about the nature of the sensor, eliminating the possibility that an over-confining model function prevents the correct distribution from being represented.

It has several disadvantages. The first is that it requires much more training data than the model-based version, and so is much more expensive to derive; even with tens of thousands of training samples, the resulting distributions still contain a great deal of sampling noise. A related problem is that the distributions are somewhat blurred by this noise, and so produce unnecessarily blurry maps when they are later used to build maps in subsequent robot runs. A third, more serious, problem is that the learned distributions contain systematic sampling artifacts. For instance, in one of our examples the training was done in a 12 foot wide room. The sensor model distribution for a 10 foot range reading contains, as expected, a lowered-probability depression in the sensor's direction of view, bounded by a raised-probability ridge at the 10 foot distance. But there is also a slight ridge two feet behind the sensor, since in the 12 foot room there was often a wall two feet behind the sensor when one was detected ten feet in front. There are also lowered-probability hollows to the left and right of the sensor's viewing angle, reflecting the convexity of most of the surfaces detected in the training sample. These artifacts cause severe degradations when the sensor model is used to construct maps.

Because of these limitations, we have abandoned the model-free approach, and have used the parameterized sensor-model learning method exclusively in our recent work. Our best current sonar model has nine parameters, representing the initial beam angle, the depth of the empty well, the height of the range ridge, the thickness of the range ridge, and constants expressing how these quantities change with distance from the sensor.
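A minimal sketch of what such a parameterized model and its tuning objective might look like follows. The functional form, the parameter names and values, and our guess at the ninth parameter (a range bias) are illustrative assumptions, not the distributions actually used:

    import numpy as np

    # Hypothetical nine-parameter sonar model in the spirit described
    # above: four base quantities plus constants giving their variation
    # with range. The exact breakdown of the nine is our assumption.
    PARAMS = dict(
        beam_angle   = np.radians(15.0),  # initial beam half-angle
        well_depth   = 0.15,              # initial empty-well depth
        ridge_height = 0.40,              # initial range-ridge height
        ridge_thick  = 0.10,              # initial ridge thickness (m)
        k_angle      = -0.005,            # beam-angle change per meter
        k_well       = -0.010,            # well-depth change per meter
        k_height     = -0.020,            # ridge-height change per meter
        k_thick      = +0.010,            # ridge-thickness change per meter
        k_bias       = 0.0,               # range bias per meter (guessed
                                          # ninth parameter)
    )

    def sensor_model(dist, angle_off, rng, p=PARAMS):
        """Occupancy-evidence value for a cell at distance `dist` and
        angle `angle_off` off the sensor axis, given a range reading
        `rng`. The four base quantities vary linearly with range."""
        beam  = p['beam_angle']   + p['k_angle']  * rng
        well  = p['well_depth']   + p['k_well']   * rng
        high  = p['ridge_height'] + p['k_height'] * rng
        thick = p['ridge_thick']  + p['k_thick']  * rng
        r     = rng * (1.0 + p['k_bias'])
        if abs(angle_off) > beam or dist > r + thick:
            return 0.0           # outside the beam: no evidence
        if dist < r - thick:
            return -well         # empty well
        return high              # range ridge

    def map_score(prob_map, truth):
        """Tuning objective: log probability that the reconstructed map
        matches a hand-entered reference map (1 = occupied, 0 = empty)."""
        p = np.clip(prob_map, 1e-6, 1.0 - 1e-6)
        return float(np.sum(truth * np.log(p) +
                            (1 - truth) * np.log(1 - p)))

Learning then amounts to searching this nine-dimensional parameter space, for instance by simple hill climbing, for the setting that maximizes the map score on maps rebuilt from the few hundred training readings.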
Specular Surface Models

By far the most serious problem in building maps from sonar data is that of specular reflections. In a highly specular environment, a surface may fail to be detected in nine out of ten rangings. This can be handled in our basic approach by making the well of lowered probabilities in front of the transducer in our sensor model distributions very shallow, saying, in effect, that failure to detect a reflector at a particular distance is not strong evidence of its absence. The shallow well then erases map regions only weakly when called up by a reading, and it takes many readings to adequately build up evidence of empty space. But the standard approach assumes that the "failure to detect" errors are statistically independent. This is grossly incorrect: the errors are highly systematic. A specular surface viewed at a perpendicular angle almost always gives a good return, while it usually fails to give a return when viewed at a sufficiently large oblique angle.

We have experimented with, and are in the process of efficiently implementing, an approach that deals with this. Instead of a single probability per map cell, representing the hypothesis that the cell is simply occupied, the new approach maintains several (typically 8) probabilities per cell, each representing the hypothesis that the cell contains a reflector oriented in a particular small range of angles. Each sensor model distribution is correspondingly split into several distributions. In the empty-well area of a sensor model distribution, the well can now be made deep for the hypothesis that a cell contains a patch of surface perpendicular to the line from the transducer, while remaining very shallow for the hypotheses concerning oblique angles.

A specular surface imaged by this approach has all of its angle hypotheses raised to a high probability level when it is viewed perpendicularly (and thus detected) by a sensor reading. But oblique (non-detecting) readings lower only the probability corresponding to the angle that is perpendicular to the current line of sight. In time, when the point has been viewed from many angles, all but one of the angle hypotheses for the cell will be reduced to a low value. The remaining one, corresponding to the actual orientation of the surface, cannot be lowered this way, since the surface is correctly detected whenever it is perpendicular to the line of sight. In limited experiments thus far the method has greatly improved the maps we obtain in a narrow, specular hallway.
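The cell-level bookkeeping can be sketched as follows. The constants, the quantization of orientations into bins, and the log-odds encoding are illustrative assumptions for this sketch, not our implementation:

    import numpy as np

    N_ANGLES = 8   # orientation hypotheses per cell ("typically 8" above)

    # Illustrative evidence values. Non-detection digs a deep well only
    # for the hypothesis of a surface perpendicular to the line of
    # sight; the oblique hypotheses get a very shallow well.
    L_DEEP    = -1.0    # deep well for the perpendicular hypothesis
    L_SHALLOW = -0.05   # shallow well for the oblique hypotheses
    L_HIT     = +1.2    # a detection raises all orientation hypotheses

    def angle_bin(angle):
        """Quantize an (undirected) surface orientation into a bin."""
        return int(((angle % np.pi) / np.pi) * N_ANGLES) % N_ANGLES

    def update_cell(cell, line_of_sight, detected):
        """Update one cell's orientation hypotheses for one reading.

        cell          -- array of N_ANGLES log-odds values
        line_of_sight -- direction from sensor to cell, radians
        detected      -- True if the reading's range arc covers the cell
        """
        # A specular reflector returns the pulse only when it is
        # roughly perpendicular to the line of sight.
        perp = angle_bin(line_of_sight + np.pi / 2)
        if detected:
            cell += L_HIT       # detection supports every orientation
        else:
            for b in range(N_ANGLES):
                cell[b] += L_DEEP if b == perp else L_SHALLOW

Repeated over many vantage points, the deep decrements extinguish every orientation hypothesis except the one matching the surface's true orientation, which each perpendicular detection reinforces.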
References

[Elfes89] Elfes, A., Using Occupancy Grids for Mobile Robot Perception and Navigation, IEEE Computer Magazine, special issue on Autonomous Intelligent Machines, June 1989, pp. 46-58.

[Moravec89] Moravec, H. P. and D. W. Cho, A Bayesian Method for Certainty Grids, Working Notes of the AAAI 1989 Spring Symposium Series, Symposium on Mobile Robots, Stanford, CA, March 28-30, 1989.

[Stewart88] Stewart, W. K., A Model-Based Approach to 3-D Imaging and Mapping Underwater, Proceedings of the 7th International Conference and Exhibit on Offshore Mechanics and Arctic Engineering (OMAE), Houston, Texas, February 7-12, 1988.

[Moravec87] Moravec, H. P., Certainty Grids for Sensor Fusion in Mobile Robots, AI Magazine, Summer 1988, pp. 61-77.

[Matthies87] Matthies, L. H. and A. Elfes, Sensor Integration for Robot Navigation: Combining Sonar and Stereo Range Data in a Grid-Based Representation, Proceedings of the 26th IEEE Conference on Decision and Control, Los Angeles, CA, December 9-11, 1987.

[Stewart87] Stewart, W. K., A Non-Deterministic Approach to 3-D Modeling Underwater, 5th Symposium on Unmanned Untethered Submersible Technology, University of New Hampshire, June 1987.

[Serey86] Serey, B. and L. H. Matthies, Obstacle Avoidance using 1-D Stereo Vision, CMU Robotics Institute Report, November 1986.

[Elfes86] Elfes, A., Sonar-Based Real-World Mapping and Navigation, IEEE Journal of Robotics and Automation, vol. RA-3, no. 3, June 1987, pp. 249-266.

[Kadonoff86] Kadonoff, M., F. Benayad-Cherif, A. Franklin, J. Maddox, L. Muller, B. Sert and H. Moravec, Arbitration of Multiple Control Strategies for Mobile Robots, SPIE Conference on Advances in Intelligent Robotics Systems, Cambridge, Massachusetts, October 26-31, 1986; SPIE Proceedings Vol. 727, paper 727-10.

[Moravec85] Moravec, H. P. and A. E. Elfes, High Resolution Maps from Wide Angle Sonar, Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, March 1985, pp. 116-121.

[Thorpe84] Thorpe, C. E., Path Relaxation: Path Planning for a Mobile Robot, Proceedings of AAAI-84, Austin, Texas, August 6-10, 1984, pp. 318-321.

[Moravec80] Moravec, H. P., Robot Rover Visual Navigation, UMI Research Press, Ann Arbor, Michigan, 1981.