Mars Autonomy Project Stereo Ranging

Overview
The Mars Pathfinder Sojourner rover used a combination of stereo ranging and a laser light-striper to give ground operators a rich view of the terrain it travelled through and to allow limited autonomy. The Mars Surveyor 2003 and 2005 missions, to which the Mars Autonomy Project plans to contribute, will have similar sensing and much greater autonomy.

We use a stereo calibration and matching algorithm developed by the Machine Vision and Tracking Sensors Group at JPL to convert a pair of greyscale images from our cameras into a cloud of (x,y,z) points on the surface of the terrain ahead. Some of the properties that distinguish stereo ranging from other techniques are:

  • It is very computationally intensive and returns large data sets.
  • Lack of texture in the terrain or unusual lighting conditions can lead to gaps in the data.
  • Precise camera calibration allows much more efficient matching algorithms, since corresponding points can then be sought along a single scanline rather than across the whole image.

Our traversability mapper module processes the range data to generate traversability maps, which rate how easily the rover can drive over each point in the terrain. These maps are passed on to the local obstacle avoidance and global path planning modules, which determine the path the rover will take.

Camera Calibration
Stereo Cameras

Stereo Calibration Cube

Stereo ranging begins with camera calibration. We use a cube with black dots laid out in a square grid. During calibration, a simple image processing algorithm finds the centroids of the dots with some help from a human operator. The positions of the dots let us estimate:
  • Intrinsic parameters, which relate to the camera itself: the properties of the CCD and the lens. Some of these, like the focal length and size of CCD pixels, are known in advance. The radial lens distortion, however, needs to be calculated.

    If the camera had no radial distortion, the image of each plane of the cube would be a perspective transform of the grid (in particular, the dots would lie in straight lines). By measuring the deviations from the expected grid, we can come up with a best-fit model of the distortion and correct for it; a sketch of this fitting step appears after this list.

  • Extrinsic parameters, which describe the relative position and orientation of the individual cameras to each other and to the world. Because the actual size of the cube is precisely known, the disparity between dot positions in the left and right images lets us infer the length of the stereo baseline. We can also correct for discrepancies in the focal axis and "up" direction of the cameras.
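
To make the distortion-fitting step concrete, here is a minimal sketch in Python with NumPy (illustrative only; the JPL calibration code uses its own, more complete camera model). It fits a single-coefficient radial model to the measured dot centroids by linear least squares:

    import numpy as np

    def fit_radial_distortion(ideal, observed, center):
        # ideal:    (N, 2) dot positions predicted by a pure perspective model
        # observed: (N, 2) dot centroids actually measured in the image
        # center:   (2,) principal point, assumed known here
        # Model: observed = center + (ideal - center) * (1 + k1 * r^2),
        # with r the radius of the ideal point from the principal point.
        d = ideal - center
        r2 = np.sum(d * d, axis=1)
        # The residual is linear in k1 (observed - ideal = k1 * r^2 * d),
        # so a one-variable least-squares solve recovers k1.
        a = (d * r2[:, None]).ravel()
        b = (observed - ideal).ravel()
        return np.dot(a, b) / np.dot(a, a)

    def undistort(points, center, k1):
        # First-order correction: divide out the fitted radial term.
        d = points - center
        r2 = np.sum(d * d, axis=1)
        return center + d / (1.0 + k1 * r2)[:, None]

With the distortion divided out, the dot images again lie on straight lines, as the perspective model predicts.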

Stereo Matching
Stereo Pair: Left and Right

Stereo Disparity Map

Stereo Point Cloud

The Matching Process
Stereo ranging relies on finding the same object at different positions in the left and right camera images. From the disparity between the positions, one can infer the range to the object. The disparity map shown above plots the disparity of the matched points in the stereo pair: shades of green mark the lowest disparity (longest range), fading all the way to white, which marks the highest disparity (shortest range).
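
For an idealized, rectified camera pair the geometry reduces to a single formula: with focal length f (in pixels), stereo baseline B, and disparity d (in pixels), the range is Z = fB/d. With an illustrative f of 500 pixels and a 10 cm baseline, for example, a disparity of 25 pixels puts a point at 500 x 0.10 / 25 = 2 m. Because Z varies as 1/d, range resolution degrades quickly with distance: a one-pixel disparity error displaces a distant point much farther than a near one.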

You can try doing your own matching on the stereo pair shown above. Look at the pair and pretend you are trying to focus on an object far behind your monitor screen, until the left and right images appear to fuse in the middle. Once you have the middle image in sharp focus, you should be able to perceive depth in the scene, much as our stereo algorithm does. You may have better luck with the full-size pair, keeping your head at least 3-4 feet from the monitor.

After we have ranges to points on the terrain ahead at various positions in the image, some trigonometry lets us generate (x,y,z) points in the rover's coordinate frame. The point cloud image above is a snapshot from a 3D browser looking at such a point cloud. The black patches are "shadows" behind the rocks, where no data could be collected because the terrain was obscured.
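
As a rough sketch of that step (Python with NumPy; the function and parameter names are illustrative, not the project's actual interface), the reprojection from a disparity map to a point cloud looks like this. A further rigid transform, known from the camera mounting geometry, carries the points from the camera frame into the rover's coordinate frame:

    import numpy as np

    def disparity_to_points(disp, f, baseline, cx, cy):
        # disp: 2-D disparity map in pixels; zero marks unmatched pixels.
        # f: focal length in pixels; baseline: camera separation in meters;
        # (cx, cy): principal point. All values here are illustrative.
        v, u = np.indices(disp.shape)      # pixel row and column grids
        valid = disp > 0
        z = f * baseline / disp[valid]     # range from disparity
        x = (u[valid] - cx) * z / f        # lateral offset by similar triangles
        y = (v[valid] - cy) * z / f        # vertical offset likewise
        return np.column_stack((x, y, z))  # (N, 3) camera-frame points

    # e.g. cloud = disparity_to_points(disp, f=500.0, baseline=0.10,
    #                                  cx=319.5, cy=239.5)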

Traversability Mapper
Traversability Mapper Screen Shot

For navigation purposes, we want to tag each position in the map with a measure of how hazardous it is. This information is useful both for local obstacle avoidance and global path planning.

Using local least-squares plane-fitting on range data, the traversability mapper calculates the expected roll and pitch of the vehicle at each map grid cell in the sensor footprint. The residual error of the plane fit is used as an estimate of the small-scale roughness of the terrain.
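
A minimal version of the per-cell computation might look like the following sketch (Python with NumPy), under the simplifying assumption that the map's x axis is the direction of travel, which the real mapper need not make:

    import numpy as np

    def fit_cell_plane(points):
        # points: (N, 3) range points falling in one map grid cell.
        # Fit the plane z = a*x + b*y + c by linear least squares.
        A = np.column_stack((points[:, 0], points[:, 1], np.ones(len(points))))
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        a, b, c = coeffs
        pitch = np.arctan(a)   # slope along the assumed direction of travel
        roll = np.arctan(b)    # slope across it
        residual = points[:, 2] - A @ coeffs
        roughness = np.sqrt(np.mean(residual ** 2))  # RMS fit error
        return roll, pitch, roughness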

With its knowledge of the abilities of the mobile platform, like maximum allowable roll and the chassis clearance, the mapper assigns a traversability score to each position in the map grid. The traversability map is the basis on which our obstacle avoidance and path planning algorithms make their navigation decisions.
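
A toy scoring rule, with made-up limits standing in for the real platform parameters, might combine the plane-fit statistics like this:

    def traversability(roll, pitch, roughness,
                       max_tilt=0.26, max_roughness=0.05):
        # max_tilt (radians) and max_roughness (meters) are illustrative
        # stand-ins for the platform limits; the project's actual scoring
        # function is not documented here.
        tilt_score = 1.0 - max(abs(roll), abs(pitch)) / max_tilt
        rough_score = 1.0 - roughness / max_roughness
        return max(0.0, min(tilt_score, rough_score))  # worst factor dominates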

The density of range points in each grid cell provides a measure of certainty for the traversability score. The different modules respond to uncertainty in different ways. The path planner optimistically assumes that unknown areas are clear of hazards (allowing it to plan paths through unexplored areas), while the local obstacle avoidance system will not allow the vehicle to move into unknown terrain.
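
In code, the two policies might diverge on a sentinel value for unexplored cells, along these purely illustrative lines:

    UNKNOWN = -1.0  # sentinel for cells with too few range points

    def planner_score(score):
        # Global planner: optimistic, so unknown terrain looks traversable.
        return 1.0 if score == UNKNOWN else score

    def avoidance_allows(score, threshold=0.3):
        # Local obstacle avoidance: conservative; never enter unknown
        # terrain. The threshold is illustrative.
        return score != UNKNOWN and score >= threshold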

A sample traversability map is shown above. Traversability fades from best (green) to worst (red). Certainty fades from highest (bright red or green) to lowest (black).

Last modified: 28 June 1999