Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments


Shichao Yang, Yu Song, Michael Kaess, Sebastian Scherer

shichaoy@andrew.cmu.edu


Abstract

Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because there are only a few salient features. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize planes or scene layout for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM to demonstrate that scene understanding could improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
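The core geometric idea behind popping up planes from a single image can be illustrated with a small sketch. Assuming a calibrated camera at a known height above the ground (the intrinsics, camera height, and pixel coordinates below are illustrative, not values from the paper), each pixel on a detected ground-wall boundary is back-projected onto the ground plane, and a vertical wall plane is fitted through the resulting 3D points:

```python
import numpy as np

# Hypothetical intrinsics and camera height (illustrative values only).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0,   0.0,   1.0]])
cam_height = 1.0  # meters above the ground, camera frame: x right, y down, z forward

def backproject_to_ground(u, v):
    """Intersect the pixel ray with the ground plane y = cam_height."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = cam_height / ray[1]   # scale the ray so it reaches the ground
    return t * ray            # 3D point in the camera frame

# Two endpoints of a (hypothetical) detected ground-wall boundary segment.
p1 = backproject_to_ground(100.0, 300.0)
p2 = backproject_to_ground(250.0, 320.0)

# The wall is vertical: its plane contains p1, p2 and the up direction.
edge = p2 - p1
up = np.array([0.0, -1.0, 0.0])   # -y is "up" in this camera frame
normal = np.cross(edge, up)
normal /= np.linalg.norm(normal)
d = -normal @ p1                  # plane equation: normal . x + d = 0
```

Each such plane then serves as a landmark measurement in the plane SLAM back end; this sketch only shows the single-image pop-up step, not the full pipeline.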

Video



Method Overview

Dataset


Paper

(PDF)

Bibtex

Last updated: Sept 17, 2016