Simultaneous localization and mapping (SLAM) has been a major focus of mobile robotics research for many years. We combine state-of-the-art visual odometry and pose-graph optimization techniques with a combined color and depth camera to build accurate, dense maps of indoor environments. Adding per-pixel depth to conventional color-camera techniques improves both the accuracy and the density of our maps.
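
To make one piece of this pipeline concrete: frame-to-frame camera motion can be estimated by aligning matched 3D feature points from consecutive RGB-D frames, which is where depth pays off over color alone. The sketch below is a minimal illustration, not our full system; the function name and the use of plain NumPy are ours. It solves for the least-squares rigid transform between two sets of matched 3D points with the standard SVD-based (Kabsch/Horn) method; in a full system this step typically runs inside a RANSAC loop over visual feature matches.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of matched 3D feature locations, e.g.
    back-projected keypoints from two consecutive RGB-D frames.
    Minimizes sum ||R @ src_i + t - dst_i||^2 via SVD (Kabsch/Horn).
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)

    # Cross-covariance of the centered correspondences.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)

    # Guard against a reflection sneaking into the rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Chaining these pairwise transforms yields a visual-odometry trajectory; pose-graph optimization then corrects the accumulated drift when loop closures are detected.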

The goal of this research is to create dense 3D point cloud maps of building interiors using newly available, inexpensive depth cameras such as the Microsoft Kinect. These cameras provide per-pixel depth readings aligned with the image pixels from a standard color camera. We refer to this combined data as RGB-D data (Red, Green, Blue plus Depth).
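
To illustrate what RGB-D data provides: each valid depth pixel can be back-projected through the pinhole camera model into a colored 3D point, which is how the dense point clouds are formed. The sketch below is a minimal example, assuming NumPy arrays and known intrinsics fx, fy, cx, cy; the names and interface are illustrative rather than taken from our system.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project an RGB-D frame into a colored 3D point cloud.

    depth: (H, W) depth image in meters (0 where depth is invalid).
    rgb:   (H, W, 3) color image aligned with the depth image.
    fx, fy, cx, cy: pinhole intrinsics of the depth camera.
    Returns (N, 3) points and (N, 3) colors for the valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0

    # Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy

    points = np.stack([x, y, z], axis=-1)
    colors = rgb[valid]
    return points, colors
```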

We collaborated with a team at MIT using a Kinect mounted on a quadrotor to localize and build 3D maps using our system, as shown in this video. More information on this collaboration is available here.

These techniques have been extended to real-time interactive mapping, allowing users to collect accurate maps and observe map completeness during data capture. These maps have applications in localization, virtual remodeling, and telepresence.

This work has been funded in part by the National Science Foundation under award number IIS-0812671.