Semantic Mapping at the RSE-lab

The goal of this project is to generate maps that describe environments in terms of objects and places. Such representations contain far more useful information than traditional maps, and enable robots to interact with humans in a more natural way.

Project Contributors

Dieter Fox, Bertrand Douillard, Stephen Friedman, Benson Limketkai, Fabio Ramos


Learning to label places

To label the places in an indoor environment, our robot first builds an occupancy grid map using a laser range finder. It then labels every point on the Voronoi graph of this map, taking local shape and connectivity information into account. The parameters of the underlying statistical model are learned from previously explored environments.
The left image shows an occupancy map along with the Voronoi graph (the skeleton of the free space) extracted from the map. Every point on the Voronoi graph is labeled using a Conditional Random Field. The center image shows the resulting rooms (green), hallways (pink), and doorways (blue). The labeling and connectivity structure are used to generate a mixed topological/metric representation, as shown in the right image. Note that the robot has not "seen" this environment before. In addition to place labeling, we develop techniques that enable robots to detect different objects in indoor environments (see the publications below).
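The pre-processing behind this pipeline can be sketched in a few lines of Python: extract a skeleton of the free space from the occupancy grid and connect neighboring skeleton cells into a graph whose nodes a CRF can then label. This is a minimal illustration under generic assumptions, not the lab's actual code; the occupancy threshold, the single clearance feature, and the function name build_voronoi_graph are placeholders.

    # Sketch: extract a free-space skeleton ("Voronoi graph") from an
    # occupancy grid and build the neighbor graph over which a CRF would
    # label each cell as room, hallway, or doorway.
    import numpy as np
    from skimage.morphology import skeletonize
    from scipy.ndimage import distance_transform_edt

    def build_voronoi_graph(occupancy, occupied_thresh=0.65):
        """occupancy: 2D array of occupancy probabilities in [0, 1]."""
        free = occupancy < occupied_thresh      # free-space mask (assumed threshold)
        skeleton = skeletonize(free)            # 1-pixel-wide skeleton of free space

        # Distance to the nearest obstacle is a simple local-shape feature:
        # narrow clearance suggests a doorway, wide clearance a room.
        clearance = distance_transform_edt(free)

        nodes = [tuple(p) for p in np.argwhere(skeleton)]
        index = {p: i for i, p in enumerate(nodes)}

        # 8-connectivity between skeleton cells gives the graph structure
        # used for the CRF's pairwise (connectivity) potentials.
        edges = []
        for (r, c) in nodes:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) == (0, 0):
                        continue
                    q = (r + dr, c + dc)
                    if q in index and index[(r, c)] < index[q]:
                        edges.append((index[(r, c)], index[q]))

        features = np.array([[clearance[p]] for p in nodes])
        return nodes, edges, features

In the actual system each node carries richer local shape features, and the CRF's pairwise potentials couple the labels of connected skeleton cells, which is what produces the contiguous rooms, hallways, and doorways shown in the center image.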

Object detection in urban environments

In collaboration with the Australian Centre for Field Robotics, we use a car equipped with a laser range finder and a camera to detect objects in urban environments. The laser points are projected into the camera images and labeled with the type of object they hit. This labeling is done using a Conditional Random Field that takes both shape and appearance information into account.
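The projection step can be illustrated with a standard pinhole camera model: each 3D laser return is transformed into the camera frame and projected to a pixel, where appearance features can then be sampled. The sketch below is illustrative only; the calibration inputs, the interface, and the function name project_laser_to_image are assumptions, not the project's code.

    # Sketch: project laser returns into a camera image so each point can be
    # paired with local appearance features for the CRF.
    import numpy as np

    def project_laser_to_image(points_laser, K, R, t, image_shape):
        """
        points_laser : (N, 3) points in the laser frame.
        K            : (3, 3) camera intrinsic matrix.
        R, t         : rotation (3, 3) and translation (3,) from laser to camera frame.
        image_shape  : (height, width) of the camera image.
        Returns pixel coordinates and the indices of the projected points.
        """
        pts_cam = points_laser @ R.T + t          # laser frame -> camera frame
        in_front = pts_cam[:, 2] > 0              # keep points in front of the camera
        pts_cam = pts_cam[in_front]

        pix_h = pts_cam @ K.T                     # pinhole projection (homogeneous)
        pix = pix_h[:, :2] / pix_h[:, 2:3]        # normalize by depth

        h, w = image_shape
        inside = (pix[:, 0] >= 0) & (pix[:, 0] < w) & \
                 (pix[:, 1] >= 0) & (pix[:, 1] < h)
        idx = np.flatnonzero(in_front)[inside]    # indices into the original point set
        return pix[inside].astype(int), idx

Each returned pixel location can be used to look up color or texture features in the image, which are then combined with the laser's shape features in the CRF's node potentials.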

Main publications