RGB-D kernel descriptors are a general approach for extracting multi-level representations from high-dimensional structured data such as images, depth maps, and 3D point clouds.
This project develops novel reinforcement learning methods that learn tasks in only a few trials and can run on real (non-simulated) robots in a reasonable amount of time.
The Gambit manipulator is a novel robotic arm combined with an
RGB-D camera, used for interacting dexterously with small-scale
physical objects, as in game playing.
In this project we address joint object category, instance, and pose recognition in
the context of rapid advances in RGB-D cameras, which combine visual and
3D shape information. The focus is on detection and classification of
objects in indoor scenes, such as in domestic environments.
A large dataset of 300 common household objects recorded using a Kinect-style 3D camera.
We align RGB-D (Red, Green, Blue plus Depth) point clouds acquired with a depth camera to create globally consistent dense 3D maps.
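A common building block of such alignment pipelines is the iterative closest point (ICP) algorithm. The following is a minimal point-to-point sketch in Python (brute-force nearest-neighbor matching plus a closed-form Kabsch alignment step); the project's actual system additionally uses visual features and global loop-closure optimization, which this toy version omits:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning P onto Q (Kabsch)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=30):
    """Iterate: match each point of P to its nearest neighbor in Q,
    then apply the closed-form rigid alignment to the matches."""
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        match = Q[d.argmin(1)]
        R, t = best_rigid_transform(P, match)
        P = P @ R.T + t
    return P
```

In a full RGB-D mapping system, each pairwise ICP result becomes a constraint in a pose graph that is optimized for global consistency.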
We address the problem of active object investigation using robotic manipulators and Kinect-style RGB-D sensors. To do so, we jointly tackle the issues of sensor-to-robot calibration, manipulator tracking, and 3D object model construction. We additionally consider the problem of motion and grasp planning to maximize coverage of the object.
In this project we use inverse reinforcement learning to train a planner for natural and efficient robotic motion in crowded environments.
The goal of this project is to integrate Gaussian process prediction
and observation models into Bayes filters. These GP-BayesFilters are
more accurate than standard Bayes filters using parametric models. In
addition, GP models naturally supply the process and observation noise
necessary for Bayesian filters.
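The core idea can be sketched in a few lines of Python: a GP regression model whose predictive mean plays the role of the (process or observation) model and whose predictive variance supplies the noise term. The kernel choice and hand-set hyperparameters below are illustrative assumptions, not the ones used in the project:

```python
import numpy as np

def rbf_kernel(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel for 1-D inputs (assumed hyperparameters)."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

class GPModel:
    """GP regression: the predictive mean serves as the filter's model,
    the predictive variance as its state-dependent noise."""
    def __init__(self, X, y, noise=0.1):
        self.X = X
        K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, xs):
        Ks = rbf_kernel(xs, self.X)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = rbf_kernel(xs, xs).diagonal() - np.sum(v**2, axis=0)
        return mean, var
```

A particle filter or (extended/unscented) Kalman filter can then call `predict` in its prediction and correction steps, using `var` where a parametric filter would plug in fixed process or observation noise.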
The goal of this project is to generate models that describe
environments in terms of objects and places. Such representations
contain far more useful information than traditional maps, and enable
robots to interact with humans in a more natural way.
This project aims at learning and estimating high-level activities
from raw sensor data. To do so, we strongly rely on the estimates
generated by our people tracking approaches. We recently
demonstrated that it is possible to learn typical outdoor
navigation patterns of a person using raw GPS data. For example,
our approach uses EM to learn where a person typically gets on or
off the bus. Such techniques allow hand-held computer devices to
assist people with cognitive disorders during their everyday lives.
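The EM idea can be illustrated on synthetic data: a two-component 1-D Gaussian mixture standing in for clusters of "get on/off" transition points along a route. This is a toy sketch under assumed data; the real system models transportation modes over full GPS traces:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])   # spread-out initialization
    var = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates
        n = r.sum(0)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n + 1e-6
        pi = n / len(x)
    return np.sort(mu)
```

The learned component means correspond to the typical locations where the transitions occur.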
We are interested in the development of robust and efficient map
building techniques. We developed different solutions to this
problem, ranging from expectation maximization (EM) to
Rao-Blackwellised particle filters. We also introduced novel
coordination strategies for large teams of mobile robots. Within
the CentiBots project, we developed a decision-theoretic approach
that enables teams of robots to build a consistent map of an
environment even when the robots start from different, completely unknown locations.
The task sounds simple: Program Sony AIBO robots to play
soccer. We use RoboCup to investigate techniques for multi-robot
collaboration, active sensing, and efficient state estimation.
Our multi-model technique for ball tracking allows our robots to
accurately track the ball and its interactions with the
environment, even under the highly non-linear dynamics typically
occurring during a soccer game. Our active sensing strategy is
based on reinforcement learning. It takes into account
which uncertainty has to be minimized at each
point in time (for example, relative ball position uncertainty
vs. robot location uncertainty).
Robot localization is an important application driving our
research in belief representations and particle filtering for
state estimation. Localization is one of the most fundamental
problems in mobile robotics. With our collaborators, we introduced
grid-based approaches, tree-based representations, and particle
filters for robot localization. We were the first to solve the
global localization problem, which requires a robot to estimate
its position within an environment from scratch, i.e., without
knowledge of its start position.
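A minimal 1-D sketch shows why particle filters handle global localization: particles start uniformly over the whole corridor because the pose is unknown, and observations prune the hypotheses until one cluster survives. The single-landmark world and the noise settings below are illustrative assumptions:

```python
import numpy as np

def localize(controls, observations, landmark=5.0, world=10.0, n=2000, seed=0):
    """Particle-filter global localization on a 1-D corridor."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0, world, n)    # unknown start: uniform prior
    for u, z in zip(controls, observations):
        # prediction: apply the control with motion noise
        particles = particles + u + rng.normal(0, 0.05, n)
        # correction: weight by likelihood of the measured landmark distance
        w = np.exp(-0.5 * ((np.abs(particles - landmark) - z) / 0.1) ** 2) + 1e-12
        particles = rng.choice(particles, n, p=w / w.sum())   # resample
    return particles
```

The first observation leaves two symmetric clusters (the landmark distance is ambiguous); the known motion direction eliminates the wrong one on subsequent updates.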
Knowing and predicting the locations of people moving through an
environment is a key component of many pro-active service
applications, including mobile robots. Depending on the task and
the available sensors, we apply joint probabilistic data
association filters, Rao-Blackwellised particle filters, and
Voronoi-based particle filters to estimate locations of
people. Such estimates form the foundation for learning typical
motion patterns of people, as used in the activity recognition project described above.
The plant care project helps us to investigate how mobile robots
can interact with environments that are equipped with networks of
sensors. The task of the robot is to water the plants and
calibrate the sensors in the environment.
The reliability of probabilistic methods for mobile robot
navigation has been demonstrated during the deployment of the
mobile robots Rhino and Minerva as tour-guides in two populated
museums. The task of these robots was to guide people through the
exhibitions of the "Deutsches Museum Bonn" in Germany and the
"National Museum of American History" in Washington, DC.