Robotics
Physically Grounded Language Understanding
Attribute-Based Object Identification

Over the last few years, the robotics community has made substantial progress in the detection and 3D pose estimation of known and unknown objects. However, the question of how to identify objects based on language descriptions has not been investigated in detail. While the computer vision community has recently started to investigate the use of attributes for object recognition, these approaches do not consider the task settings typically observed in robotics, where a combination of appearance attributes and object names might be used in referring language to identify specific objects in a scene.
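Below is a minimal sketch of resolving such a referring expression over segmented objects, assuming a pre-trained classifier per attribute or object-name word; the classifiers and the naive-Bayes-style score combination are illustrative assumptions, not the project's actual model.

```python
import numpy as np

# A minimal sketch of resolving a referring expression such as "the red
# mug" over segmented objects, assuming a pre-trained classifier per
# attribute or object-name word. The classifiers and the naive-Bayes-
# style combination below are illustrative assumptions, not the
# project's actual model.

def identify(objects, words, classifiers):
    """Return the index of the object best matching the description words.

    objects     -- list of per-object feature vectors
    words       -- attribute/name words extracted from the description
    classifiers -- dict: word -> callable(features) -> probability
    """
    scores = []
    for feats in objects:
        # Treat each word as conditionally independent given the object.
        log_score = sum(np.log(classifiers[w](feats) + 1e-9)
                        for w in words if w in classifiers)
        scores.append(log_score)
    return int(np.argmax(scores))

# Toy usage: pretend f[0] measures "redness" and f[1] a mug-detector score.
classifiers = {"red": lambda f: f[0], "mug": lambda f: f[1]}
objects = [np.array([0.9, 0.8]), np.array([0.2, 0.9]), np.array([0.8, 0.1])]
print(identify(objects, ["red", "mug"], classifiers))  # -> 0
```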
RGB-D Object Dataset
A large dataset of 300 common household objects recorded using a Kinect-style 3D camera.
RGB-D Mapping: Using Depth Cameras for Dense 3D Mapping
Hierarchical Matching Pursuit for RGB-D Recognition

Object Segmentation from Motion

We cannot be sure where one object ends and another begins unless we see them move relative to each other. In this project we investigate motion as a cue for segmenting objects, making use of passive sensing as well as active vision, and of both long-term and short-term motion.
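As a concrete illustration of the short-term motion cue, here is a minimal frame-differencing baseline that groups moving pixels into connected components; it sketches the general idea, not the project's actual method.

```python
import numpy as np
from scipy import ndimage

# A minimal short-term-motion baseline: difference two grayscale frames,
# threshold, and group the moving pixels into connected components. This
# illustrates the cue, not the project's actual method.

def motion_segments(frame_prev, frame_curr, thresh=15, min_pixels=50):
    """Return a label image of moving regions (0 = background)."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    moving = diff > thresh                # pixels that changed appreciably
    labels, n = ndimage.label(moving)     # 4-connected components by default
    for k in range(1, n + 1):
        if np.sum(labels == k) < min_pixels:
            labels[labels == k] = 0       # drop tiny noise blobs
    return labels

# Usage with two synthetic 8-bit frames:
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 50:80] = 200                  # an object that "moved" into view
print(np.unique(motion_segments(prev, curr)))  # [0 1]
```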
Data-Efficient Robot Reinforcement Learning

How long does it take for a robot to learn a task from scratch if no informative prior knowledge is given? Typically, very long. This project aims at developing and applying novel reinforcement learning methods to low-cost off-the-shelf robots to make them learn tasks in only a few trials. We use a standard robot arm by Lynxmotion and a Kinect depth camera (total cost: 500 USD) and demonstrate that fully autonomous learning (with random initializations) requires only a few trials.
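The data efficiency comes from learning a model of the robot's dynamics and planning against it, rather than learning by raw trial and error. The toy loop below illustrates that idea with a least-squares linear model and a random-shooting planner; the project itself relies on probabilistic dynamics models and gradient-based policy search rather than this simplified setup.

```python
import numpy as np

# A toy model-based learning loop illustrating where the data efficiency
# comes from: fit a dynamics model to logged transitions, then plan by
# simulating candidate action sequences in that model (random shooting).
# The linear model and random-search planner are illustrative
# assumptions, not the project's actual methods.

class LinearModel:
    """Least-squares fit of s' = [s, a] @ W from logged transitions."""
    def fit(self, S, A, S_next):
        X = np.hstack([S, A])
        self.W, *_ = np.linalg.lstsq(X, S_next, rcond=None)

    def predict(self, s, a):
        return np.concatenate([s, a]) @ self.W

def reward(s):
    return -np.sum(s ** 2)  # drive the state toward zero

def plan(model, state, horizon=5, candidates=200, rng=None):
    """Return the first action of the best simulated action sequence."""
    rng = rng or np.random.default_rng(0)
    best_a, best_ret = None, -np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=(horizon, 1))
        s, total = state, 0.0
        for a in actions:
            s = model.predict(s, a)  # roll out in the learned model
            total += reward(s)
        if total > best_ret:
            best_ret, best_a = total, actions[0]
    return best_a

# Toy usage: true dynamics s' = 0.9 s + 0.5 a, learned from 50 transitions.
rng = np.random.default_rng(1)
S = rng.normal(size=(50, 1))
A = rng.uniform(-1, 1, size=(50, 1))
model = LinearModel()
model.fit(S, A, 0.9 * S + 0.5 * A)
print(plan(model, np.array([1.0])))  # a negative action, pushing s toward 0
```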
RGB-D Kernel Descriptors

3-D Object Discovery Using Motion

Gaussian Processes for Bayesian State Estimation
Neural Systems Lab
Neurobotics Lab
RGB-D Object Recognition and Detection

Robotic In-Hand 3D Object Modeling
Imitation Learning in Humanoid Robots
We are developing new probabilistic methods that allow a humanoid robot to follow gaze, infer intent, and learn new actions and skills from a human teacher in much the same way that human infants and adults learn from observing others. Such an approach opens the door to a potentially powerful way to program general-purpose humanoid robots -- through human demonstration and interaction -- obviating the need for complex physics-based models and explicit programming of behaviors.
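As one small illustration of probabilistic intent inference, the sketch below maintains a posterior over candidate goals given a partially observed reaching motion. The likelihood model (each step points noisily toward the intended goal) is an assumption made for this sketch, not the lab's actual formulation.

```python
import numpy as np

# A minimal Bayesian goal-inference sketch: given a partially observed
# reaching motion, update a posterior over candidate goal locations. The
# likelihood (each step points noisily toward the intended goal) is an
# illustrative assumption, not the lab's actual model.

def goal_posterior(traj, goals, noise=0.5):
    """Posterior over discrete candidate goals (2D points) given a trajectory."""
    log_post = np.zeros(len(goals))  # uniform prior over goals
    for s, s_next in zip(traj[:-1], traj[1:]):
        step = s_next - s
        for i, g in enumerate(goals):
            direction = (g - s) / (np.linalg.norm(g - s) + 1e-9)
            expected = direction * np.linalg.norm(step)  # same-length step toward g
            log_post[i] += -np.sum((step - expected) ** 2) / noise
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
traj = [np.array([0.0, 0.0]), np.array([0.3, 0.05]), np.array([0.6, 0.1])]
print(goal_posterior(traj, goals))  # posterior mass concentrates on goal 0
```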
Model-based control through numerical optimization

We are developing new methods for control optimization aimed at real-time robotic control. This is a collaboration between Zoran Popovic's group, which developed high-fidelity controllers for physics-based animation, and Emo Todorov's group, which developed efficient algorithms for optimal control as well as a new physics engine tailored to control optimization. Our approach combines offline learning of semi-global value functions and control policies with online trajectory optimization or model-predictive control (MPC).
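A schematic version of that combination: online trajectory optimization over a short horizon, with an offline-learned value function standing in for all cost beyond the horizon. The dynamics, costs, quadratic value stand-in, and random-search optimizer below are toy assumptions, not the group's actual solvers.

```python
import numpy as np

# Schematic MPC step combining online trajectory optimization with an
# offline-learned terminal value function, as described above. All
# components here are toy assumptions for illustration.

def dynamics(s, a):
    return 0.95 * s + 0.1 * a  # toy linear plant

def running_cost(s, a):
    return np.sum(s ** 2) + 0.01 * a ** 2

def terminal_value(s):
    return 5.0 * np.sum(s ** 2)  # stand-in for a learned value function V(s)

def mpc_step(state, horizon=8, iters=200, rng=None):
    """Optimize a short action sequence; return only its first action."""
    rng = rng or np.random.default_rng(0)
    best_a, best_cost = None, np.inf
    for _ in range(iters):  # crude random-search trajectory optimizer
        actions = rng.uniform(-1, 1, size=horizon)
        s, cost = state, 0.0
        for a in actions:
            cost += running_cost(s, a)
            s = dynamics(s, a)
        cost += terminal_value(s)  # the value function caps the short horizon
        if cost < best_cost:
            best_cost, best_a = cost, actions[0]
    return best_a

print(mpc_step(np.array([1.0])))  # a negative action, pushing s toward 0
```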
Design of biomimetic robot hands

We are designing and building state-of-the-art robot hands that aim to replicate the functionality of the human hand. These include the Anatomically Correct Testbed (ACT) hand, which closely mimics the joint kinematics and tendon networks of the human hand and is the most realistic robot hand available. Emo Todorov's group recently joined this effort, building the Modular Robot (ModBot) hand, whose fingers are comparable to haptic robots in speed and compliance while being designed for manipulation.
Robotic Pile Sorting and Manipulation
We are investigating strategies for robot interaction with piles of objects and materials in cluttered scenes. In particular, interaction with unstructured sets of objects will allow a robot to explore and manipulate novel items in order to perform useful tasks, such as counting, arranging, or sorting, even without a prior model of the objects.
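The simulated loop below sketches the shape of such a strategy: grasp an isolated item if one exists, otherwise push to spread (singulate) the pile. The perception and action steps are stand-ins for the robot's real vision and manipulation stack; only the decision loop is the point.

```python
import random

# A simulated sketch of a pile-interaction strategy: grasp an isolated
# item if one exists, otherwise push to singulate the pile. Perception
# and action are simulated stand-ins for the real robot stack.

def sort_pile(pile, bins, rng=None):
    """pile: list of (category, isolated) tuples; bins: dict category -> list."""
    rng = rng or random.Random(0)
    while pile:
        isolated = [o for o in pile if o[1]]
        if not isolated:
            # "Push" the pile: spreading makes some items graspable again.
            pile = [(cat, rng.random() < 0.5) for cat, _ in pile]
            continue
        item = isolated[0]
        pile.remove(item)                          # grasp the isolated item
        bins.setdefault(item[0], []).append(item)  # place it in its bin
    return bins

pile = [("red", False), ("blue", False), ("red", True)]
print(sort_pile(pile, {}))  # items sorted into a 'red' bin and a 'blue' bin
```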
Hobbes, Our Favorite PR2 Robot
Hobbes is our PR2 Robot, from Willow Garage. It is a very capable mobile manipulation platform that allows us to test our newly developed technologies in a complete, fully functional robotic system, without having to build everything from scratch.
Pre-Touch Sensing with the Seashell Effect
Seashell effect pre-touch sensing is a new form of sensing that helps robots sense the shape and material of objects before they grasp. "Pretouch" refers to sensing modalities that are intermediate in range between tactile sensing and vision. This novel pretouch technique is effective on materials for which prior pretouch techniques fail. Seashell effect pretouch is inspired by the phenomenon of "hearing the sea" when a seashell is held to the ear, and relies on the observation that the "sound of the sea" changes as the distance from the seashell to the head varies.
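A toy model of how such a sensor could report distance, assuming the sensing cavity behaves like a quarter-wave resonator whose effective length grows as its opening nears a surface, shifting the resonant peak of the ambient noise. The model form and constants are illustrative assumptions, not the sensor's calibrated behavior.

```python
# Toy seashell-effect model: a nearby surface lengthens the effective
# quarter-wave cavity and shifts its resonant peak; calibrating the
# frequency-vs-distance curve lets the sensor report distance. The model
# and constants below are illustrative assumptions only.

C_SOUND = 343.0  # speed of sound in air, m/s

def peak_frequency(cavity_len, distance, coupling=0.004):
    """Assumed forward model: a nearby surface lengthens the effective cavity."""
    effective_len = cavity_len + coupling / (distance + 0.001)
    return C_SOUND / (4.0 * effective_len)  # quarter-wavelength resonance

def estimate_distance(f_measured, cavity_len, coupling=0.004):
    """Invert the assumed model: recover distance from the measured peak."""
    effective_len = C_SOUND / (4.0 * f_measured)
    return coupling / (effective_len - cavity_len) - 0.001

f = peak_frequency(cavity_len=0.02, distance=0.005)     # surface 5 mm away
print(round(estimate_distance(f, cavity_len=0.02), 4))  # ~0.005 m
```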
Brain-Computer Interfacing
In 2006, our group became one of the first to demonstrate the control of a humanoid robot using a non-invasive brain-computer interface (BCI). The system consists of a robot, an electrode cap for sensing brainwaves, and a graphical user interface for controlling the robot remotely. Our original research demonstrated that the BCI can be used to command a HOAP-2 humanoid robot to select and fetch desired objects from remote locations. We have more recently proposed a framework for adaptive hierarchical brain-computer interfacing that allows the user to teach the robot new behaviors on the fly.
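A minimal sketch of one way such a BCI can select a command: average the EEG epochs recorded after each option's stimulus and pick the option whose evoked response best matches a calibrated target template. The P300-style paradigm here is an illustrative assumption, not necessarily the deployed system's design.

```python
import numpy as np

# A minimal sketch of EEG-based command selection: average the epochs
# recorded after each option's stimulus and pick the option whose evoked
# response best matches a calibrated target template. This P300-style
# scheme is an illustrative assumption about the paradigm.

def select_command(epochs_per_option, target_template):
    """epochs_per_option: dict mapping option -> array (n_epochs, n_samples)."""
    scores = {}
    for option, epochs in epochs_per_option.items():
        avg = epochs.mean(axis=0)  # averaging suppresses background EEG noise
        # Correlation with the template scores "this option was attended".
        scores[option] = np.corrcoef(avg, target_template)[0, 1]
    return max(scores, key=scores.get)

# Toy data: the attended option's epochs contain the evoked template.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, np.pi, 100))
epochs = {
    "fetch": template + rng.normal(0, 1.0, size=(20, 100)),
    "walk": rng.normal(0, 1.0, size=(20, 100)),
}
print(select_command(epochs, template))  # -> "fetch"
```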