Robotics

Physically Grounded Language Understanding

A number of long-term goals in robotics – for example, using robots in household settings – require robots to interact with humans. In this project, we explore how robots can learn to connect natural language to the physical world they sense and manipulate, an area of research that falls under grounded language acquisition.

Attribute Based Object Identification

In recent years, the robotics community has made substantial progress in the detection and 3D pose estimation of known and unknown objects. However, the question of how to identify objects based on language descriptions has not been investigated in detail. While the computer vision community has recently begun to investigate the use of attributes for object recognition, these approaches do not consider the task settings typical of robotics, where a combination of appearance attributes and object names may be used in referring language to identify specific objects in a scene.

RGB-D Object Dataset

A large dataset of 300 common household objects recorded using a Kinect-style 3D camera.

RGB-D Mapping: Using Depth Cameras for Dense 3D Mapping

We align RGB-D (Red, Green, Blue plus Depth) point clouds acquired with a depth camera to create globally consistent dense 3D maps.
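For intuition, the sketch below shows the frame-to-frame half of such a pipeline: a minimal point-to-point ICP alignment (a simplified illustration assuming numpy and scipy; the full system additionally uses visual features, loop-closure detection, and global pose-graph optimization to achieve consistency).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Align source (Nx3) to target (Mx3); returns a 4x4 rigid transform."""
    T = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        # 1. Data association: nearest target neighbor of each source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid alignment of the matched pairs (Kabsch / SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```

Chaining such alignments over successive depth frames yields the visual-odometry backbone on which the global consistency machinery builds.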

Hierarchical Matching Pursuit for RGB-D Recognition

Hierarchical Matching Pursuit uses sparse coding to learn codebooks at each layer in an unsupervised way and then builds hierarchical feature representations from the learned codebooks. It achieves state-of-the-art results on many types of recognition tasks.
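As a rough illustration of one such layer, the sketch below learns a codebook with sparse coding and max-pools the resulting codes over a spatial grid (assuming scikit-learn; the published method uses K-SVD dictionaries, several stacked layers, and additional normalization, so treat this as a simplified stand-in).

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def hmp_layer(image, patch=5, codebook_size=64, cells=4):
    """One layer: sparse-code dense patches, then spatially max-pool."""
    H, W = image.shape
    X = np.asarray([image[i:i + patch, j:j + patch].ravel()
                    for i in range(H - patch) for j in range(W - patch)],
                   dtype=float)
    X -= X.mean(axis=1, keepdims=True)          # per-patch mean removal
    # Unsupervised codebook learning; encoding via orthogonal matching pursuit.
    dl = MiniBatchDictionaryLearning(n_components=codebook_size,
                                     transform_algorithm='omp',
                                     transform_n_nonzero_coefs=4)
    codes = np.abs(dl.fit(X).transform(X))
    codes = codes.reshape(H - patch, W - patch, codebook_size)
    # Max-pool the sparse codes over a cells x cells spatial grid.
    pooled = []
    for rows in np.array_split(codes, cells, axis=0):
        for cell in np.array_split(rows, cells, axis=1):
            pooled.append(cell.max(axis=(0, 1)))
    return np.concatenate(pooled)               # cells^2 * codebook_size dims
```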

Object Segmentation from Motion

We can't be sure where objects are unless we see them move relative to each other. In this project we investigate using motion as a cue to segment objects. We can make use of passive sensing or active vision, and both long-term and short-term motion, to aid segmentation.
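A minimal version of the short-term-motion cue might look like the following sketch, which thresholds dense optical flow between two consecutive frames (assuming OpenCV; the project itself also exploits active vision and long-term motion).

```python
import cv2
import numpy as np

def motion_mask(frame_prev, frame_next, mag_thresh=1.5):
    """Segment moving regions from two consecutive grayscale frames."""
    # Dense Farneback optical flow: per-pixel (dx, dy) displacement.
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)
    # Pixels that moved relative to the background become object candidates.
    mask = (mag > mag_thresh).astype(np.uint8)
    # Clean up with morphology, then label connected components as segments.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels = cv2.connectedComponents(mask)
    return labels  # 0 = static background, 1..n-1 = moving segments
```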

Data-Efficient Robot Reinforcement Learning

How long does it take a robot to learn a task from scratch if no informative prior knowledge is given? Typically, very long. This project aims at developing and applying novel reinforcement learning methods to low-cost off-the-shelf robots so that they learn tasks in only a few trials. We use a standard robot arm by Lynxmotion and a Kinect depth camera (total cost: 500 USD) and demonstrate that fully autonomous learning (with random initializations) requires only a few trials.
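The code below is an illustrative sketch of such a model-based loop, not the project's exact algorithm: a Gaussian process dynamics model is refit after every real trial, and random-shooting MPC (standing in for the actual policy-search method) plans against the model rather than the robot. `env` is a hypothetical task whose `step` returns the next state and a done flag; `act_dim` and the cost are placeholders as well.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def rollout(env, policy, horizon=50):
    """Run one real trial, recording (state, action, next_state) triples."""
    data, s = [], env.reset()
    for _ in range(horizon):
        a = policy(s)
        s2, done = env.step(a)
        data.append((s, a, s2))
        s = s2
        if done:
            break
    return data

def plan(gp, s0, act_dim, horizon=10, n_candidates=256):
    """Random-shooting MPC against the learned model; returns first action."""
    best_a, best_ret = None, -np.inf
    for _ in range(n_candidates):
        actions = np.random.uniform(-1, 1, (horizon, act_dim))
        s, ret = s0, 0.0
        for a in actions:
            s = gp.predict(np.concatenate([s, a])[None])[0]   # model step
            ret -= np.linalg.norm(s)     # hypothetical cost: reach the origin
        if ret > best_ret:
            best_a, best_ret = actions[0], ret
    return best_a

# Outer loop: one random trial, then alternate model fitting and planning.
data = rollout(env, lambda s: np.random.uniform(-1, 1, act_dim))
for trial in range(5):                   # "a few trials only"
    X = np.asarray([np.concatenate([s, a]) for s, a, _ in data])
    Y = np.asarray([s2 for _, _, s2 in data])
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, Y)
    data += rollout(env, lambda s: plan(gp, s, act_dim))
```

The data efficiency comes from squeezing every observed transition into the model, so that most of the trial-and-error happens in simulation rather than on the hardware.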

RGB-D Kernel Descriptors

Kernel descriptors are a general approach for extracting multi-level representations from high-dimensional structured data such as images, depth maps, and 3D point clouds.
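The gradient match-kernel idea behind the approach can be sketched with random Fourier features playing the role of the finite-dimensional kernel approximation (the published method instead uses kernel PCA over joint basis vectors; everything below is a simplified illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64                                       # descriptor dimensionality
W_rff = rng.normal(scale=2.0, size=(D, 2))   # random frequencies
b_rff = rng.uniform(0, 2 * np.pi, D)         # random phases

def gradient_kernel_descriptor(patch):
    """Magnitude-weighted match-kernel features over a grayscale patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    # Per-pixel attribute: orientation on the unit circle (handles wraparound).
    attr = np.stack([np.cos(theta), np.sin(theta)], axis=-1).reshape(-1, 2)
    # Random Fourier feature map phi(x), approximating a Gaussian kernel.
    phi = np.sqrt(2.0 / D) * np.cos(attr @ W_rff.T + b_rff)
    # Magnitude-weighted average pooling = evaluating the match kernel.
    w = (mag / (mag.sum() + 1e-8)).reshape(-1, 1)
    return (w * phi).sum(axis=0)
```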

3-D Object Discovery Using Motion

In contrast to object recognition or object detection, which match sensor data to existing object models, object discovery creates object models. This requires other information sources to compensate for the lack of models. In this project, we investigate using the 3-D motion of surface patches between multiple maps of the same environment as such a cue.
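In its simplest form, the motion cue amounts to differencing two aligned maps and clustering the changed points, as in this sketch (assuming scipy and scikit-learn; the thresholds are hypothetical).

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def discover_moved_objects(map_a, map_b, move_thresh=0.05):
    """Find candidate objects as point clusters that moved between maps."""
    # A point in map_b with no close counterpart in map_a likely moved.
    dist, _ = cKDTree(map_a).query(map_b)
    moved = map_b[dist > move_thresh]
    if len(moved) == 0:
        return []
    # Spatially cluster the changed points into candidate object models.
    labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(moved)
    return [moved[labels == k] for k in set(labels) if k != -1]
```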

Gaussian Processes for Bayesian State Estimation

The goal of this project is to integrate Gaussian process prediction and observation models into Bayes filters. These GP-BayesFilters are more accurate than standard Bayes filters using parametric models. In addition, GP models naturally supply the process and observation noise necessary for Bayesian filters.
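A minimal sketch of the idea, using a particle filter as the Bayes filter and scikit-learn's GP regressor (trained on (state, control) -> next-state pairs) as the process model; the observation model `obs_likelihood` is a hypothetical placeholder.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_pf_step(gp, particles, weights, control, z, obs_likelihood):
    """One predict/update cycle of a particle filter with a GP process model."""
    n, d = particles.shape
    X = np.hstack([particles, np.tile(control, (n, 1))])
    mean, std = gp.predict(X, return_std=True)
    # Predict: sample the GP process model; its predictive std supplies the
    # process noise, as described above.
    particles = mean + std.reshape(n, -1) * np.random.randn(n, d)
    # Update: reweight by the observation likelihood and normalize.
    weights = weights * obs_likelihood(z, particles)
    return particles, weights / weights.sum()
```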

Neural Systems Lab

The lab's research focuses on understanding the brain using computational models and simulations and applying this knowledge to the task of building intelligent robotic systems and brain-computer interfaces (BCIs). The lab utilizes data and techniques from a variety of fields, ranging from neuroscience and psychology to machine learning and statistics.

Neurobotics Lab

At the Neurobotics Lab, we focus on creating bio- and neural-inspired robotic algorithms and systems and use them to understand human movement, advance robotic control, and rehabilitate or assist human movement capabilities. We are motivated to provide solutions for people who have difficulty moving around or manipulating objects, while taking pleasure in the scientific and engineering contributions we make.

RGB-D Object Recognition and Detection

In this project we address joint object category, instance, and pose recognition in the context of rapid advances in RGB-D cameras that combine visual and 3D shape information. The focus is on the detection and classification of objects in indoor scenes, such as domestic environments.

Robotic In-Hand 3D Object Modeling

We address the problem of active object investigation using robotic manipulators and Kinect-style RGB-D depth sensors. To do so, we jointly tackle the issues of sensor-to-robot calibration, manipulator tracking, and 3D object model construction. We additionally consider the problem of motion and grasp planning to maximize coverage of the object.

Imitation Learning in Humanoid Robots

We are developing new probabilistic methods that allow a humanoid robot to follow gaze, infer intent, and learn new actions and skills from a human teacher in much the same way that human infants and adults learn from observing others. Such an approach opens the door to a potentially powerful way to program general-purpose humanoid robots -- through human demonstration and interaction -- obviating the need for complex physics-based models and explicit programming of behaviors.

Model-based control through numerical optimization

We are developing new methods for control optimization aimed at real-time robotic control. This is a collaboration between Zoran Popovic's group, which developed high-fidelity controllers for physics-based animation, and Emo Todorov's group, which developed efficient algorithms for optimal control as well as a new physics engine tailored to control optimization. Our approach combines offline learning of semi-global value functions and control policies with online trajectory optimization or model-predictive control (MPC).
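The online half of the approach can be sketched as receding-horizon trajectory optimization on a toy double integrator (a minimal illustration with a hypothetical quadratic cost; the real system optimizes through the dedicated physics engine and uses the learned value functions as terminal costs).

```python
import numpy as np
from scipy.optimize import minimize

dt, horizon = 0.05, 20

def dynamics(s, u):
    """Toy double integrator: state = (position, velocity)."""
    pos, vel = s
    return np.array([pos + dt * vel, vel + dt * u])

def trajectory_cost(u_seq, s0, goal):
    s, cost = s0, 0.0
    for u in u_seq:
        s = dynamics(s, u)
        cost += (s[0] - goal) ** 2 + 0.01 * u ** 2    # state + control cost
    return cost

def mpc_step(s0, goal, u_init):
    # Optimize the whole action sequence, but execute only its first action
    # and re-plan from the next measured state (receding horizon).
    res = minimize(trajectory_cost, u_init, args=(s0, goal),
                   method='L-BFGS-B')
    return res.x[0], res.x

s, u_warm = np.array([0.0, 0.0]), np.zeros(horizon)
for _ in range(100):
    u0, u_warm = mpc_step(s, goal=1.0, u_init=u_warm)
    u_warm = np.roll(u_warm, -1); u_warm[-1] = 0.0    # warm-start next solve
    s = dynamics(s, u0)                               # apply to the plant
```

Warm-starting each solve with the shifted previous solution is what makes re-planning at every step cheap enough for real-time control.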

Design of biomimetic robot hands

We are designing and building state-of-the-art robot hands that aim to replicate the functionality of the human hand. These include the Anatomically Correct Testbed (ACT) hand, the most realistic robot hand available, which closely mimics the joint kinematics and tendon networks of the human hand. Emo Todorov's group recently joined this effort, building the Modular Robot (ModBot) hand, whose fingers have speed and compliance characteristics similar to haptic robots while being designed for manipulation.

Robotic Pile Sorting and Manipulation

We are investigating strategies for robot interaction with piles of objects and materials in cluttered scenes. In particular, interaction with unstructured sets of objects will allow a robot to explore and manipulate novel items in order to perform useful tasks, such as counting, arranging, or sorting, even without a prior model of the objects.

Hobbes, Our Favorite PR2 Robot

Hobbes is our PR2 Robot, from Willow Garage. It is a very capable mobile manipulation platform that allows us to test our newly developed technologies in a complete, fully functional robotic system, without having to build everything from scratch.

Pre-Touch Sensing with the Seashell Effect

Seashell effect pretouch sensing is a new form of sensing that helps robots sense the shape and material of objects before they grasp. "Pretouch" refers to sensing modalities that are intermediate in range between tactile sensing and vision. The technique is effective on materials for which prior pretouch techniques fail. Seashell effect pretouch is inspired by the phenomenon of "hearing the sea" when a seashell is held to the ear, and relies on the observation that the "sound of the sea" changes as the distance from the seashell to the head varies.
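One way to picture the signal processing, sketched under loose assumptions: a spectral feature of the in-cavity microphone signal shifts as an object approaches, so a calibration curve can map feature values to standoff distance. The feature choice, numbers, and calibration values below are all hypothetical illustrations, not the published method's parameters.

```python
import numpy as np
from scipy.signal import welch

def spectral_centroid(mic_samples, fs=44100):
    """A simple scalar feature of the cavity's acoustic response."""
    freqs, psd = welch(mic_samples, fs=fs, nperseg=1024)
    return (freqs * psd).sum() / psd.sum()

# Hypothetical calibration curve: centroid (Hz) measured at known standoff
# distances (m), e.g. by stepping the finger toward a flat surface.
cal_dist = np.array([0.002, 0.005, 0.010, 0.020, 0.040])
cal_feat = np.array([3200.0, 2900.0, 2600.0, 2300.0, 2100.0])  # made-up values

def estimate_distance(mic_samples):
    """Map the current spectral feature onto the calibration curve."""
    f = spectral_centroid(mic_samples)
    order = np.argsort(cal_feat)          # np.interp needs ascending xp
    return float(np.interp(f, cal_feat[order], cal_dist[order]))
```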

Brain-Computer Interfacing

In 2006, our group became one of the first to demonstrate control of a humanoid robot using a non-invasive brain-computer interface (BCI). The system consists of a robot, an electrode cap for sensing brainwaves, and a graphical user interface for controlling the robot remotely. Our original research demonstrated that the BCI can be used to command a HOAP-2 humanoid robot to select and fetch desired objects from remote locations. More recently, we have proposed a framework for adaptive hierarchical brain-computer interfacing that allows the user to teach the robot new behaviors on the fly.