Robotics

 

Movement Control Lab

Model-based control through numerical optimization

We are developing new methods for control optimization aimed at real-time robotic control. This is a collaboration between Zoran Popovic's group, which developed high-fidelity controllers for physics-based animation, and Emo Todorov's group, which developed efficient algorithms for optimal control as well as a new physics engine tailored to control optimization. Our approach combines offline learning of semi-global value functions and control policies with online trajectory optimization, or model-predictive control (MPC).
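As a rough illustration of how the pieces fit together, the sketch below implements receding-horizon MPC with an offline-learned value function as the terminal cost. The dynamics model f, running cost c, and value function V are hypothetical placeholders, and random shooting stands in for the lab's more sophisticated trajectory optimizers.

```python
import numpy as np

def mpc_action(x0, f, c, V, horizon=10, n_samples=256, action_dim=2, rng=None):
    """Random-shooting MPC: sample action sequences, roll each out through
    the model, and score it by running cost plus the learned terminal value."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_first_action = np.inf, None
    for _ in range(n_samples):
        actions = rng.normal(size=(horizon, action_dim))
        x, cost = x0, 0.0
        for u in actions:
            cost += c(x, u)       # accumulate running cost
            x = f(x, u)           # step the (learned or analytic) model
        cost += V(x)              # offline-learned value function as terminal cost
        if cost < best_cost:
            best_cost, best_first_action = cost, actions[0]
    return best_first_action      # execute this, then re-plan at the next step
```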

Design of biomimetic robot hands

We are designing and building state-of-the-art robot hands that aim to replicate the functionality of the human hand. These include the Anatomically Correct Testbed (ACT) hand, the most realistic robot hand available, which closely mimics the joint kinematics and tendon networks of the human hand. Emo Todorov's group recently joined this effort, building the Modular Robot (ModBot) hand, whose fingers have characteristics similar to haptic robots in terms of speed and compliance while being designed for manipulation.

 

Human-Centered Robotics Lab

Robot Programming by Demonstration

Robot programming by demonstration has typically been a data-intensive process, requiring many more and higher-quality demonstrations than a typical user is willing to give. We believe instead that skills and tasks can be transferred to robots more effectively through alternative interactions seeded by a single demonstration. Such interactions include interactive visualizations of skills and tasks that let the user edit them directly, and natural language interactions that augment demonstrations with meta-information.

Remote Teleoperation for Mobile Manipulation

Full autonomy of service robots might still be years away, but robot telepresence is already accessible to the masses. The next step for these technologies is to support physical manipulation tasks in remote environments; however, many challenges remain in making such functionality intuitive and safe for everyday users.

Robot Tool Use

One of the most common and most dreaded household chores, cleaning, typically consists of applying a tool to a surface (e.g. wiping, vacuuming, dusting, sweeping, mopping, scrubbing). In this project we aim to develop a general framework for representing and learning the use of different cleaning tools by robots, and to build interfaces that let users command cleaning tasks (i.e. indicate a tool and a target surface) as well as teach the use of novel tools by demonstration.
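As a loose sketch of what such a command might look like, the hypothetical snippet below pairs a tool with a target surface and plans simple parallel coverage strokes. All names and the rectangular-surface assumption are illustrative, not the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class CleaningCommand:
    tool: str             # e.g. "sponge", "vacuum", "duster" (hypothetical names)
    surface_id: int       # a segmented surface in the robot's scene model
    overlap: float = 0.5  # fraction of tool width shared between strokes

def plan_strokes(cmd, surface_bounds, tool_width):
    """Cover an axis-aligned rectangular surface with parallel strokes;
    a stand-in for a real coverage planner over arbitrary surfaces."""
    (x0, y0), (x1, y1) = surface_bounds
    step = tool_width * (1.0 - cmd.overlap)
    y, strokes = y0, []
    while y <= y1:
        strokes.append(((x0, y), (x1, y)))   # one straight wipe across the surface
        y += step
    return strokes
```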
 

Robotics and State Estimation Lab

SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks

In this work, we explore the use of deep learning to capture a notion of physical intuition. We introduce SE3-Nets, deep networks designed to model rigid body motion from raw point cloud data. Given only pairs of 3D point clouds, a continuous action vector, and pointwise data associations, SE3-Nets learn to segment the affected object parts and predict their motion under the applied force. We show on three simulated scenarios that the structure underlying SE3-Nets enables far more consistent predictions of object motion than traditional flow-based networks.
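The core of the prediction step can be sketched compactly: each of K predicted part motions is a rigid transform, and every point moves by a mask-weighted blend of those transforms. The NumPy rendering below uses illustrative shapes; the networks that predict the masks and transforms are omitted.

```python
import numpy as np

def apply_se3_outputs(points, masks, rotations, translations):
    """points: (N, 3) input cloud; masks: (K, N) soft part assignments
    summing to 1 over K; rotations: (K, 3, 3); translations: (K, 3).
    Returns the predicted cloud: each point moves by a mask-weighted
    blend of the K rigid-body motions."""
    # Transform every point by each part's SE(3) motion: result is (K, N, 3)
    moved = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # Blend per point using the soft segmentation masks
    return np.einsum('kn,kni->ni', masks, moved)
```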

Building Hierarchies of Concepts via Crowdsourcing

In this project, we propose a novel crowdsourcing system for inferring hierarchies of concepts. We develop a principled, crowd-powered algorithm that is robust to noise, efficient in picking questions, cost-effective, and builds high-quality hierarchies.

Graph-Based Inverse Optimal Control for Robot Manipulation

This project explores an approach to teaching manipulation tasks to robots via human demonstrations. A human demonstrates the desired task (say, carrying a cup of water without spilling) by physically moving the robot. Given many such kinesthetic demonstrations, the robot applies a learning algorithm to infer a model of the underlying task. In a new scene, the robot uses this task model to plan a path that satisfies the task requirements.
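One simple family of graph-based inverse optimal control works as follows: represent motions as paths in a graph whose edge costs are a weighted sum of features, and adjust the weights until the demonstrated path scores as well as the planner's best path. The structured-perceptron-style update below illustrates that idea under those assumptions; it is not the project's exact algorithm, and it assumes the networkx library plus a user-supplied feature function.

```python
import numpy as np
import networkx as nx

def ioc_update(G, feat, w, demo_path, lr=0.1):
    """G: directed graph; feat(u, v): feature vector for edge (u, v);
    w: current cost weights; demo_path: node sequence from a demonstration.
    Re-plan under the current weights, then nudge the weights so the
    demonstrated edges become relatively cheaper."""
    for u, v in G.edges:
        G[u][v]['weight'] = max(1e-6, float(w @ feat(u, v)))  # keep costs positive
    plan = nx.shortest_path(G, demo_path[0], demo_path[-1], weight='weight')
    path_feats = lambda p: sum(feat(u, v) for u, v in zip(p, p[1:]))
    # Perceptron-style step: demo features get cheaper, the planner's pricier
    return w - lr * (path_feats(demo_path) - path_feats(plan))
```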

DART: Dense Articulated Real-Time Tracking

This project aims to provide a unified framework for tracking any articulated model, given its geometric and kinematic structure. Our approach uses dense input data (computing an error term on every pixel), which we process in real time by leveraging GPGPU programming and a very efficient representation of model geometry based on signed distance functions.
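The per-pixel error term can be sketched simply, assuming a precomputed signed distance grid: transform each observed depth point into the model frame and read off the signed distance, which is zero exactly on the model surface. The nearest-voxel lookup below is a simplification of the interpolated, GPU-parallel version a real tracker would use.

```python
import numpy as np

def sdf_residuals(points_cam, T_model_cam, sdf, origin, voxel_size):
    """points_cam: (N, 3) depth points in the camera frame;
    T_model_cam: 4x4 camera-to-model-frame transform;
    sdf: (X, Y, Z) voxel grid of signed distances to the model surface;
    origin: model-frame position of voxel (0, 0, 0)."""
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    pts_model = (T_model_cam @ pts_h.T).T[:, :3]          # into the model frame
    idx = np.round((pts_model - origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(sdf.shape) - 1)        # nearest-voxel lookup
    return sdf[idx[:, 0], idx[:, 1], idx[:, 2]]           # ~0 when on the surface
```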

Language Grounding in Robotics

A number of long-term goals in robotics, such as using robots in household settings, require robots to interact with humans. In this project, we explore how robots can learn to correlate natural language with the physical world being sensed and manipulated, an area of research known as grounded language acquisition.

Hierarchical Matching Pursuit for RGB-D Recognition

Hierarchical Matching Pursuit uses sparse coding to learn codebooks at each layer in an unsupervised way and then builds hierarchical feature representations from the learned codebooks. It achieves state-of-the-art results on many types of recognition tasks.
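For concreteness, the encoding step can be sketched with standard orthogonal matching pursuit against a given codebook; the codebook learning and spatial pooling that make up the full HMP pipeline are omitted here.

```python
import numpy as np

def omp(x, D, sparsity):
    """Orthogonal matching pursuit. x: (d,) input patch; D: (d, K)
    codebook with unit-norm columns. Greedily picks the atom most
    correlated with the residual, then refits all chosen coefficients."""
    residual, support = x.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs          # sparse code used as the layer's feature
    return code
```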

RGB-D Object Dataset

The RGB-D Object Dataset is a large dataset of 300 common household objects. The objects are organized into 51 categories arranged using WordNet hypernym-hyponym relationships (similar to ImageNet). The dataset was recorded using a Kinect-style 3D camera that captures synchronized and aligned 640x480 RGB and depth images at 30 Hz.
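As a practical note, depth frames like these convert to 3D point clouds with the standard pinhole camera model. The sketch below uses typical Kinect-like intrinsics as placeholder values, not the dataset's actual calibration.

```python
import numpy as np

def depth_to_cloud(depth_m, fx=570.3, fy=570.3, cx=320.0, cy=240.0):
    """depth_m: (480, 640) depth image in meters; returns (N, 3) points.
    Intrinsics fx, fy, cx, cy are placeholder Kinect-like values."""
    v, u = np.indices(depth_m.shape)      # pixel row/column grids
    z = depth_m
    x = (u - cx) * z / fx                 # back-project with the pinhole model
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]             # drop pixels with no depth reading
```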

RGB-D Mapping: Using Depth Cameras for Dense 3D Mapping

Simultaneous localization and mapping (SLAM) has been a major focus of mobile robotics research for many years. We combine state-of-the-art visual odometry and pose-graph estimation techniques with a camera that captures both color and depth to build accurate, dense maps of indoor environments.
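Two ingredients of this pipeline are easy to sketch: chaining frame-to-frame visual odometry estimates into global poses, and the residual that a loop-closure measurement contributes to pose-graph optimization. The snippet below shows both in minimal form; the nonlinear solver that actually minimizes these residuals over all poses is omitted.

```python
import numpy as np

def chain_odometry(T_rel_list):
    """T_rel_list: 4x4 frame-to-frame transforms from visual odometry.
    Returns global camera poses obtained by composing them in order."""
    poses = [np.eye(4)]
    for T in T_rel_list:
        poses.append(poses[-1] @ T)
    return poses

def loop_closure_residual(T_i, T_j, T_ij_measured):
    """Mismatch between the relative pose implied by current estimates
    T_i, T_j and a loop-closure measurement; the identity when consistent."""
    return np.linalg.inv(T_ij_measured) @ np.linalg.inv(T_i) @ T_j
```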

RGB-D Object Recognition and Detection

In this project we address joint object category, instance, and pose recognition in the context of rapid advances in RGB-D cameras, which combine visual and 3D shape information. The focus is on detection and classification of objects in indoor scenes, such as domestic environments.

Attribute Based Object Identification

We introduce an approach for identifying objects based on natural language containing appearance and name attributes.

Data-Efficient Robot Reinforcement Learning

This project aims at developing and applying novel reinforcement learning methods to low-cost, off-the-shelf robots so that they can learn tasks in only a few trials.
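One hedged sketch of a data-efficient recipe, in the spirit of model-based methods such as PILCO: fit a probabilistic dynamics model (here a Gaussian process on state changes) to the few transitions collected so far, and choose actions by planning through the model rather than by trial and error on the hardware. The reset/step environment interface, the one-dimensional action, and the cost function are illustrative assumptions, not this project's actual setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def plan_action(model, x, cost, n_samples=64, rng=None):
    """One-step random shooting through the learned model: sample actions,
    predict the resulting state change, and pick the cheapest outcome."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(-1.0, 1.0, size=(n_samples, 1))
    dx = model.predict(np.hstack([np.tile(x, (n_samples, 1)), u]))
    return u[np.argmin([cost(x + d) for d in dx])]

def run_trials(env, cost, n_trials=5, horizon=50):
    """Alternate between acting and refitting a GP dynamics model on the
    (state, action) -> state-change data gathered so far."""
    X, Y, model = [], [], GaussianProcessRegressor()
    for trial in range(n_trials):
        x = env.reset()
        for _ in range(horizon):
            u = (plan_action(model, x, cost) if trial > 0
                 else np.random.uniform(-1.0, 1.0, size=(1,)))  # explore on trial 1
            x_next = env.step(u)
            X.append(np.hstack([x, u])); Y.append(x_next - x)
            x = x_next
        model.fit(np.array(X), np.array(Y))   # refit after every trial
    return model
```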

Robotic In-Hand 3D Object Modeling

We address the problem of active object investigation using robotic manipulators and Kinect-style RGB-D depth sensors. To do so, we jointly tackle the issues of sensor to robot calibration, manipulator tracking, and 3D object model construction. We additionally consider the problem of motion and grasp planning to maximize coverage of the object.

Object Modeling During Scene Reconstruction

We segment objects during scene reconstruction, rather than afterwards as is usual. The emphasis is on merging information gathered at different points in time to improve existing object and scene models.

Gaussian Processes for Bayesian State Estimation

The goal of this project is to integrate Gaussian process prediction and observation models into Bayes filters. These GP-BayesFilters are more accurate than standard Bayes filters using parametric models. In addition, GP models naturally supply the process and observation noise necessary for Bayesian filters.
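A minimal version of this idea for a one-dimensional state is sketched below: the GP posterior mean acts as the process model, and the GP posterior variance supplies the state-dependent process noise inside a Kalman-style predict/update. The scalar state, identity observation model, and omission of controls are simplifications for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_kf_step(gp, mu, sigma2, z, obs_var):
    """gp: a GaussianProcessRegressor fit on state -> next state from
    training runs (controls omitted for brevity); (mu, sigma2): current
    Gaussian belief; z: a direct, noisy observation of the state."""
    # Predict: the GP mean propagates the state; its std dev supplies
    # the process noise, which the model provides for free.
    m, s = gp.predict(np.array([[mu]]), return_std=True)
    mu_pred, var_pred = float(m[0]), sigma2 + float(s[0]) ** 2
    # Update with an identity observation model (Kalman gain K)
    K = var_pred / (var_pred + obs_var)
    return mu_pred + K * (z - mu_pred), (1.0 - K) * var_pred
```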

Object Segmentation from Motion

We can't be sure where objects are unless we see them move relative to each other. In this project we investigate using motion as a cue to segment objects. We can make use of passive sensing or active vision, and both long-term and short-term motion, to aid segmentation.
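The simplest version of the motion cue can be sketched directly: compute per-point scene flow between two registered clouds and cluster points that move together. The snippet below is an illustration under strong assumptions (known correspondences, a fixed number of objects, an arbitrary motion weighting); real pipelines add spatial regularization and temporal integration.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_by_motion(points_t0, points_t1, n_objects=2, motion_weight=5.0):
    """points_t0, points_t1: (N, 3) clouds with known point correspondence
    (e.g. from tracked features). Returns one object label per point."""
    flow = points_t1 - points_t0                          # per-point motion vectors
    feats = np.hstack([points_t0, motion_weight * flow])  # emphasize shared motion
    return KMeans(n_clusters=n_objects, n_init=10).fit_predict(feats)
```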

 

Sensor Systems Lab

Hobbes, Our Favorite PR2 Robot

Hobbes is our PR2 Robot, from Willow Garage. It is a very capable mobile manipulation platform that allows us to test our newly developed technologies in a complete, fully functional robotic system, without having to build everything from scratch.

Pretouch Sensing with the Seashell Effect

Seashell effect pretouch sensing is a new form of sensing that helps robots determine the shape and material of objects before grasping them. "Pretouch" refers to sensing modalities intermediate in range between tactile sensing and vision. The new technique is effective on materials for which prior pretouch techniques fail. Seashell effect pretouch is inspired by the phenomenon of "hearing the sea" when a seashell is held to the ear, and relies on the observation that the "sound of the sea" changes as the distance from the seashell to the head varies.

Robotic Pile Sorting and Manipulation

We are investigating strategies for robot interaction with piles of objects and materials in cluttered scenes. In particular, interaction with unstructured sets of objects allows a robot to explore and manipulate novel items in order to perform useful tasks, such as counting, arranging, or sorting, even without a prior model of the objects.

 

Neurobotics Lab

Neurobotics Lab

At the Neurobotics Lab, we focus on creating bio- and neural-inspired robotic algorithms and systems and use them to understand human movement, advance robotic control, and rehabilitate or assist human movement capabilities. We are motivated to provide solutions for people who have difficulty moving around or manipulating objects, while taking pleasure in the scientific and engineering contributions we make.

Design of biomimetic robot hands

We are designing and building state-of-the-art robot hands that aim to replicate the functionality of the human hand. These include the Anatomically Correct Testbed (ACT) hand, the most realistic robot hand available, which closely mimics the joint kinematics and tendon networks of the human hand. Emo Todorov's group recently joined this effort, building the Modular Robot (ModBot) hand, whose fingers have characteristics similar to haptic robots in terms of speed and compliance while being designed for manipulation.

 

Laboratory for Neural Systems

Neural Systems Lab

The lab's research focuses on understanding the brain using computational models and simulations and applying this knowledge to the task of building intelligent robotic systems and brain-computer interfaces (BCIs). The lab utilizes data and techniques from a variety of fields, ranging from neuroscience and psychology to machine learning and statistics.

Imitation Learning in Humanoid Robots

We are developing new probabilistic methods that allow a humanoid robot to follow gaze, infer intent, and learn new actions and skills from a human teacher in much the same way that human infants and adults learn from observing others. Such an approach opens the door to a potentially powerful way to program general-purpose humanoid robots -- through human demonstration and interaction -- obviating the need for complex physics-based models and explicit programming of behaviors.

Brain-Computer Interfacing

In 2006, our group became one of the first to demonstrate control of a humanoid robot using a noninvasive brain-computer interface (BCI). The system consists of a robot, an electrode cap for sensing brainwaves, and a graphical user interface for controlling the robot remotely. Our original research demonstrated that the BCI can be used to command a HOAP-2 humanoid robot to select and fetch desired objects from remote locations. More recently, we have proposed a framework for adaptive hierarchical brain-computer interfacing that allows the user to teach the robot new behaviors on the fly.
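As a generic illustration of the classification step found in many noninvasive BCIs (and not this lab's actual pipeline), the sketch below band-pass filters EEG epochs and classifies them with linear discriminant analysis to select one of several commands; the sampling rate, filter band, and feature choice are placeholder assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_epochs(train_epochs, train_labels, test_epochs, fs=256.0):
    """epochs: (n_trials, n_channels, n_samples) raw EEG segments,
    one per candidate command; fs: sampling rate in Hz (placeholder)."""
    b, a = butter(4, [1.0, 12.0], btype='band', fs=fs)  # keep the slow ERP band
    def features(epochs):
        filt = filtfilt(b, a, epochs, axis=-1)          # band-pass each channel
        return filt.reshape(len(epochs), -1)            # flatten channels x time
    clf = LinearDiscriminantAnalysis().fit(features(train_epochs), train_labels)
    return clf.predict(features(test_epochs))           # one command per epoch
```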