Robotics Colloquium

The Robotics Colloquium features talks by invited and local researchers on all aspects of robotics, including control, perception, machine learning, mechanical design, and interaction. The colloquium is held on Fridays from 1:30 to 2:30 pm. Special seminars outside this schedule are indicated with * below. Check the schedule below for the location. Refreshments are served.

If you would like to give a talk in upcoming Robotics Colloquia, please contact Maya Cakmak (mcakmakatcs). If you would like to get regular email announcements and reminders about the robotics colloquium speakers, please sign up to the Robotics@UW mailing list.

Autumn 2018 Organizers: Tapomayukh Bhattacharjee, Maya Cakmak, Dieter Fox, Siddhartha S. Srinivasa
CSE 305
Brittany Duncan
University of Nebraska-Lincoln
Drones in Public: distancing and communication with general users abstract
CSE 305
Sisir Karumanchi
NASA Jet Propulsion Laboratory
Leveraging Proprioceptive Feedback for Mobile Manipulation abstract
CSE 305
Magnus Egerstedt
Georgia Institute of Technology
Long Duration Autonomy and Constraint-Based Coordination of Multi-Robot Systems abstract
CSE 305
Chelsea Finn
Google Brain / Stanford University
Building unsupervised, versatile agents with meta-learning abstract
CSE 305
Samir Gadre
Virtual and Mixed Reality Interfaces for Human-Robot Interaction abstract
CSE 305
Jivko Sinapov
Tufts University
Symbol Grounding through Behavioral Exploration and Multisensory Perception: Solutions and Open Problems abstract
CSE 305
Animesh Garg
Nvidia AI Research Lab / Stanford AI Lab
Towards Generalizable Autonomy in Robotics abstract
CSE 305
Günter Niemeyer
Disney Research
Robots, Disney, and Touch - Can we get closer to our robots? abstract
Leslie Kaelbling
Doing for our robots what evolution did for us abstract
CSE 305
Maxim Likhachev
Search-based Planning for High-dimensional Robotic Systems Using Ensembles of Solutions to Their Low-dimensional Abstractions abstract
Spring / Summer 2018 Organizers: Dieter Fox, Maya Cakmak, Siddhartha S. Srinivasa
CSE 203
Michael Beetz
University of Bremen, IAI
Everyday Activity Science and Engineering (EASE) abstract
No talk
CSE 305
David Rollinson
Hebi Robotics
Building a Force-Controlled Actuator (Company) abstract
CSE 305
Michael A. Goodrich
Brigham Young University
Toward Human Interaction with Bio-Inspired Robot Swarms abstract
CSE 305
Dmitry Berenson
University of Michigan
What Matters for Deformable Object Manipulation abstract
CSE 305
Jeff Mahler
University of California Berkeley
The Dexterity Network: Deep Learning to Plan Robust Robot Grasps using Datasets of Synthetic Point Clouds, Analytic Grasp Metrics, and 3D Object Models abstract
CSE 305
Karen Liu
Georgia Tech
Towards a Generative Model of Natural Motion abstract
CSE 305
Jung-Su Ha
Recent Advances in Representation Learning for Dynamical Systems abstract
CSE 305
Guy Hoffman
Cornell University
Designing Robots for Fluent Collaboration and Companionship abstract
CSE 305
Devin Balkcom
Dartmouth College
Economy of Motion abstract
No talk
No talk
CSE 305
Soshi Iba
Honda Research Institute
Toward a future society with Curious Minded Machines abstract
CSE 305
(12 PM)
Matt Barnes
Carnegie Mellon University
Learning with Clusters: A cardinal machine learning sin and how to correct for it abstract
Winter 2018 Organizers: Dieter Fox, Maya Cakmak, Siddhartha S. Srinivasa, Kat Steele
No talk
CSE 691
(10 AM)
Allison Okamura
Stanford University
Let’s be Flexible: Soft Haptics and Soft Robotics abstract
HUB 250
(1 PM)
David Reinkensmeyer
UC Irvine
Robotic-assisted movement training after stroke: Why does it work and how can it be made to work better? abstract
CSE 305
Stefanos Nikolaidis
CMU/University of Washington
Mathematical Models of Adaptation in Human-Robot Collaboration abstract
No talk
CSE 305
(3 PM)
Peter Trautman
1-Dimensional Joint Probability Distributions: the Duality of Shared Control and Crowd Navigation Solutions abstract
EEB 037
Amir Rubin
SLAM and 3D-reconstruction for Real World Use Cases abstract
CSE 305
Emel Demircan
California State University
Human Movement Understanding abstract
CSE 305
Kris Hauser
Duke University
The Space of Spaces: Understanding the Structure Between Motion Planning Problems abstract
CSE 305
Marco Pavone
Stanford University
Planning and Decision Making for Autonomous Spacecraft and Space Robots abstract
CSE 305
Yu Xiang
Perceiving the 3D World from Images and Videos abstract
Autumn 2017 Organizers: Dieter Fox, Maya Cakmak, Siddhartha S. Srinivasa, Kat Steele, Sam Burden
MEB 238
3:30 PM
Michael Tolley
University of California San Diego
ME colloquium: Soft Robotics abstract
CSE 305
Geoffrey A. Hollinger
Oregon State University
Marine Robotics: Planning, Decision Making, and Learning abstract
10/13/2017 DUB retreat
No talk
CSE 305
Byron Boots
Georgia Institute of Technology
Learning Perception and Control for Agile Off-Road Autonomous Driving abstract
CSE 691
Tucker Hermans
University of Utah
Learning and Planning for Autonomous Multi-fingered Robot Manipulation abstract
CSE 305
Joydeep Biswas
University of Massachusetts Amherst
Deploying Autonomous Service Mobile Robots, And Keeping Them Autonomous abstract
11/10/2017 No talk
CSE 691
Oren Salzman
Carnegie Mellon University
The Provable Virtue of Laziness in Motion Planning abstract
11/24/2017 Thanksgiving
No talk
12/01/2017 No talk
CSE 305
Yigit Menguc
Oregon State University
Material Robotics: Soft active materials, bioinspired mechanisms, and additive manufacturing abstract
Spring 2017 Organizers: Aaron Walsman, Sam Burden, Maya Cakmak, Dieter Fox
CSE 305
Richard Vaughan
Simon Fraser University
Simple, Robust Interaction Between Humans and Teams of Robots abstract
CSE 691
Oussama Khatib
Ocean One: A Robotic Avatar for Oceanic Discovery abstract
CSE 305
Debadeepta Dey
Microsoft Research
Learning via Interaction for Machine Perception and Control abstract
CSE 303 (11 AM)
Eric Eaton
University of Pennsylvania
Efficient Lifelong Machine Learning: an Online Multi-Task Learning Perspective abstract
CSE 305
Katsu Ikeuchi
Microsoft Research
e-Intangible Heritage, from Dancing robots to Cyber Humanities abstract
CSE 691
Henrik Christensen
UC San Diego
Object Based Mapping abstract
CSE 305
Alberto Rodriguez
Reactive Robotic Manipulation abstract
CSE 305
Charlie Kemp
Georgia Tech
Mobile Manipulators for Intelligent Physical Assistance abstract
CSE 305
Karol Hausman
University of Southern California
Rethinking Perception-Action Loops abstract
CSE 305
Silvia Ferrari
Cornell University
Neuromorphic Planning and Control of Insect-scale Robots abstract
No Colloquium (ICRA)    
Winter 2017 Organizers: Sam Burden, Maya Cakmak, Dieter Fox
01/06/2017 No colloquium    
Matt Rueben
Oregon State University
Privacy Sensitive Robotics abstract
Frontiers of Science
Savery Hall 260
01/27/2017 Ross Hatton
Oregon State University
Snakes & Spiders, Robots & Geometry abstract
02/03/2017 Avik De
University of Pennsylvania
Anchored Behaviors from Template Compositions abstract
02/10/2017 No talk    
02/17/2017 Sonia Chernova
Georgia Institute of Technology
Reliable Robot Autonomy through Learning and Interaction abstract
02/24/2017 No talk    
03/03/2017 No talk    
03/10/2017 No talk    
Fall 2016 Organizers: Sam Burden, Maya Cakmak, Dieter Fox, Sawyer Fuller
CSE 305
David Remy
University of Michigan
Gaits and Natural Dynamics in Robotic Legged Locomotion abstract
IROS 2016 and DUB retreat
No talk
10/21/2016 Industry Affiliates Week
Check out talks and posters by robotics students
CSE 305
Emo Todorov
University of Washington
Goal-directed Dynamics abstract
CSE 305
Sean Andrist
Microsoft Research
Gaze Mechanisms for Situated Interaction with Embodied Agents abstract
Veterans day
No talk
CSE 305
Nick Roy
Planning to Fly (and Drive) Aggressively abstract
No talk
CSE 305
Shai Revzen
University of Michigan
Seeking simple models for multilegged locomotion: hybrid oscillators, rapid manufacturing, and slippage abstract
CSE 305
Ashis Banerjee
University of Washington
Toward Real-Time Motion Planning and Control of Optically Actuated Micro-Robots abstract
Spring 2016 Organizers: Justin Huang, Leah Perlmutter, Dieter Fox, and Maya Cakmak
CSE 305
Tomás Lozano-Pérez
Integrated task and motion planning in belief space abstract
CSE 305
Henny Admoni
CMU / Yale
Recognizing Human Intent for Assistive Robotics abstract
CSE 305
Wolfram Burgard
University of Freiburg
Deep Learning for Robot Navigation and Perception abstract
CSE 305
Travis Deyle
Cobalt Robotics
RFID-Enhanced Robots Enable New Applications in Healthcare, Asset Tracking, and Remote Sensing abstract
CSE 305
Brian Scassellati
Robots That Teach abstract
CSE 305
Sarah Elliott, Mohammad Haghighipanah, Vikash Kumar, Yangming Li, Muneaki Miyasaka, Leah Perlmutter, Luis Puig, and Yuyin Sun
University of Washington
ICRA 2016 Practice Talks abstract
CSE 305
Sidd Srinivasa
Physics-based Manipulation abstract
CSE 403
3:30 pm
Ashish Kapoor
Microsoft Research
Planetary Scale Swarm Sensing, Planning and Control for Weather Prediction abstract
Winter 2016 Organizers: Kendall Lowrey, Patrick Lancaster, and Dieter Fox
CSE 305
Daniel Butler
Model-based Reinforcement Learning with Parametrized Physical Models and Optimism-Driven Exploration abstract
CSE 305
James Youngquist
DeepMPC: Learning Deep Latent Features for Model Predictive Control abstract
CSE 305
Justin Huang
Place Recognition with ConvNet Landmarks: Viewpoint-Robust, Condition-Robust, Training-Free abstract
EEB 303
Daniel Gordon
Deep Neural Decision Forests abstract
CSE 305
Harley Montgomery
End-to-End Training of Deep Visuomotor Policies abstract
CSE 305
Aaron Walsman
Mastering the game of Go with deep neural networks and tree search abstract
CSE 305
Zachary Nehrenberg
Real-Time Trajectory Generation for Quadrocopters abstract
CSE 305
Patrick Lancaster
Towards Learning Hierarchical Skills for Multi-Phase Manipulation Tasks abstract
CSE 305
Tanner Schmidt
Pose Estimation of Kinematic Chain Instances via Object Coordinate Regression abstract
CSE 305
Kendall Lowrey
Combining the benefits of function approximation and trajectory optimization abstract
CSE 305
Vladimir Korukov
Information-Theoretic Planning with Trajectory Optimization for Dense 3D Mapping abstract
Fall 2015 Organizers: Tanner Schmidt and Dieter Fox
CSE 305
Dan Bohus
Microsoft Research
Physically Situated Dialog: Opportunities and Challenges abstract
CSE 305
Sawyer Fuller
Aerial autonomy at insect scale: What flying insects can tell us about robotics and vice versa abstract
Kane Hall 110
Russ Tedrake
From Polynomials to Humanoid Robots
Part of the MathAcrossCampus Colloquium Series
CSE 305
Frank Dellaert
Factor Graphs for Flexible Inference in Robotics and Vision abstract
CSE 305
Student Research Lightning Talks    
CSE 305
Louis-Philippe Morency
Modeling Human Communication Dynamics abstract
CSE 305
Tom Whelan
Oculus Research
Real-time dense methods for 3D perception abstract
CSE 305
No talk
CSE 305
Seth Hutchinson
Robust Distributed Control Policies for Multi-Robot Systems abstract
CSE 305
Dmitry Berenson
Toward General-Purpose Manipulation of Deformable Objects abstract
Spring 2015 Organizers: Connor Schenck, Maya Cakmak, Dieter Fox
CSE 303
Neil Lebeck and Natalie Brace
Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor abstract
CSE 303
Peter Henry
LSD-SLAM: Large-Scale Direct Monocular SLAM abstract
CSE 303
Dan Butler
Probabilistic Segmentation and Targeted Exploration of Objects in Cluttered Environments abstract
CSE 303
Marc Deisenroth
Imperial College, London
Statistical Machine Learning for Autonomous Systems and Robots abstract
CSE 303
Arunkumar Byravan and Kendall Lowrey
Reinforcement Learning in Robotics: A Survey abstract
05/22/15   ICRA practice talks. abstracts
05/29/15 No Colloquium Colloquium cancelled for ICRA 2015.  
CSE 303
Jim Youngquist
A Strictly Convex Hull for Computing Proximity Distances With Continuous Gradients abstract
Winter 2015 Organizers: Connor Schenck, Maya Cakmak, Dieter Fox
CSE 305
Mike Chung
Accelerating Imitation Learning through Crowdsourcing abstract
  Tanner Schmidt
Dense Articulated Real-Time Tracking abstract
CSE 305
Discussion: Amazon Picking Challenge
01/30/15 No colloquium    
CSE 305
Joseph Xu
Design and Control of an Anthropomorphic Robotic Hand: Learning Advantages From the Human Body & Brain abstract
  Vikash Kumar
Dimensionality Augmentation: A tool towards synthesizing complex and expressive behaviors abstract
CSE 305
Sofia Alexandrova
RoboFlow: A Flow-based Visual Programming Language for Mobile Manipulation Tasks abstract
CSE 305
Igor Mordatch
Synthesis of Interactive Control for Diverse Complex Characters with Neural Networks abstract
CSE 305
Richard Newcombe
DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time abstract
CSE 691
Aaron Steinfeld
Carnegie Mellon University
Understanding and Creating Appropriate Robot Behavior abstract
CSE 305
Luis Puig
Overview of Omnidirectional Vision abstract
Autumn 2014 Organizers: Vikash Kumar, Maya Cakmak, Dieter Fox
CSE 305
Danny Kaufman
Adobe Creative Technologies Lab, Seattle
Geometric Algorithms for Computing Frictionally Contacting Systems abstract
EE 037
Dubi Katz & Michael Abrash
Oculus VR
VR, the future, and you abstract
CSE 503
Kira Mourao
PostDoc, Institute for Language, Cognition and Computation, University of Edinburgh
What happens if I push this button? Learning planning operators from experience abstract
CSE 305
Sam Burden
PostDoc, University of California, Berkeley
Hybrid Models for Dynamic and Dexterous Robots abstract
*10/29/14 (Wed)
HUB 250
Bilge Mutlu
University of Wisconsin, Madison
Human-Centered Principles and Methods for Designing Robotic Technologies
(Joint with DUB seminar, lunch will be served at 12:00)
CSE 305
Sergey Levine
PostDoc, University of California, Berkeley
Learning to Move: Machine Learning for Robotics and Animation abstract
11/07/14 No talk    
11/14/14 Sachin Patil
PostDoc, University of California, Berkeley
Coping with Uncertainty in Robotic Navigation and Manipulation abstract
Gates Commons
HRI Mini Symposium
HRI 2015 Program committee members
11/28/14 No talk, Thanksgiving Break    
CSE 305
Marianna Madry
Royal Institute of Technology (KTH), Sweden
Representing Objects in Robotics from Visual, Depth and Tactile Sensing abstract
*12/18/14 (Thu)
CSE 305
Scott Niekum
Carnegie Mellon University
Structure Discovery in Robotics with Demonstrations and Active Learning abstract
Winter 2014, Organizer: Maya Cakmak
CSE 403
Byron Boots
Learning Better Models of Dynamical Systems abstract
CSE 403
Julie Shah
Integrating Robots into Team-Oriented Environments abstract
CSE 403
Ryan Calo
UW Law
Robotics & The New Cyberlaw abstract
CSE 403
James McLurkin
Rice University
Distributed Algorithms for Robot Recovery, Multi-Robot Triangulation, and Advanced Low-Cost Robots abstract
02/14/14   Cancelled  
CSE 403
Mihai Jalobeanu
Microsoft Research
Towards ubiquitous robots abstract
CSE 403
Cynthia Matuszek
Talking to Robots: Learning to Ground Human Language in Perception and Execution abstract
03/07/14   Cancelled  
CSE 403
Peter H. Kahn, Jr.
UW Psychology
Social and Moral Relationships with Robots abstract
Gates Commons
Gur Kimchi
Amazon Prime Air abstract
Autumn 2013, Organizer: Maya Cakmak
MGH 241
Ashutosh Saxena
Cornell University
How should a robot perceive the world?
(Joint with Machine Learning)
10/18/13   UW/MSR Machine Learning day  
CSE 403
Kat Steele
University of Washington, Mechanical Engineering
Strategies for understanding and improving movement disorders abstract
CSE 403
Maya Cakmak
University of Washington, CSE
Towards seamless human-robot hand-overs abstract
*11/07/13 (Thu)
CSE 403
Ross A. Knepper
Autonomous Assembly In a Human World abstract
CSE 403
Brian Ziebart
University of Illinois, Chicago
Beyond Conditionals: Structured Prediction for Interacting Processes
(Lunch will be served)
Gates Commons
Jenay Beer
University of South Carolina
Considerations for Designing Assistive Robotics to Promote Aging-in-Place abstract
CSE 403
Dinei Florencio
Microsoft Research
Navigation for telepresence robots and some thoughts on robot learning abstract
CSE 403
Andrzej Pronobis
University of Washington, CSE
Semantic Knowledge in Mobile Robotics: Perception, Reasoning, Communication and Actions abstract
Gates Commons
Steve Cousins
Savioke, Inc. & Willow Garage, Inc.
It's Time for Service Robots
(Joint with CSNE)
Spring 2013, Organizers: Cynthia Matuszek, Dieter Fox
04/5/13 Dieter Fox
Cynthia Matuszek
PechaKucha 20x20 for Robotics abstract
04/12/13 No talk    
04/19/13 Robotics Students & Staff PechaKucha-style Robotics Research Overviews abstract
04/26/13 Pete Wurman
Special Wednesday Colloquium, CSE 203
Coordinating Hundreds of Autonomous Vehicles in Warehouses
04/26/13 Matt Mason Learning to Use Simple Hands abstract
05/03/13 Nadia Shouraboura Canceled  
05/10/13 No talk (ICRA)
05/17/13 Tom Daniel Control and Dynamics of Animal Flight: Reverse Engineering Nature's Robots abstract
05/24/13 Katherine Kuchenbecker The Value of Tactile Sensations in Haptics and Robotics abstract
05/31/13 Pieter Abbeel Machine Learning and Optimization for Robotics abstract
06/07/13 Nick Roy Canceled  
Winter 2013, Organizer: Dieter Fox
01/18/13 Robotics and State
Estimation Lab
Overview of RSE Lab Research
01/25/13 Joshua Smith Robotics Research in the Sensor Systems Group abstract
02/01/13 no talk    
02/08/13 Gaurav Sukhatme Persistent Autonomy at Sea abstract
02/15/13 Jiri Najemnik Sequence Optimization in Engineering, Artificial Intelligence and Biology abstract
02/22/13 no talk    
03/01/13 Richard Newcombe Beyond Point Clouds: Adventures in Real-time Dense SLAM abstract
03/08/13 Tom Erez Model-Based Optimization for Intelligent Robot Control abstract
03/15/13 Byron Boots Spectral Approaches to Learning Dynamical Systems abstract
Spring 2012, Organizer: Dieter Fox
3/30/12 Andrea Thomaz Designing Learning Interactions for Robots abstract
4/6/12 Javier Movellan Towards a New Science of Learning abstract
4/13/12 Emanuel Todorov Automatic Synthesis of Complex Behaviors with Optimal Control abstract
4/20/12 Andrew Barto Autonomous Robot Acquisition of Transferable Skills abstract
4/27/12 Dieter Fox Grounding Natural Language in Robot Control Systems abstract
5/4/12 Allison Okamura Robot-Assisted Needle Steering abstract
5/11/12 Blake Hannaford Click the Scalpel -- Better Patient Outcomes by Advancing Robotics in Surgery abstract
5/18/12 no talk  
5/25/12 Malcolm MacIver Robotic Electrolocation abstract
6/1/12 Drew Bagnell Imitation Learning, Inverse Optimal Control and Purposeful Prediction abstract
09/28/2018 Brittany Duncan
University of Nebraska-Lincoln
Drones in Public: distancing and communication with general users
Abstract: This talk will focus on the role of human-robot interaction with drones in public spaces, with an emphasis on two research areas: proximal interactions in shared spaces and improved communication with both end-users and bystanders. Prior work on human interaction with aerial robots has focused on communication from the users or about the intended direction of flight, but has not considered how to distance from and communicate to novice users in unconstrained environments. In this presentation, it will be argued that the diverse users and open-ended nature of public interactions offer a rich exploration space for foundational interaction research with aerial robots. Findings will be presented from both lab-based and design studies, while context will be provided from the field-based research that is central to the NIMBUS Lab. This presentation will be of interest to researchers and practitioners in the robotics community, as well as those in the fields of human factors, artificial intelligence, and the social sciences.

Speaker’s Bio: Dr. Brittany Duncan is an Assistant Professor in Computer Science and Engineering and a co-Director of the NIMBUS Lab at the University of Nebraska-Lincoln. Her research is at the nexus of behavior-based robotics, human factors, and unmanned vehicles; specifically, she is focused on how humans can more naturally interact with robots, individually or as part of ad hoc teams, in field-based domains such as agricultural, disaster response, and engineering applications. She is a PI on an NSF Early Faculty Career Award (CAREER), a co-PI on an NSF National Robotics Initiative (NRI) grant, and was awarded an NSF Graduate Research Fellowship in 2010. Dr. Duncan received a Ph.D. from Texas A&M University and a B.S. in Computer Science from the Georgia Institute of Technology.
10/05/2018 Sisir Karumanchi
NASA Jet Propulsion Laboratory
Leveraging Proprioceptive Feedback for Mobile Manipulation

This talk highlights proprioceptive feedback as a means to do more with less sensing, less task specification and less a priori information. The motivating application is mobile manipulation in harsh environments with field-able robots.

Mainstream R&D in Robotics has focused on better representations to consolidate contextual information (deep nets, scene classifiers, world models). Such contextual understanding does lead to intelligent behaviors with better generalization. In contrast, this talk is about basic competence by way of simple behaviors (“does one thing and does it well”) and sequential composition of mixed feedback behaviors (exteroceptive interleaved with proprioceptive) that can complement each other.

This talk builds on practical lessons learned from the speaker’s past experience in creating fieldable systems where one has to work with imperfect sensors, imperfect controllers, imperfect motion planners, and imperfect hardware. A key lesson learned is the notion that simple behaviors generalize better in the field. This talk postulates that proprioceptive feedback is effective because it is i) ego-centric (it does not rely on localization) and ii) often correlated with both task performance and control inputs. Specifically, we highlight force feedback behaviors and intermediate staging behaviors (e.g., bracing with one arm and lifting with the other, or moving a neck/torso for better camera alignment).

Speaker’s Bio: Sisir is a Robotics Technologist at NASA’s Jet Propulsion Lab, Caltech. He is a member of the Manipulation and Sampling Group that focuses on adaptive sampling strategies on Rovers. He was the software lead for the JPL team at the DARPA Robotics Challenge finals. Before joining JPL, Sisir was the manipulation lead within the MIT team for the VRC and the DRC Trials phase of the DARPA Robotics Challenge program. Team RoboSimian finished fifth out of 23 teams at the DRC finals. Team MIT finished fourth at the DRC Trials and third during the VRC phase. Sisir completed his Ph.D. with the Australian Centre for Field Robotics at the University of Sydney in 2010. During 2011-2014, he was a postdoc at the Massachusetts Institute of Technology, where he worked with Dr. Karl Iagnemma on semi-autonomous control of ground vehicles and in mobile manipulation with Prof. Seth Teller and Prof. Russ Tedrake.
10/12/2018 Magnus Egerstedt
Georgia Institute of Technology
Long Duration Autonomy and Constraint-Based Coordination of Multi-Robot Systems
Abstract: By now, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives in a distributed manner, such as assembling shapes or covering areas. But the mapping from high-level tasks to these objectives is not particularly well understood. In this talk, we investigate this topic in the context of long duration autonomy, i.e., we consider teams of robots, deployed in an environment over a sustained period of time, that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding the tasks and safety constraints, as well as a detour into ecology as a way of understanding how persistent environmental monitoring, as a special instantiation of the long duration autonomy concept, can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth.

Speaker’s Bio: Dr. Magnus Egerstedt is the Steve W. Chaddick School Chair and Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the M.S. degree in Engineering Physics and the Ph.D. degree in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, the B.A. degree in Philosophy from Stockholm University, and was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. Magnus Egerstedt is a Fellow of the IEEE and has received a number of teaching and research awards, including the Ragazzini Award from the American Automatic Control Council, the Outstanding Doctoral Advisor Award and the HKN Outstanding Teacher Award from Georgia Tech, and the Alumnus of the Year Award from the Royal Institute of Technology.
10/19/2018 Chelsea Finn
Google Brain/Stanford University
Building unsupervised, versatile agents with meta-learning
Abstract: Machine learning excels primarily in settings where an engineer can first reduce the problem to a particular function, and collect a substantial amount of labeled input-output pairs for that function. In drastic contrast, humans are capable of learning a range of versatile behaviors from streams of raw sensory data with minimal external instruction. How can we develop machines that learn more like the latter? In this talk, I will discuss recent work on enabling ML systems and robots to be versatile, learning behaviors and concepts from raw pixel observations with minimal supervision. In particular, I will show how we can use meta-learning to infer the objective for a new task from only a few positive examples, how algorithms can use large unlabeled datasets to learn representations that allow downstream tasks to be learned efficiently, and how we can apply meta reinforcement learning on a real robot to enable online adaptation in the face of novel environments.
10/26/2018 Samir Gadre
Virtual and Mixed Reality Interfaces for Human-Robot Interaction
Abstract: Virtual Reality (VR) and Mixed Reality (MR) are promising interfaces to facilitate productive human-robot interactions. We present recent VR and MR interfaces that allow users to naturally visualize and control robot motion. This talk focuses on the key technologies and architectures we use to build VR/MR interfaces, and how we use these technologies to create collaborative experiences. We discuss our application of VR/MR interfaces to active areas in robotics research such as robot programming, learning from demonstration, and symbol grounding.

Speaker’s Bio: Samir Gadre is a recent graduate of Brown University, where he earned a B.S. in Computer Science. His senior thesis focused on Mixed Reality interfaces for collecting training data for learning-from-demonstration algorithms. Samir is interested in the intersections between computer vision, robotics, and human-robot interaction. He is passionate about the democratization of robotics.
11/02/2018 Jivko Sinapov
Tufts University
Symbol Grounding through Behavioral Exploration and Multisensory Perception: Solutions and Open Problems
Abstract: Solving the symbol grounding problem in a robotics setting requires the robot to connect internal representations of symbolic information to real world data from its sensory experience. The problem is especially important for language learning as a robot must have the means to represent symbols such as “red”, “soft”, “bigger than”, etc. not only in terms of other symbols but also in terms of its own perception of objects for which these symbols may be true or false. In this talk, I will present a general framework for symbol grounding in which a robot connects semantic descriptors of objects and their relationships to its multisensory experience produced when interacting with objects. The framework is inspired by research in cognitive and developmental psychology that studies how behavioral object exploration in infanthood is used by humans to learn grounded representations of objects and their affordances. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a “question” to the object, which is subsequently “answered” by the sensory stimuli produced during the execution of the behavior.
In the proposed framework, the robot interacts with objects using a diverse set of behaviors (e.g., grasping, lifting, looking) coupled with a variety of sensory modalities (e.g., vision, audio, haptics). I will present results from several large-scale experiments involving human-robot and robot-object interaction, which show that the framework enables robots to learn multisensory object models, as well as to ground the meaning of linguistic descriptors extracted through human-robot dialogue. For example, the word “heavy” is automatically grounded in the robot’s haptic sensations when lifting an object, while the word “red” is grounded in the robot’s visual input, without the need for a human expert to specify which sensory input is necessary for learning a particular word. The proposed framework is also evaluated in a service-robotics object delivery task in which the robot must efficiently identify whether a set of linguistic descriptors (e.g., “a red empty bottle”) applies to an object. Finally, I will conclude with a discussion of open problems in multisensory symbol grounding, which, if solved, could result in the large-scale deployment of such systems in real-world domains.

Speaker’s Bio: Jivko Sinapov received his Ph.D. in computer science and human-computer interaction from Iowa State University (ISU) in the Fall of 2013. While working toward his Ph.D. at ISU's Developmental Robotics Lab, he developed novel methods for behavioral object exploration and multi-modal perception. He went on to be a clinical assistant professor with the Texas Institute for Discovery, Education, and Science at UT Austin and a postdoctoral associate working with Peter Stone at the UTCS Artificial Intelligence lab. Jivko Sinapov joined Tufts University in the Fall of 2017 as the James Schmolze Assistant Professor in Computer Science. Jivko's research interests include cognitive and developmental robotics, computational perception, human-robot interaction, and reinforcement learning.
11/09/2018 Animesh Garg
Nvidia AI Research Lab / Stanford AI Lab
Towards Generalizable Autonomy in Robotics

Robotics and AI are experiencing radical growth, fueled by innovations in data-driven learning paradigms coupled with novel device design, in applications such as healthcare, manufacturing and service robotics.

Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, but they lack guarantees and often incur a massive computational expense: training these models frequently takes weeks, in addition to months of task-specific data collection on physical systems. Further, such ab initio methods often do not scale to complex sequential tasks. In contrast, biological agents can often learn faster, not only through self-supervision but also through imitation. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that can capture semantic task structures that promote sample efficiency and can generalize to new task instances across visual, dynamical, or semantic variations. This involves designing algorithms that unify reinforcement learning, control-theoretic planning, semantic scene and video understanding, and design.

In this talk, I will discuss two aspects of Generalizable Imitation: Task Imitation, and Generalization in both visual and kinematic spaces. First, I will describe how we can move away from hand-designed finite state machines through unsupervised structure learning for complex multi-step sequential tasks. Then I will discuss techniques for robust policy learning that generalize across unseen dynamics. I will revisit task structure learning to show how task-level understanding generalizes across visual semantics. Lastly, I will present a method for generalizing across task semantics from a single example with an unseen task structure, topology, or length. The algorithms and techniques introduced are applicable across domains in robotics; in this talk, I will exemplify these ideas through my work on medical and personal robotics.

Speaker’s Bio: Animesh Garg is a Senior Research Scientist at the Nvidia AI Research Lab and a Research Scientist at the Stanford AI Lab. Animesh received his Ph.D. from the University of California, Berkeley, where he was a part of the Berkeley AI Research Group, and spent 2 years as a Postdoctoral Researcher at the Stanford AI Lab. Animesh works in the area of robot skill learning, and his work sits at the interface of optimal control, machine learning, and computer vision methods for robotics applications. He has worked on data-driven learning for autonomy and human-skill augmentation in surgical robotics and personal robots. His research has been recognized with the Best Applications Paper Award at IEEE CASE, Best Video at the Hamlyn Symposium on Surgical Robotics, and a Best Paper Nomination at IEEE ICRA 2015. His work has also been featured in press outlets such as The New York Times, UC Health, UC CITRIS News, and BBC Click.
11/16/2018 Günter Niemeyer
Disney Research
Robots, Disney, and Touch - Can we get closer to our robots?
Abstract: Robotics obviously has a long history, including at Disney, but touch has been one of its more challenging aspects. From peg-in-hole tasks and force control to grasping and shaking hands, enabling our robots to interact is hard but critical: we need to endow them with a better sense (and act) of touch. I would like to review some of the systems at Disney and some of the related work, both inside and outside Disney, spanning telerobotics, where the robot interacts with a human operator, and direct interactions with a human partner. Indeed, simultaneously controlling interaction forces and motion leads to the classic stability problems and performance trade-offs. Impedance control and passivity are standard and robust tools that rely on minimal assumptions, but they can lead to conservative solutions that often feel robotic. So we ask ourselves: how should we build robots, what assumptions should we make, what controls and models are appropriate, and how do we create behaviors that make robots act and feel more natural? Can we get robots ready for up-close human interactions?

Speaker’s Bio: Günter Niemeyer is a senior research scientist at Disney Research, Los Angeles. His research examines physical human-robot interactions and interaction dynamics, force sensitivity and feedback, teleoperation with and without communication delays, and haptic interfaces. He received MS and PhD degrees from the Massachusetts Institute of Technology (MIT) in the areas of adaptive robot control and bilateral teleoperation, introducing the concept of wave variables. He also held a postdoctoral research position at MIT developing surgical robotics. In 1997, he joined Intuitive Surgical Inc., where he helped create the da Vinci Minimally Invasive Surgical System. He was a member of the Stanford faculty from 2001 to 2009, directing the Telerobotics Lab. From 2009 to 2012 he worked with the PR2 personal robot at Willow Garage. He joined Disney Research in 2012.
11/30/2018 Leslie Kaelbling
Doing for our robots what evolution did for us

We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Joint work with: Tomas Lozano-Perez, Zi Wang, Caelan Garrett and a fearless group of summer robot students

Speaker’s Bio: Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founder of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.
12/07/2018 Maxim Likhachev
Search-based Planning for High-dimensional Robotic Systems Using Ensembles of Solutions to Their Low-dimensional Abstractions
Abstract: Search-based Planning refers to planning by constructing a graph from a systematic discretization of the state- and action-space of a robot and then employing a heuristic search to find an optimal path from the start to the goal vertex in this graph. This paradigm works well for low-dimensional robotic systems such as mobile robots and provides rigorous guarantees on solution quality. However, when it comes to planning for higher-dimensional robotic systems such as mobile manipulators, humanoids, and vehicles driving at high speed, Search-based Planning has typically been thought of as infeasible. In this talk, I will describe some of the research that my group has done to change this thinking. In particular, I will focus on our recent findings on how Search-based Planning can be made feasible for high-dimensional systems, based on the idea that we can construct multiple lower-dimensional abstractions of such systems whose solutions can effectively guide the overall planning process. To this end, I will describe Multi-Heuristic A*, an algorithm recently developed by my group, some of its extensions, and its applications to a variety of high-dimensional planning and complex decision-making problems in Robotics.

Speaker’s Bio: Maxim Likhachev is an Associate Professor at Carnegie Mellon University, where he directs the Search-based Planning Laboratory (SBPL). His group researches heuristic search, decision-making, and planning algorithms, all with applications to the control of robotic systems including unmanned ground and aerial vehicles, mobile manipulation platforms, humanoids, and multi-robot systems. Maxim obtained his Ph.D. in Computer Science from Carnegie Mellon University with a thesis titled “Search-based Planning for Large Dynamic Environments.” Maxim has over 120 publications in top journals and conferences on AI and Robotics and numerous awards. His work on the Anytime D* algorithm, an anytime planning algorithm for dynamic environments, was awarded the title of Influential 10-Year Paper at the International Conference on Automated Planning and Scheduling (ICAPS) 2017, the top venue for research on planning and scheduling. Other honors include selection for the 2010 DARPA Computer Science Study Panel, which recognizes promising faculty in Computer Science, a Best Paper award at RSS, membership on the team that won the 2007 DARPA Urban Challenge and on the team that won the Gold Edison Award in 2013, and a number of other awards.

Details of previous Robotics Colloquia can be found here.