Robotics Colloquia

Talks by invited and local researchers on all aspects of control theory, stochastic estimation, machine learning and mechanical design as applied to dynamical systems in robotics. Researchers working at the intersection of these areas with biology and neuroscience will also be hosted. The colloquium is held Fridays at 2:30 pm in CSE 305.

Spring 2013
04/05/13 Dieter Fox
Cynthia Matuszek
PechaKucha 20x20 for Robotics abstract
04/12/13 No talk
04/19/13 Robotics Students & Staff PechaKucha-style Robotics Research Overviews abstract
04/24/13 Pete Wurman
Special Wednesday Colloquium, CSE 203
Coordinating Hundreds of Autonomous Vehicles in Warehouses
abstract
04/26/13 Matt Mason Learning to Use Simple Hands abstract
05/03/13 Nadia Shouraboura Canceled
05/10/13 No talk (ICRA)
05/17/13 Tom Daniel Control and Dynamics of Animal Flight: Reverse Engineering Nature's Robots abstract
05/24/13 Katherine Kuchenbecker The Value of Tactile Sensations in Haptics and Robotics abstract
05/31/13 Pieter Abbeel Machine Learning and Optimization for Robotics abstract
06/07/13 Nick Roy
Winter 2013
01/18/13 Robotics and State Estimation Lab Overview of RSE Lab Research
01/25/13 Joshua Smith Robotics Research in the Sensor Systems Group abstract
02/01/13 No talk
02/08/13 Gaurav Sukhatme Persistent Autonomy at Sea abstract
02/15/13 Jiri Najemnik Sequence Optimization in Engineering, Artificial Intelligence and Biology abstract
02/22/13 No talk
03/01/13 Richard Newcombe Beyond Point Clouds: Adventures in Real-time Dense SLAM abstract
03/08/13 Tom Erez Model-Based Optimization for Intelligent Robot Control abstract
03/15/13 Byron Boots Spectral Approaches to Learning Dynamical Systems abstract
Spring 2012
3/30/12 Andrea Thomaz Designing Learning Interactions for Robots abstract
4/6/12 Javier Movellan Towards a New Science of Learning abstract
4/13/12 Emanuel Todorov Automatic Synthesis of Complex Behaviors with Optimal Control abstract
4/20/12 Andrew Barto Autonomous Robot Acquisition of Transferable Skills abstract
4/27/12 Dieter Fox Grounding Natural Language in Robot Control Systems abstract
5/4/12 Allison Okamura Robot-Assisted Needle Steering abstract
5/11/12 Blake Hannaford Click the Scalpel -- Better Patient Outcomes by Advancing Robotics in Surgery abstract
5/18/12 No talk
5/25/12 Malcolm MacIver Robotic Electrolocation abstract
6/1/12 Drew Bagnell Imitation Learning, Inverse Optimal Control and Purposeful Prediction abstract
Detailed schedule
04/05/13 Dieter Fox
Cynthia Matuszek
PechaKucha 20x20 for Robotics

PechaKucha 20x20 is a new approach to giving presentations. From the FAQ: "PechaKucha 20x20 is a simple presentation format where you show 20 images, each for 20 seconds. The images advance automatically and you talk along to the images." While the presentation format was originally developed for architecture presentations, it has been successfully applied to fields as diverse as art, cooking, design, and journalism. This talk will give an overview of the format and some examples, in the interest of stimulating discussion about the role of such a format in technology.

04/19/13 Robotics Students & Staff PechaKucha-style Robotics Research Overviews

In this session, eight interested (and interesting!) robotics researchers will use one of the popular "Flash presentation" styles to spend a few minutes covering information about their own work, related work that they think is worth knowing about, or any other robotics-related topic they wish.

04/24/13 Pete Wurman Coordinating Hundreds of Autonomous Vehicles in Warehouses
Special Wednesday Colloquium, CSE 203

Kiva's mobile fulfillment system blends techniques from AI, Control Systems, Machine Learning, Operations Research and other engineering disciplines into the world's largest mobile robotic platform. Kiva uses hundreds of mobile robots to carry inventory shelves around distribution centers for customers like Staples, Walgreens, and The Gap. Kiva currently has equipment in over 30 warehouses in three countries. This talk will describe the application domain, the business solution, and some of the practical engineering problems that Kiva has solved along the way.

04/26/13 Matt Mason Learning to Use Simple Hands

We often assume that general-purpose robot hands should be complex, perhaps even as complex as human hands. Yet humans can do a lot even when using tongs. This talk describes ongoing work with simple hands - hands inspired by very simple tools like tongs. We explore a robot's ability to grasp, recognize, localize, place and even manipulate objects in the hand, with a very simple hand. The perception and planning algorithms are based on learned models, which are in turn based on thousands of experiments with the objects in question.

Dr. Matthew T. Mason earned the BS, MS, and PhD degrees in Computer Science and Artificial Intelligence at MIT, finishing his PhD in 1982. Since that time he has been on the faculty at Carnegie Mellon University, where he is presently Professor of Robotics and Computer Science, and Director of the Robotics Institute. His prior work includes force control, automated assembly planning, mechanics of pushing and grasping, automated parts orienting and feeding, and mobile robotics. He is co-author of "Robot Hands and the Mechanics of Manipulation" (MIT Press 1985), co-editor of "Robot Motion: Planning and Control" (MIT Press 1982), and author of "Mechanics of Robotic Manipulation" (MIT Press 2001). He is a Fellow of the AAAI, and a Fellow of the IEEE. He is a winner of the System Development Foundation Prize and the IEEE Robotics and Automation Society's Pioneer Award.

05/17/13 Tom Daniel Control and Dynamics of Animal Flight: Reverse Engineering Nature's Robots

All living creatures process information from multiple sensory modalities and, in turn, control movement through multiple actuators. They do so to navigate through spatially and temporally complex environments with amazing agility. Among the most successful of nature's robots are insects, occupying every major habitat. This talk will review sensorimotor control of movement in flying insects, with a focus on where the functional roles of sensing and actuation become blurred.

Dr. Tom Daniel holds the Joan and Richard Komen Endowed Chair and has appointments in the Department of Biology, the Department of Computer Science & Engineering, and the Program on Neurobiology and Behavior at the University of Washington. He is currently the Interim Director of the National Science Foundation Center for Sensorimotor Neural Engineering (CSNE). He has served as a UW faculty member since his initial appointment in 1984, and was the founding chair of the Department of Biology at the University of Washington (2000-2008). Prior to the UW, he was the Myron A. Bantrell Postdoctoral Fellow in Engineering Sciences at the California Institute of Technology. He received his PhD degree from Duke University. He was named a MacArthur Fellow in 1996 and has received the University of Washington Distinguished Teaching Award and the University of Washington Distinguished Graduate Mentor Award. He is on the editorial boards of Science Magazine and the Proceedings of the Royal Society (Biology Letters). He is also on the Board of Directors and the Scientific Advisory Board of the Allen Institute for Brain Science, and the Scientific Advisory Board for the NSF Mathematical Biosciences Institute. His research programs focus on biomechanics, neurobiology, and sensory systems, addressing questions about the physics, engineering and neural control of movement in biological systems.

05/24/13 Katherine Kuchenbecker The Value of Tactile Sensations in Haptics and Robotics

Although physical interaction with the world is at the core of human experience, few computer and machine interfaces provide the operator with high-fidelity touch feedback, limiting their usability. Similarly, autonomous robots rarely take advantage of touch perception and thus struggle to match the manipulation capabilities of humans. My long-term research goal is to leverage scientific knowledge about the sense of touch to engineer haptic interfaces and robotic systems that increase the range and quality of tasks humans can accomplish. This talk will describe my group's three main research thrusts: haptic texture rendering, touch feedback for robotic surgery, and touch perception for autonomous robots. First, most haptic interfaces struggle to mimic the feel of a tool dragging along a surface due to both software and hardware limitations. We pioneered a data-driven method of capturing and recreating the high-bandwidth vibrations that characterize tool-mediated interactions with real textured surfaces. Second, although commercial robotic surgery systems are approved for use on human patients, they provide the surgeon with little to no haptic feedback. We have invented, refined, and studied a practical method for giving the surgeon realistic tactile feedback of instrument vibrations during robotic surgery. Third, household robots will need to know how to grasp and manipulate a wide variety of objects. We have invented a set of methods that enable a robot equipped with commercial tactile sensors to delicately and firmly grasp real-world objects and perceive their haptic properties. Our work in all three of these areas has been principally enabled by a single insight: although less studied than kinesthetic cues, tactile sensations convey much of the richness of physical interactions.

Dr. Katherine J. Kuchenbecker is the Skirkanich Assistant Professor of Innovation in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. Her research centers on the design and control of haptic interfaces for applications such as robot-assisted surgery, medical simulation, stroke rehabilitation, and personal computing. She directs the Penn Haptics Group, which is part of the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory. She has won several awards for her research, including an NSF CAREER Award in 2009, Popular Science Brilliant 10 in 2010, and the IEEE Robotics and Automation Society Academic Early Career Award in 2012. Prior to becoming a professor, she completed a postdoctoral fellowship at the Johns Hopkins University, and she earned her Ph.D. in Mechanical Engineering at Stanford University in 2006.

05/31/13 Pieter Abbeel Machine Learning and Optimization for Robotics

Robots are typically far less capable in autonomous mode than in teleoperated mode. The few exceptions tend to stem from long days (and more often weeks, or even years) of expert engineering for a specific robot and its operating environment. Current control methodology is quite slow and labor intensive. I believe advances in machine learning and optimization have the potential to revolutionize robotics. First I will present new machine learning techniques we have developed that are tailored to robotics. I will describe in depth "Apprenticeship learning," a new approach to high-performance robot control based on learning for control from ensembles of expert human demonstrations. Our initial work in apprenticeship learning has enabled the most advanced helicopter aerobatics to date, including maneuvers such as chaos, tic-tocs, and auto-rotation landings which only exceptional expert human pilots can fly. Our most recent work in apprenticeship learning is inspired by challenges in surgical robotics. We are studying how a robot could learn to perform challenging robotic manipulation tasks, such as knot-tying. Then I will describe our recent advances in optimization-based planning — both in state space and in belief space. Finally, I will briefly highlight our recent work on enabling robots to learn on their own through non-parametric model-based reinforcement learning.

Dr. Pieter Abbeel received a BS/MS in Electrical Engineering from KU Leuven (Belgium) and received his Ph.D. degree in Computer Science from Stanford University in 2008. He joined the faculty at UC Berkeley in Fall 2008, with an appointment in the Department of Electrical Engineering and Computer Sciences. He has won various awards, including best paper awards at ICML and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the Okawa Foundation award, the TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award. He has developed apprenticeship learning algorithms which have enabled advanced helicopter aerobatics, including maneuvers such as tic-tocs, chaos and auto-rotation, which only exceptional human pilots can perform. His group has also enabled the first end-to-end completion of reliably picking up a crumpled laundry article and folding it. His work has been featured in many popular press outlets, including BBC, New York Times, MIT Technology Review, Discovery Channel, SmartPlanet and Wired. His current research focuses on robotics and machine learning with a particular emphasis on challenges in personal robotics, surgical robotics and connectomics.

06/07/13 Nick Roy

Past quarters
01/25/13 Joshua Smith Robotics Research in the Sensor Systems Group

After providing a brief overview of the Sensor Systems group, I will present our recent work in robotics. I will introduce pretouch sensing, our term for in-hand sensing that is shorter range than vision but longer range than tactile sensing. I will review Electric Field Pretouch sensing, introduce Seashell Effect Pretouch, and discuss strategies for using pretouch sensing in the context of robotic manipulation. As an active sensing modality, pretouch requires a choice of "next view." Since the robot hand is used for both sensing and actuation, pretouch-enabled grasping also requires us to consider an exploration/execution tradeoff. Finally, I will outline several new robotics projects that are underway.

02/08/13 Gaurav Sukhatme Persistent Autonomy at Sea

Underwater robotics is undergoing a transformation. Recent advances in AI and machine learning are enabling a new generation of underwater robots to make intelligent decisions (where to sample? how to navigate?) by reasoning about their environment (what is the shipping and water forecast?). At USC, we are engaged in a long-term effort to develop persistent, autonomous underwater explorer robots. In this talk, I will give an overview of some of our recent results focusing on two problems in adaptive sampling: underwater change detection and biological sampling. I will also present our recent work on hazard avoidance, allowing robots to operate in regions where there is ship traffic.

Dr. Gaurav S. Sukhatme is a Professor of Computer Science (joint appointment in Electrical Engineering) at the University of Southern California (USC). He is currently serving as the Chairman of the Computer Science department. His recent research is in networked robots.

Dr. Sukhatme has served as PI on numerous federal grants. He is a Fellow of the IEEE and a recipient of the NSF CAREER award and the Okawa Foundation research award. He is one of the founders of the RSS conference and has served as program chair of all three leading robotics conferences (ICRA, IROS and RSS). He is the Editor-in-Chief of the Springer journal Autonomous Robots.

02/15/13 Jiri Najemnik Sequence Optimization in Engineering, Artificial Intelligence and Biology

Part 1: Linear equivalent of dynamic programming. We show that the Bellman equation of dynamic programming can be replaced by an equally simple linear equation for the so-called optimal ranking function, which encodes the optimal sequence via its greedy maximization. This optimal ranking function represents a Gibbs distribution which minimizes the expected sequence cost at a given entropy level (set by a temperature parameter). Each temperature level gives rise to a linearly computable optimal ranking function.
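The abstract does not spell out the construction, but one well-known way a Bellman recursion becomes linear is Todorov-style linearly-solvable control: exponentiating the value function at temperature T turns the log-sum-exp backup into a linear fixed point. The sketch below illustrates that idea on a made-up four-state chain; the costs, transition matrix, and goal state are assumptions for illustration, not details from the talk.

```python
import numpy as np

# Hypothetical 4-state chain: states 0..3, state 3 is an absorbing goal.
# c[s] is the per-step state cost, P[s, s'] the passive transition law.
c = np.array([1.0, 1.0, 1.0, 0.0])
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.25, 0.5, 0.25, 0.0],
    [0.0, 0.25, 0.5, 0.25],
    [0.0, 0.0, 0.0, 1.0],
])
T = 1.0  # temperature: higher T allows higher-entropy (more random) solutions

# Exponentiating the soft Bellman recursion turns it into a *linear*
# fixed point for the desirability z(s) = exp(-v(s)/T):
#     z = G @ P @ z,   with G = diag(exp(-c/T))
G = np.diag(np.exp(-c / T))
z = np.ones(len(c))
for _ in range(200):          # simple fixed-point iteration
    z_new = G @ P @ z
    z_new[3] = 1.0            # the zero-cost goal state keeps z = 1
    z = z_new

# Greedily maximizing z over successors recovers the optimal sequence;
# the ordinary value function is recovered by v = -T log z.
v = -T * np.log(z)
```

States closer to the goal end up with higher desirability z (lower value v), so greedy ascent on z walks the chain toward the goal.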

Part 2: Predictive state representation with entropy level constraint. Building on part 1, we show that if one specifies the entropy level of the input's stochastic process, then its Bayesian inference for the purposes of optimal learning can be simplified greatly. We conceptualize an idealized nervous system that is an online input-output transformer of binary vectors representing the neurons' firing states, and we ask how one would adjust the input-output mapping optimally to minimize the expected cost. We will argue that predictive state representations could be employed by a nervous system.

Part 3: Evidence of optimal predictive control of human eyes. We present evidence of optimal-like predictive control of human eyes in visual search for a small camouflaged target. To a striking degree, human searchers behave as if maintaining a map of beliefs (represented as probabilities) about the target location, updating their beliefs with visual data obtained on each fixation using Bayes' rule, and moving their eyes online in order to maximize the expected information gain. Some of these results were published in Nature.
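As a toy illustration of the ideal-searcher loop described above (belief map, Bayes update per fixation, next fixation chosen to maximize expected information gain), here is a minimal 1-D sketch. The grid size, noise model, and detection likelihood are invented for illustration and are far simpler than the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20                         # hypothetical 1-D grid of candidate locations
belief = np.full(n, 1.0 / n)   # uniform prior over target position
target = 13                    # true (hidden) target location
noise = 2.0                    # detection falloff scale (an assumption)

def likelihood(fix, present):
    """P(response | target at each location) for a fixation at `fix`.
    Toy model: detection reliability falls off with distance from fixation."""
    d = np.abs(np.arange(n) - fix)
    p_detect = np.exp(-d**2 / (2 * noise**2))   # hit rate if target is there
    return p_detect if present else 1.0 - p_detect

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for _ in range(10):
    # Choose the fixation with maximal expected reduction in belief entropy.
    gains = []
    for fix in range(n):
        g = entropy(belief)
        for present in (True, False):
            lik = likelihood(fix, present)
            p_resp = np.sum(lik * belief)     # predictive prob of this response
            if p_resp > 0:
                post = lik * belief / p_resp
                g -= p_resp * entropy(post)
        gains.append(g)
    fix = int(np.argmax(gains))

    # Simulate a noisy detection response at that fixation, then Bayes update.
    detected = rng.random() < likelihood(fix, True)[target]
    lik = likelihood(fix, detected)
    belief = lik * belief
    belief /= belief.sum()
```

After a handful of fixations the belief map sharpens (its entropy drops below the uniform prior's), mirroring the paper's picture of search as sequential information maximization.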

03/01/13 Richard Newcombe Beyond Point Clouds: Adventures in Real-time Dense SLAM

One clear direction for the near future of robotics makes use of the ability to build and keep up to date geometric models of the environment. In this talk I will present an overview of my work in monocular real-time dense surface SLAM (simultaneous localisation and mapping) which aims to provide such geometric models using only a single passive colour or depth camera and without further specific hardware or infrastructure requirements. In contrast to previous SLAM systems which utilised sparser point cloud scene representations, the systems I will present, which include KinectFusion and DTAM, simultaneously estimate a camera pose together with a full dense surface estimate of the scene. Such dense surface mapping results in physically predictive models that are more useful for geometry aware augmented reality and robotics applications. Crucially, representing the scene using surfaces enables elegant dense image tracking techniques to be used in estimating the camera pose, resulting in robustness to high speed agile camera motion. I'll provide a real-time demonstration of these techniques which are useful not only in robust camera tracking, but also in object tracking in general. Finally, I'll outline our latest work in moving beyond surface estimation to incorporating objects into the dense SLAM pipeline.
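For readers unfamiliar with the dense representations mentioned here: the core of KinectFusion-style mapping is a per-voxel weighted running average of truncated signed distance (TSDF) values, with the surface recovered as the zero crossing. Below is a deliberately tiny 1-D sketch of that fusion rule; real systems fuse a full 3-D voxel grid on the GPU, and the truncation distance and depth readings here are made up.

```python
import numpy as np

# Toy 1-D TSDF fusion along a single camera ray.
trunc = 0.1                           # truncation distance in metres (assumed)
voxels = np.linspace(0.0, 1.0, 101)   # voxel centres along the ray
tsdf = np.zeros_like(voxels)          # running weighted-average TSDF
weight = np.zeros_like(voxels)

def integrate(depth):
    """Fuse one depth measurement into the TSDF by weighted running average."""
    sdf = depth - voxels                   # signed distance to observed surface
    d = np.clip(sdf / trunc, -1.0, 1.0)    # truncate to [-1, 1]
    valid = sdf > -trunc                   # skip voxels far behind the surface
    w_new = weight[valid] + 1.0
    tsdf[valid] = (tsdf[valid] * weight[valid] + d[valid]) / w_new
    weight[valid] = w_new

# Noisy depth readings of a surface at 0.5 m average out in the fused field.
for depth in [0.49, 0.51, 0.50, 0.52, 0.48]:
    integrate(depth)

# The surface is recovered as the zero crossing among observed voxels.
seen = weight > 0
crossing = voxels[seen][np.argmin(np.abs(tsdf[seen]))]
```

The averaging is what makes the map "physically predictive": individual noisy depth frames cancel out, leaving a smooth implicit surface that tracking can lock onto.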

03/08/13 Tom Erez Model-Based Optimization for Intelligent Robot Control

Science-fiction robots can perform any task humans do and more. In reality, however, today's articulated robots are disappointingly limited in their motor skills. Current planning and control algorithms cannot provide the robot with the capacity for intelligent motor behavior - instead, control engineers must manually specify the motions of every task. This approach results in jerky motions (popularly stereotyped as “moving like a robot”) that cannot cope with unexpected changes. I study control methods that automate the job of the control engineer. I give the robot only a cost function that encodes the task in high-level terms: move forward, remain upright, bring an object, etc. The robot uses a model of itself and its surroundings to optimize its behavior, finding a solution that minimizes the future cost. This optimization-based approach can be applied to different problems, and in every case the robot alone decides how to solve the task. Re-optimizing in real time allows the robot to deal with unexpected deviations from the plan, generating robust and creative behavior that adapts to modeling errors and dynamic environments. In this talk, I will present the theoretic and algorithmic aspects needed to control articulated robots using model-based optimization. I will discuss how machine learning can be used to create better controllers, and share some of my work on trajectory optimization.

A preview of some of the work discussed in this talk can be seen here: https://dl.dropbox.com/u/57029/MedleyJan13.mp4 [a lower-quality version is also available on youtube: http://www.youtube.com/watch?v=t4JdSklL8w0 ]

03/15/13 Byron Boots Spectral Approaches to Learning Dynamical Systems

If we hope to build an intelligent agent, we have to solve (at least!) the following problem: by watching an incoming stream of sensor data, hypothesize an external world model which explains that data. For this purpose, an appealing model representation is a dynamical system. Sometimes we can use extensive domain knowledge to write down a dynamical system, however, for many domains, specifying a model by hand can be a time consuming process. This motivates an alternative approach: *learning* a dynamical system directly from sensor data. A popular assumption is that observations are generated from a hidden sequence of latent variables, but learning such a model directly from sensor data can be tricky. To discover the right latent state representation and model parameters, we must solve difficult temporal and structural credit assignment problems, often leading to a search space with a host of (bad) local optima. In this talk, I will present a very different approach. I will discuss how to model a dynamical system's belief space as a set of *predictions* of observable quantities. These so-called Predictive State Representations (PSRs) are very expressive and subsume popular latent variable models including Kalman filters and input-output hidden Markov models. One of the primary advantages of PSRs over latent variable formulations of dynamical systems is that model parameters can be estimated directly from moments of observed data using a recently discovered class of spectral learning algorithms. Unlike the popular EM algorithm, spectral learning algorithms are statistically consistent, computationally efficient, and easy to implement using established matrix-algebra techniques. The result is a powerful framework for learning dynamical system models directly from data.
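To give a flavor of the "moments of observed data" idea (this is not the full PSR algorithm from the talk), the sketch below simulates a 2-state hidden Markov model with invented parameters, estimates the pairwise observation moment matrix from simple counts, and reads the hidden dimension off its singular value spectrum, the starting point of spectral learning.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 2-state HMM with 3 discrete observations (parameters invented for the demo).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # hidden-state transition matrix
O = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])     # per-state observation probabilities

def sample(n):
    s, out = 0, []
    for _ in range(n):
        out.append(rng.choice(3, p=O[s]))
        s = rng.choice(2, p=T[s])
    return out

obs = sample(100_000)

# Second moment of the data: P2[i, j] ~= P(x_t = i, x_{t+1} = j).
# Spectral methods estimate such moments directly from counts...
P2 = np.zeros((3, 3))
for a, b in zip(obs, obs[1:]):
    P2[a, b] += 1
P2 /= P2.sum()

# ...and the singular value spectrum reveals the hidden dimension:
# a k-state model makes P2 approximately rank k (here k = 2).
svals = np.linalg.svd(P2, compute_uv=False)
```

No latent states were ever searched over: everything was computed in closed form from observable statistics, which is exactly why these methods avoid EM's local optima.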

3/30/12 Andrea Thomaz Designing Learning Interactions for Robots

In this talk I present recent work from the Socially Intelligent Machines Lab at Georgia Tech. One of the focuses of our lab is on Socially Guided Machine Learning, building robot systems that can learn from everyday human teachers. We look at standard Machine Learning interactions and redesign interfaces and algorithms to support the collection of learning input from naive humans. This talk starts with an initial investigation comparing self and social learning which motivates our recent work on Active Learning for robots. Then, I will present results from a study of robot active learning, which motivates two challenges: getting interaction timing right, and asking good questions. To address the first challenge we are building computational models of reciprocal social interactions. And to address the second challenge we are developing algorithms for generating Active Learning queries in embodied learning tasks.

Dr. Andrea L. Thomaz is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. She directs the Socially Intelligent Machines lab, which is affiliated with the Robotics and Intelligent Machines (RIM) Center and with the Graphics Visualization and Usability (GVU) Center. She earned a B.S. in Electrical and Computer Engineering from the University of Texas at Austin in 1999, and Sc.M. and Ph.D. degrees from MIT in 2002 and 2006. Dr. Thomaz is published in the areas of Artificial Intelligence, Robotics, Human-Robot Interaction, and Human-Computer Interaction. She received an ONR Young Investigator Award in 2008, and an NSF CAREER award in 2010. Her work has been featured on the front page of the New York Times, and in 2009 she was named one of MIT Technology Review’s TR 35.

4/6/12 Javier Movellan Towards a New Science of Learning

Advances in machine learning, machine perception, neuroscience, and control theory are making possible the emergence of a new science of learning. This discipline could help us understand the role of learning in the development of human intelligence, and to create machines that can learn from experience and that can accelerate human learning and education. I will propose that key to this emerging science is the commitment to computational analysis, for which the framework of probability theory and stochastic optimal control is particularly well suited, and to the testing of theories using physical real time robotic implementations. I will describe our efforts to help understand learning and development from a computational point of view. This includes development of machine perception primitives for social interaction, development of social robots to enrich early childhood education, computational analysis of rich databases of early social behavior, and development of sophisticated humanoid robots to understand the emergence of sensory-motor intelligence in infants.

4/13/12 Emanuel Todorov Automatic Synthesis of Complex Behaviors with Optimal Control

In this talk I will show videos of complex motor behaviors synthesized automatically using new optimal control methods, and explain how these methods work. The behaviors include getting up from an arbitrary pose on the ground, walking, hopping, swimming, kicking, climbing, hand-stands, and cooperative actions. The synthesis methods fall in two categories. The first is online trajectory optimization or model-predictive control (MPC). The idea is to optimize the movement trajectory at every step of the estimation-control loop up to some time horizon (in our case about half a second), execute only the beginning portion of the trajectory, and repeat the optimization at the next time step (say 10 msec later). This approach has been used extensively in domains such as chemical process control where the dynamics are sufficiently slow and smooth to make online optimization possible. We have now developed a number of algorithmic improvements, allowing us to apply MPC to robotic systems. This requires a fast physics engine (for computing derivatives via finite differencing) which we have also developed. The second method is based on the realization that most movements performed on land are made for the purpose of establishing contact with the environment, and exerting contact forces. This suggests that contact events should not be treated as side-effects of multi-joint kinematics and dynamics, but rather as explicit decision variables. We have developed a method where the optimizer directly specifies the desired contact events, using continuous decision variables, and at the same time optimizes the movement trajectory in a way consistent with the specified contact events. This makes it possible to optimize movement trajectories with many contact events, without need for manual scripting, motion capture or fortuitous choice of "features".
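The first method (receding-horizon re-optimization) can be seen in miniature on a linear system: optimize over a short horizon, execute only the first control, and re-solve at the next step. The sketch below does this for a double integrator; the costs, horizon, and time step are assumed values, and the talk's systems are of course nonlinear and use a physics engine for derivatives.

```python
import numpy as np

# Receding-horizon (MPC) control of a toy double integrator.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # state: [position, velocity], dt = 0.1 s
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])          # running state cost
R = np.array([[0.01]])           # control cost
H = 20                           # optimization horizon (~2 s of lookahead)

def first_action(x):
    """Optimize over the horizon, return only the first control (MPC)."""
    # Backward Riccati recursion for the finite-horizon LQ problem.
    P = Q.copy()
    for _ in range(H):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x                # K from the final backup = earliest time step

x = np.array([1.0, 0.0])         # start 1 m from the goal, at rest
for _ in range(100):             # execute first action, re-plan, repeat
    u = first_action(x)
    x = A @ x + B @ u
```

Re-solving from the current state at every step is what gives MPC its robustness: a disturbance to `x` is simply absorbed into the next optimization rather than invalidating a precomputed plan.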

4/20/12 Andrew Barto Autonomous Robot Acquisition of Transferable Skills

A central goal of artificial intelligence is the design of agents that can learn to achieve increasingly complex behavior over time. An important type of cumulative learning is the acquisition of procedural knowledge in the form of skills, allowing an agent to abstract away from low-level motor control and plan and learn at a higher level, and thus progressively improving its problem solving abilities and creating further opportunities for learning. I describe a robot system that learns to sequence innate controllers to solve a task, and then extracts components of that solution as transferable skills. The resulting skills improve the robot’s ability to learn to solve a second task. This system was developed by Dr. George Konidaris, who received the Ph.D. from the University of Massachusetts Amherst in 2010 and is currently a Postdoctoral Associate in the Learning and Intelligent Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory.

4/27/12 Dieter Fox Grounding Natural Language in Robot Control Systems

Robots are becoming more and more capable at reasoning about people, objects, and activities in their environments. The ability to extract high-level semantic information from sensor data provides new opportunities for human-robot interaction. One such opportunity is to explore interacting with robots via natural language. In this talk I will present our preliminary work toward enabling robots to interpret, or ground, natural language commands in robot control systems. We build on techniques developed by the semantic natural language processing community on learning grammars that parse natural language input to logic-based semantic meaning. I will demonstrate early results in two application domains: First, learning to follow natural language directions through indoor environments; and, second, learning to ground (simple) object attributes via weakly supervised training. Joint work with Luke Zettlemoyer, Cynthia Matuszek, Nicholas Fitzgerald, and Liefeng Bo. Support provided by Intel ISTC-PC, NSF, ARL, and ONR.

5/4/12 Allison Okamura Robot-Assisted Needle Steering

Robot-assisted needle steering is a promising technique to improve the effectiveness of needle-based medical procedures by allowing redirection of a needle's path within tissue. Our robot employs a tip-based steering technique, in which the asymmetric tips of long, thin, flexible needles develop tip forces orthogonal to the needle shaft due to interaction with surrounding tissue. The robot steers a needle though two input degrees of freedom, insertion along and rotation about the needle shaft, in order to achieve six-degree-of-freedom positioning of the needle tip. A closed-loop system for asymmetric-tip needle steering was developed, including devices, models and simulations, path planners, controllers, and integration with medical imaging. I will present results from testing needle steering in artificial and biological tissues, and discuss ongoing work toward clinical applications. This project is a collaboration between researchers at Johns Hopkins University, UC Berkeley, and Stanford University.
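A minimal planar caricature of tip-based steering may help: insertion drives the tip along an arc at the needle's natural curvature, and a 180° rotation of the shaft flips which way the arc bends. The actual system is a 3-D nonholonomic model; the curvature value, step size, and the alternating-rotation scheme below are illustrative assumptions.

```python
import numpy as np

kappa = 2.0          # natural curvature of the bevel-tip path, 1/m (assumed)
ds = 0.001           # insertion integration step, m

def insert(state, length, flipped):
    """Insert the needle `length` metres. A 180-degree shaft rotation flips
    which side the bevel faces, negating the path curvature."""
    x, y, theta = state
    k = -kappa if flipped else kappa
    for _ in range(int(round(length / ds))):
        x += ds * np.cos(theta)
        y += ds * np.sin(theta)
        theta += ds * k
    return (x, y, theta)

# Steering with a fixed bevel traces a circular arc...
curved = (0.0, 0.0, 0.0)
for i in range(4):
    curved = insert(curved, 0.05, flipped=False)

# ...while flipping the bevel between insertions cancels most of the
# lateral deflection, approximating a straighter path.
alternating = (0.0, 0.0, 0.0)
for i in range(4):
    alternating = insert(alternating, 0.05, flipped=(i % 2 == 1))
```

Because insertion and axial rotation are the only inputs, all lateral tip motion must be generated this way, which is why planning over rotation sequences is central to the closed-loop system described above.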

Dr. Allison M. Okamura received the BS degree from the University of California at Berkeley in 1994, and the MS and PhD degrees from Stanford University in 1996 and 2000, respectively, all in mechanical engineering. She is currently Associate Professor in the mechanical engineering department at Stanford University. She was previously Professor and Vice Chair of mechanical engineering at Johns Hopkins University. She has been an associate editor of the IEEE Transactions on Haptics, an editor for the IEEE International Conference on Robotics and Automation Conference Editorial Board, and co-chair of the IEEE Haptics Symposium. Her awards include the 2009 IEEE Technical Committee on Haptics Early Career Award, the 2005 IEEE Robotics and Automation Society Early Academic Career Award, and the 2004 NSF CAREER Award. She is an IEEE Fellow. Her interests include haptics, teleoperation, virtual environments and simulators, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. For more information about her work, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website: http://charm.stanford.edu.

5/11/12 Blake Hannaford Click the Scalpel -- Better Patient Outcomes by Advancing Robotics in Surgery

Surgery is a demanding unstructured physical manipulation task involving highly trained humans, advanced tools, networked information systems, and uncertainty. This talk will review engineering and scientific research at the University of Washington Biorobotics Lab, aimed at better care of patients, including remote patients in extreme environments. The Raven interoperable robot surgery research system is a telemanipulation system for exploration and training in surgical robotics. We are currently near completion of seven "Raven-II" systems which will be deployed at leading surgical robotics research centers to create an interoperable network of testbeds. Highly effective and safe surgical teleoperation systems of the future will provide high quality haptic feedback. Research in systems theory and human perception addressing that goal will also be introduced.

Dr. Blake Hannaford, Ph.D., is Professor of Electrical Engineering, Adjunct Professor of Bioengineering, Mechanical Engineering, and Surgery at the University of Washington. He received the B.S. degree in Engineering and Applied Science from Yale University in 1977, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Berkeley, in 1982 and 1985 respectively. Before graduate study, he held engineering positions in digital hardware and software design, office automation, and medical image processing. At Berkeley he pursued thesis research in multiple target tracking in medical images and the control of time-optimal voluntary human movement. From 1986 to 1989 he worked on the remote control of robot manipulators in the Man-Machine Systems Group in the Automated Systems Section of the NASA Jet Propulsion Laboratory, Caltech. He supervised that group from 1988 to 1989. Since September 1989, he has been at the University of Washington in Seattle, where he has been Professor of Electrical Engineering since 1997, and served as Associate Chair for Education from 1999 to 2001. He was awarded the National Science Foundation's Presidential Young Investigator Award and the Early Career Achievement Award from the IEEE Engineering in Medicine and Biology Society and is an IEEE Fellow. His currently active interests include haptic displays on the Internet, and surgical robotics. He has consulted on robotic surgical devices with the Food and Drug Administration Panel on surgical devices.

5/25/12 Malcolm MacIver Robotic Electrolocation Electrolocation is used by the weakly electric fish of South America and Africa to navigate and hunt in murky water where vision is ineffective. These fish generate an AC electric field that is perturbed by objects nearby that differ in impedance from the water. Electroreceptors covering the body of the fish report the amplitude and phase of the local field. The animal decodes electric field perturbations into information about its surroundings. Electrolocation is fundamentally divergent from optical vision (and other imaging methods) that create projective images of 3D space. Current electrolocation methods are also quite different from electrical impedance tomography. We will describe current electrolocation technology, and progress on development of a propulsion system inspired by electric fish to provide the precise movement capabilities that this short-range sensing approach requires.

Dr. Malcolm MacIver is Associate Professor at Northwestern University with joint appointments in the Mechanical Engineering and Biomedical Engineering departments. He is interested in the neural and mechanical basis of animal behavior, evolution, and the implications of the close coupling of movement with gathering information for our understanding of intelligence and consciousness. He also develops immersive art installations that have been exhibited internationally.

6/1/12 Drew Bagnell Imitation Learning, Inverse Optimal Control and Purposeful Prediction

Programming robots is hard. While demonstrating a desired behavior may be easy, designing a system that behaves this way is often difficult, time consuming, and ultimately expensive. Machine learning promises to enable "programming by demonstration" for developing high-performance robotic systems. Unfortunately, many approaches that utilize the classical tools of supervised learning fail to meet the needs of imitation learning. I'll discuss the problems that result from ignoring the effect of actions influencing the world, and I'll highlight simple "reduction- based" approaches that, both in theory and in practice, mitigate these problems. I'll demonstrate the resulting approach on the development of reactive controllers for cluttered UAV flight and for video game systems. Additionally, robotic systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in lieu of a supervised learning approach often leads to poor and myopic performance. While planners have demonstrated dramatic success in applications ranging from legged locomotion to outdoor unstructured navigation, such algorithms rely on fully specified cost functions that map sensor readings and environment models to a scalar cost. Such cost functions are usually manually designed and programmed. Recently, our group has developed a set of techniques that learn these functions from human demonstration by applying an Inverse Optimal Control (IOC) approach to find a cost function for which planned behavior mimics an expert's demonstration. These approaches shed new light on the intimate connections between probabilistic inference and optimal control. I'll consider case studies in activity forecasting of drivers and pedestrians as well as the imitation learning of robotic locomotion and rough-terrain navigation. These case-studies highlight key challenges in applying the algorithms in practical settings. J. 
Andrew Bagnell is an Associate Professor with the Robotics Institute, the National Robotics Engineering Center and the Machine Learning Department at Carnegie Mellon University. His research centers on the theory and practice of machine learning for decision making and robotics.

Dr. Bagnell directs the Learning, AI, and Robotics Laboratory (LAIRLab) within the Robotics Institute. Dr. Bagnell serves as the director of the Robotics Institute Summer Scholars program, a summer research experience in robotics for undergraduates throughout the world. Dr. Bagnell and his group's research has won awards in both the robotics and machine learning communities including at the International Conference on Machine Learning, Robotics Science and Systems, and the International Conference on Robotics and Automation. Dr. Bagnell's current projects focus on machine learning for dexterous manipulation, decision making under uncertainty, ground and aerial vehicle control, and robot perception. Prior to joining the faculty, Prof. Bagnell received his doctorate at Carnegie Mellon in 2004 with a National Science Foundation Graduate Fellowship and completed undergraduate studies with highest honors in electrical engineering at the University of Florida.

ing a model by hand can be a time consuming process. This motivates an alternative approach: *learning* a dynamical system directly from sensor data. A popular assumption is that observations are generated from a hidden sequence of latent variables, but learning such a model directly from sensor data can be tricky. To discover the right latent state representation and model parameters, we must solve difficult temporal and structural credit assignment problems, often leading to a search space with a host of (bad) local optima. In this talk, I will present a very different approach. I will discuss how to model a dynamical system's belief space as a set of *predictions* of observable quantities. These so-called Predictive State Representations (PSRs) are very expressive and subsume popular latent variable models including Kalman filters and input-output hidden Markov models. One of the primary advantages of PSRs over latent variable formulations of dynamical systems is that model parameters can be estimated directly from moments of observed data using a recently discovered class of spectral learning algorithms. Unlike the popular EM algorithm, spectral learning algorithms are statistically consistent, computationally efficient, and easy to implement using established matrix-algebra techniques. The result is a powerful framework for learning dynamical system models directly from data.

3/30/12 Andrea Thomaz Designing Learning Interactions for Robots

In this talk I present recent work from the Socially Intelligent Machines Lab at Georgia Tech. One focus of our lab is Socially Guided Machine Learning: building robot systems that can learn from everyday human teachers. We look at standard machine learning interactions and redesign interfaces and algorithms to support the collection of learning input from naive humans. This talk starts with an initial investigation comparing self and social learning, which motivates our recent work on Active Learning for robots. Then, I will present results from a study of robot active learning, which motivates two challenges: getting interaction timing right, and asking good questions. To address the first challenge, we are building computational models of reciprocal social interactions; to address the second, we are developing algorithms for generating Active Learning queries in embodied learning tasks.
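The query-selection problem the abstract raises ("asking good questions") is often illustrated with uncertainty sampling. The sketch below is a generic stand-alone example, not the lab's actual system: a tiny logistic-regression learner, trained on a couple of labeled seed points, asks about the pool point it is least sure of.

```python
import numpy as np

def train_logreg(X, y, epochs=500, lr=0.5):
    """Fit a tiny logistic-regression model by batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def pick_query(w, X_pool):
    """Uncertainty sampling: query the pool point whose predicted
    probability is closest to 0.5, i.e. the model's most uncertain point."""
    p = 1.0 / (1.0 + np.exp(-X_pool @ w))
    return int(np.argmin(np.abs(p - 0.5)))

# Two labeled seed examples (feature, bias term), then a pool of candidates.
X_seed = np.array([[-2.0, 1.0], [2.0, 1.0]])
y_seed = np.array([0.0, 1.0])
w = train_logreg(X_seed, y_seed)
# The pool point at -0.1 sits near the learned decision boundary at 0,
# so it is the most informative one to ask a teacher about.
query = pick_query(w, np.array([[-3.0, 1.0], [-0.1, 1.0], [3.0, 1.0]]))
```

Embodied active learning adds constraints this sketch ignores (when to ask, and how to phrase a query a human can answer), which is exactly what the talk addresses.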

Dr. Andrea L. Thomaz is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. She directs the Socially Intelligent Machines lab, which is affiliated with the Robotics and Intelligent Machines (RIM) Center and with the Graphics Visualization and Usability (GVU) Center. She earned a B.S. in Electrical and Computer Engineering from the University of Texas at Austin in 1999, and Sc.M. and Ph.D. degrees from MIT in 2002 and 2006. Dr. Thomaz is published in the areas of Artificial Intelligence, Robotics, Human-Robot Interaction, and Human-Computer Interaction. She received an ONR Young Investigator Award in 2008, and an NSF CAREER award in 2010. Her work has been featured on the front page of the New York Times, and in 2009 she was named one of MIT Technology Review’s TR 35.

4/6/12 Javier Movellan Towards a New Science of Learning

Advances in machine learning, machine perception, neuroscience, and control theory are making possible the emergence of a new science of learning. This discipline could help us understand the role of learning in the development of human intelligence, and create machines that can learn from experience and that can accelerate human learning and education. I will propose that key to this emerging science is a commitment to computational analysis, for which the framework of probability theory and stochastic optimal control is particularly well suited, and to the testing of theories using physical real-time robotic implementations. I will describe our efforts to help understand learning and development from a computational point of view. This includes development of machine perception primitives for social interaction, development of social robots to enrich early childhood education, computational analysis of rich databases of early social behavior, and development of sophisticated humanoid robots to understand the emergence of sensory-motor intelligence in infants.

4/13/12 Emanuel Todorov Automatic Synthesis of Complex Behaviors with Optimal Control

In this talk I will show videos of complex motor behaviors synthesized automatically using new optimal control methods, and explain how these methods work. The behaviors include getting up from an arbitrary pose on the ground, walking, hopping, swimming, kicking, climbing, handstands, and cooperative actions. The synthesis methods fall into two categories. The first is online trajectory optimization or model-predictive control (MPC). The idea is to optimize the movement trajectory at every step of the estimation-control loop up to some time horizon (in our case about half a second), execute only the beginning portion of the trajectory, and repeat the optimization at the next time step (say 10 msec later). This approach has been used extensively in domains such as chemical process control where the dynamics are sufficiently slow and smooth to make online optimization possible. We have now developed a number of algorithmic improvements, allowing us to apply MPC to robotic systems. This requires a fast physics engine (for computing derivatives via finite differencing) which we have also developed. The second method is based on the realization that most movements performed on land are made for the purpose of establishing contact with the environment, and exerting contact forces. This suggests that contact events should not be treated as side-effects of multi-joint kinematics and dynamics, but rather as explicit decision variables. We have developed a method where the optimizer directly specifies the desired contact events, using continuous decision variables, and at the same time optimizes the movement trajectory in a way consistent with the specified contact events. This makes it possible to optimize movement trajectories with many contact events, without need for manual scripting, motion capture or fortuitous choice of "features".
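The receding-horizon loop described above (optimize a trajectory over a short horizon, execute only the first control, re-optimize at the next step) can be sketched on a toy double-integrator system. This is an illustrative example, not the speaker's implementation: in place of a nonlinear trajectory optimizer it uses an exact finite-horizon LQR solve via a backward Riccati recursion, and all gains and costs are made up for the example.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator: position, velocity
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                  # state cost (drive both to zero)
R = np.array([[0.01]])                   # control-effort cost

def plan(x0, H=20):
    """Optimize a length-H trajectory: backward Riccati recursion gives
    the finite-horizon LQR gains, then roll the plan forward from x0."""
    P, Ks = Q.copy(), []
    for _ in range(H):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    Ks.reverse()
    us, x = [], x0.copy()
    for K in Ks:
        u = -K @ x
        us.append(u)
        x = A @ x + B @ u
    return us

def mpc_step(x):
    """Receding horizon: execute only the first control of the fresh plan."""
    return plan(x)[0]

x = np.array([1.0, 0.0])   # start 1 m from the goal, at rest
for _ in range(100):       # re-plan at every step of the control loop
    x = A @ x + B @ mpc_step(x)
```

For the legged and acrobatic behaviors in the talk, the per-step optimization is nonlinear and contact-rich, which is why fast derivatives from a physics engine matter; the control-loop structure, however, is exactly this plan-execute-replan cycle.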

4/20/12 Andrew Barto Autonomous Robot Acquisition of Transferable Skills

A central goal of artificial intelligence is the design of agents that can learn to achieve increasingly complex behavior over time. An important type of cumulative learning is the acquisition of procedural knowledge in the form of skills, allowing an agent to abstract away from low-level motor control and plan and learn at a higher level, and thus progressively improving its problem solving abilities and creating further opportunities for learning. I describe a robot system that learns to sequence innate controllers to solve a task, and then extracts components of that solution as transferable skills. The resulting skills improve the robot’s ability to learn to solve a second task. This system was developed by Dr. George Konidaris, who received the Ph.D. from the University of Massachusetts Amherst in 2010 and is currently a Postdoctoral Associate in the Learning and Intelligent Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory.

4/27/12 Dieter Fox Grounding Natural Language in Robot Control Systems

Robots are becoming more and more capable at reasoning about people, objects, and activities in their environments. The ability to extract high-level semantic information from sensor data provides new opportunities for human-robot interaction. One such opportunity is to explore interacting with robots via natural language. In this talk I will present our preliminary work toward enabling robots to interpret, or ground, natural language commands in robot control systems. We build on techniques developed by the semantic natural language processing community for learning grammars that parse natural language input to logic-based semantic meaning. I will demonstrate early results in two application domains: first, learning to follow natural language directions through indoor environments; and second, learning to ground (simple) object attributes via weakly supervised training. Joint work with Luke Zettlemoyer, Cynthia Matuszek, Nicholas Fitzgerald, and Liefeng Bo. Support provided by Intel ISTC-PC, NSF, ARL, and ONR.
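As a rough illustration of what "grounding" means here (and nothing like the learned probabilistic grammars the talk describes), the toy sketch below maps command phrases to fragments of a logic-based meaning representation through a hand-written, hypothetical lexicon; the real contribution is learning such mappings from data rather than writing them by hand.

```python
# Hypothetical toy lexicon: each phrase maps to a fragment of a
# logic-based meaning representation (a predicate or an argument).
LEXICON = {
    "go to": ("pred", "go_to"),
    "pick up": ("pred", "pick_up"),
    "the kitchen": ("arg", "kitchen"),
    "the red block": ("arg", "block(red)"),
}

def ground(command):
    """Greedy phrase matching: find one predicate and one argument in the
    utterance and compose them into a command a controller could execute."""
    pred = arg = None
    for phrase, (kind, meaning) in LEXICON.items():
        if phrase in command:
            if kind == "pred":
                pred = meaning
            else:
                arg = meaning
    if pred is None or arg is None:
        return None   # utterance not covered by the lexicon
    return f"{pred}({arg})"

plan = ground("go to the kitchen")   # composes to "go_to(kitchen)"
```

A learned system replaces the fixed dictionary with a grammar induced from paired sentences and meanings, and handles compositional utterances this greedy matcher cannot.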

5/4/12 Allison Okamura Robot-Assisted Needle Steering

Robot-assisted needle steering is a promising technique to improve the effectiveness of needle-based medical procedures by allowing redirection of a needle's path within tissue. Our robot employs a tip-based steering technique, in which the asymmetric tips of long, thin, flexible needles develop tip forces orthogonal to the needle shaft due to interaction with surrounding tissue. The robot steers a needle through two input degrees of freedom, insertion along and rotation about the needle shaft, in order to achieve six-degree-of-freedom positioning of the needle tip. A closed-loop system for asymmetric-tip needle steering was developed, including devices, models and simulations, path planners, controllers, and integration with medical imaging. I will present results from testing needle steering in artificial and biological tissues, and discuss ongoing work toward clinical applications. This project is a collaboration between researchers at Johns Hopkins University, UC Berkeley, and Stanford University.
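The two-input kinematics described above (insertion plus axial rotation yielding full 6-DOF tip positioning) are often illustrated with a bevel-tip "unicycle" model: the tip advances along its own axis while bending with a fixed curvature, and spinning the shaft reorients the bending plane. The sketch below is a generic version of that model, not the speakers' actual system; the curvature, speeds, and step sizes are invented for illustration.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix: hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula for the SO(3) matrix exponential."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def step(p, R, v, omega, kappa, dt):
    """One integration step of the bevel-tip kinematic model: inserting at
    speed v moves the tip along its body z-axis while the asymmetric tip
    bends the path with curvature kappa; spinning the shaft at rate omega
    reorients the plane in which the needle bends."""
    p = p + R @ np.array([0.0, 0.0, v * dt])
    R = R @ expm_so3(np.array([kappa * v, 0.0, omega]) * dt)
    return p, R

# Insert 10 cm without spinning: the tip traces a circular arc,
# deflecting sideways while advancing.
p, R = np.zeros(3), np.eye(3)
for _ in range(100):
    p, R = step(p, R, v=0.01, omega=0.0, kappa=2.0, dt=0.1)
```

Planners and controllers for needle steering work on top of a model like this: duty-cycled spinning trades off effective curvature, and closed-loop imaging feedback corrects for tissue inhomogeneity the model ignores.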

Dr. Allison M. Okamura received the BS degree from the University of California at Berkeley in 1994, and the MS and PhD degrees from Stanford University in 1996 and 2000, respectively, all in mechanical engineering. She is currently Associate Professor in the mechanical engineering department at Stanford University. She was previously Professor and Vice Chair of mechanical engineering at Johns Hopkins University. She has been an associate editor of the IEEE Transactions on Haptics, an editor of the IEEE International Conference on Robotics and Automation Conference Editorial Board, and co-chair of the IEEE Haptics Symposium. Her awards include the 2009 IEEE Technical Committee on Haptics Early Career Award, the 2005 IEEE Robotics and Automation Society Early Academic Career Award, and the 2004 NSF CAREER Award. She is an IEEE Fellow. Her interests include haptics, teleoperation, virtual environments and simulators, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. For more information about our work, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website: http://charm.stanford.edu.

5/11/12 Blake Hannaford Click the Scalpel -- Better Patient Outcomes by Advancing Robotics in Surgery

Surgery is a demanding unstructured physical manipulation task involving highly trained humans, advanced tools, networked information systems, and uncertainty. This talk will review engineering and scientific research at the University of Washington Biorobotics Lab, aimed at better care of patients, including remote patients in extreme environments. The Raven interoperable robot surgery research system is a telemanipulation system for exploration and training in surgical robotics. We are currently near completion of seven "Raven-II" systems which will be deployed at leading surgical robotics research centers to create an interoperable network of testbeds. Highly effective and safe surgical teleoperation systems of the future will provide high quality haptic feedback. Research in systems theory and human perception addressing that goal will also be introduced.

Dr. Blake Hannaford is Professor of Electrical Engineering, Adjunct Professor of Bioengineering, Mechanical Engineering, and Surgery at the University of Washington. He received the B.S. degree in Engineering and Applied Science from Yale University in 1977, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Berkeley, in 1982 and 1985, respectively. Before graduate study, he held engineering positions in digital hardware and software design, office automation, and medical image processing. At Berkeley he pursued thesis research in multiple target tracking in medical images and the control of time-optimal voluntary human movement. From 1986 to 1989 he worked on the remote control of robot manipulators in the Man-Machine Systems Group in the Automated Systems Section of the NASA Jet Propulsion Laboratory, Caltech. He supervised that group from 1988 to 1989. Since September 1989, he has been at the University of Washington in Seattle, where he has been Professor of Electrical Engineering since 1997, and served as Associate Chair for Education from 1999 to 2001. He was awarded the National Science Foundation's Presidential Young Investigator Award and the Early Career Achievement Award from the IEEE Engineering in Medicine and Biology Society, and is an IEEE Fellow. His current research interests include haptic displays on the Internet and surgical robotics. He has consulted on robotic surgical devices with the Food and Drug Administration Panel on surgical devices.

5/25/12 Malcolm MacIver Robotic Electrolocation

Electrolocation is used by the weakly electric fish of South America and Africa to navigate and hunt in murky water where vision is ineffective. These fish generate an AC electric field that is perturbed by nearby objects whose impedance differs from that of the water. Electroreceptors covering the body of the fish report the amplitude and phase of the local field. The animal decodes electric field perturbations into information about its surroundings. Electrolocation is fundamentally different from optical vision and other imaging methods that create projective images of 3D space. Current electrolocation methods are also quite different from electrical impedance tomography. We will describe current electrolocation technology, and progress on development of a propulsion system inspired by electric fish to provide the precise movement capabilities that this short-range sensing approach requires.

Dr. Malcolm MacIver is Associate Professor at Northwestern University with joint appointments in the Mechanical Engineering and Biomedical Engineering departments. He is interested in the neural and mechanical basis of animal behavior, evolution, and the implications of the close coupling of movement with gathering information for our understanding of intelligence and consciousness. He also develops immersive art installations that have been exhibited internationally.

6/1/12 Drew Bagnell Imitation Learning, Inverse Optimal Control and Purposeful Prediction

Programming robots is hard. While demonstrating a desired behavior may be easy, designing a system that behaves this way is often difficult, time consuming, and ultimately expensive. Machine learning promises to enable "programming by demonstration" for developing high-performance robotic systems. Unfortunately, many approaches that utilize the classical tools of supervised learning fail to meet the needs of imitation learning. I'll discuss the problems that result from ignoring the effect of actions influencing the world, and I'll highlight simple "reduction-based" approaches that, both in theory and in practice, mitigate these problems. I'll demonstrate the resulting approach on the development of reactive controllers for cluttered UAV flight and for video game systems. Additionally, robotic systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in favor of a supervised learning approach often leads to poor and myopic performance. While planners have demonstrated dramatic success in applications ranging from legged locomotion to outdoor unstructured navigation, such algorithms rely on fully specified cost functions that map sensor readings and environment models to a scalar cost. Such cost functions are usually manually designed and programmed. Recently, our group has developed a set of techniques that learn these functions from human demonstration by applying an Inverse Optimal Control (IOC) approach to find a cost function for which planned behavior mimics an expert's demonstration. These approaches shed new light on the intimate connections between probabilistic inference and optimal control. I'll consider case studies in activity forecasting of drivers and pedestrians as well as the imitation learning of robotic locomotion and rough-terrain navigation. These case studies highlight key challenges in applying the algorithms in practical settings.

J. Andrew Bagnell is an Associate Professor with the Robotics Institute, the National Robotics Engineering Center and the Machine Learning Department at Carnegie Mellon University. His research centers on the theory and practice of machine learning for decision making and robotics.

Dr. Bagnell directs the Learning, AI, and Robotics Laboratory (LAIRLab) within the Robotics Institute, and serves as director of the Robotics Institute Summer Scholars program, a summer research experience in robotics for undergraduates from throughout the world. Research by Dr. Bagnell and his group has won awards in both the robotics and machine learning communities, including at the International Conference on Machine Learning, Robotics: Science and Systems, and the International Conference on Robotics and Automation. His current projects focus on machine learning for dexterous manipulation, decision making under uncertainty, ground and aerial vehicle control, and robot perception. Prior to joining the faculty, he received his doctorate at Carnegie Mellon in 2004 with a National Science Foundation Graduate Fellowship and completed undergraduate studies with highest honors in electrical engineering at the University of Florida.
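The inverse optimal control idea in the abstract above (find cost weights under which the planner's behavior mimics the expert's) can be sketched with a toy subgradient method in the spirit of max-margin planning: repeatedly plan under the current weights, then raise the weight of features the planner uses more than the expert demonstration did. Everything here (the grid, terrain classes, step size) is hypothetical, and the loss augmentation used in practice is omitted for brevity.

```python
import heapq
import numpy as np

# Hypothetical 3x4 grid with two terrain classes (0 = grass, 1 = mud);
# each cell's feature vector is a one-hot over the two classes.
TERRAIN = np.array([[0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [0, 0, 0, 0]])

def features(cell):
    f = np.zeros(2)
    f[TERRAIN[cell]] = 1.0
    return f

def plan(w, start, goal):
    """Dijkstra over the grid with per-cell cost w @ features(cell)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, c = heapq.heappop(pq)
        if c == goal:
            break
        if d > dist[c]:
            continue
        r, q = c
        for nb in ((r + 1, q), (r - 1, q), (r, q + 1), (r, q - 1)):
            if 0 <= nb[0] < TERRAIN.shape[0] and 0 <= nb[1] < TERRAIN.shape[1]:
                nd = d + w @ features(nb)
                if nd < dist.get(nb, np.inf):
                    dist[nb], prev[nb] = nd, c
                    heapq.heappush(pq, (nd, nb))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def path_features(path):
    return sum(features(c) for c in path[1:])

# The expert demonstration detours around the mud.
expert = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3), (1, 3), (0, 3)]
w = np.ones(2)
for _ in range(50):
    planned = plan(w, (0, 0), (0, 3))
    # Subgradient step: raise the weight of features the planner uses
    # more than the expert did, then keep all cell costs positive.
    w += 0.15 * (path_features(planned) - path_features(expert))
    w = np.maximum(w, 0.01)
```

After a few updates the mud weight exceeds the grass weight and the planner reproduces the expert's detour, which is the essence of recovering a cost function from demonstration rather than hand-tuning it.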

Last changed Mon, 2013-05-06 18:18