Title: Sim2Real Collision Avoidance for Indoor Navigation of Mobile Robots via Deep Reinforcement Learning

Advisor: Sergey Levine

Supervisory Committee: Sergey Levine (Chair), Behcet Acikmese (GSR, Aeronautics and Astronautics), Steve Seitz, Emo Todorov, and Larry Zitnick (Facebook)

Abstract: Deep reinforcement learning has emerged as a promising technique for automatically acquiring control policies that can process raw sensory inputs and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. Here, we explore the following question: can we train vision-based navigation policies entirely in simulation, and then transfer them to the real world without a single real training instance? We propose a learning method that we call (CAD)2RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models. Our learned collision avoidance policy is represented by a deep convolutional neural network that directly processes raw monocular images and outputs velocity commands. The policy is trained with a Monte Carlo policy evaluation algorithm that directly optimizes the network’s ability to produce collision-free paths. By heavily randomizing the rendering settings for our simulated training set, we show that we can train a policy that generalizes to the real world.
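The abstract describes two key ingredients: a convolutional network that maps a raw monocular image to velocity commands, and Monte Carlo rollouts in randomized simulated environments that supervise which commands remain collision-free. The snippet below is a minimal sketch of that structure, not the speaker's implementation; the class and function names, the 128x128 input size, the 41-way action discretization, and the `sim.reset`/`sim.render`/`sim.rollout` simulator interface are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a (CAD)2RL-style setup: a CNN scores a discretized set
# of velocity commands from one monocular image, and Monte Carlo rollouts in a
# randomized simulator label each command as collision-free (1) or not (0).
import torch
import torch.nn as nn


class CollisionAvoidancePolicy(nn.Module):
    """CNN mapping a grayscale image to logits over K discretized velocity commands."""

    def __init__(self, num_actions: int = 41):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Assumes 128x128 input images; the linear layer size changes otherwise.
        self.head = nn.Sequential(
            nn.Linear(64 * 12 * 12, 256), nn.ReLU(),
            nn.Linear(256, num_actions),  # logit of P(collision-free | image, action)
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))


def monte_carlo_labels(sim, policy, num_states: int, num_actions: int, horizon: int):
    """Label (image, action) pairs by rolling out in simulation.

    Entry (i, k) of the returned label matrix is 1.0 if taking command k from
    state i and then following the current policy stays collision-free for
    `horizon` steps. The simulator API here is hypothetical.
    """
    images, labels = [], []
    for _ in range(num_states):
        # Randomized rendering (textures, lighting) is the sim-to-real ingredient.
        state = sim.reset(randomize_textures=True, randomize_lighting=True)
        images.append(sim.render(state))  # raw monocular image as a tensor
        labels.append(torch.tensor([
            float(sim.rollout(state, action=k, policy=policy, horizon=horizon))
            for k in range(num_actions)
        ]))
    return torch.stack(images), torch.stack(labels)
```

Under these assumptions, one plausible training setup fits the network with a binary cross-entropy loss against the Monte Carlo labels, and at test time the robot flies along the command with the highest predicted collision-free probability.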

Place: Sieg 322
When: Tuesday, April 25, 2017, 13:30 to 15:30