Title: Structured Deep Visual Models for Robot Manipulation
Advisor: Dieter Fox
Supervisory Committee: Dieter Fox (Chair), Blake Hannaford (GSR, EE), Sidd Srinivasa, and Byron Boots (Georgia Tech)
Abstract: The ability to predict how an environment changes in response to applied forces is fundamental for a robot to achieve specific goals. Traditionally, robotics has addressed this problem with pre-specified models or physics simulators that exploit prior knowledge of the problem structure. While such models are general and broadly applicable, they depend on accurate estimation of model parameters such as object shape, mass, and friction. On the other hand, learning-based methods such as Predictive State Representations or more recent deep learning approaches learn these models directly from raw perceptual information in a model-free manner. These methods operate on raw data without any intermediate parameter estimation, but lack the structure and generality of model-based techniques.