Title: Improved Object Pose Estimation via Deep Pre-touch Sensing

Advisors: Joshua Smith and Dieter Fox

Abstract: For certain manipulation tasks, pose estimation from head-mounted cameras may not be sufficiently accurate, because the coordinate frames of today's high-degree-of-freedom robot arms, which link the head to the end-effectors, cannot be perfectly calibrated. We present a novel framework that combines pre-touch sensing and deep learning to estimate pose more accurately and efficiently. Pre-touch sensing allows our method to localize the object directly with respect to the robot's end-effector, thereby avoiding error caused by miscalibration of the arms. Rather than requiring the robot to scan the entire object with its pre-touch sensor, we use a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information needed to refine its original pose estimate. Our region detection network was trained on a new dataset that contains objects of widely varying geometries and was labeled in a scalable fashion free from human bias. This dataset is applicable to any task in which a pre-touch sensor gathers geometric information, and it has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. We find that after a sequence of scans, an object can typically be localized to within 0.5 cm of its true position, and that the original pose estimate can often be significantly improved after a single quick scan.

Place: CSE 303
When: Friday, May 12, 2017 - 11:00 to 12:30