Over the past few years, a great deal of research has focused on tracking specific models with a single depth camera. This project aims to provide a unified framework for tracking any articulated model, given its geometric and kinematic structure. Our approach uses dense input data, computing an error term on every pixel, which we process in real time by leveraging GPGPU programming and an efficient representation of model geometry based on signed distance functions. The approach has proven successful on a wide variety of models, including human hands, human bodies, robot arms, and articulated objects.
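To make the dense, per-pixel error term concrete, here is a minimal CUDA sketch of the core idea: each depth pixel is back-projected to a 3D point and evaluated against a precomputed signed distance function, whose value directly gives that pixel's distance-to-surface residual. All names here (`SdfGrid`, `sampleSdf`, `perPixelSdfError`) are hypothetical illustrations, not the project's actual API, and a real tracker would additionally apply the articulated model's joint transforms before the SDF lookup.

```cuda
#include <cuda_runtime.h>

// Hypothetical dense SDF grid in device memory (not the project's actual API).
struct SdfGrid {
    const float* values;  // distance samples, size dimX * dimY * dimZ
    int dimX, dimY, dimZ;
    float3 origin;        // world-space position of voxel (0, 0, 0)
    float voxelSize;      // edge length of one voxel, in meters
};

__device__ inline float voxel(const SdfGrid& g, int x, int y, int z) {
    return g.values[(z * g.dimY + y) * g.dimX + x];
}

// Trilinearly interpolated SDF lookup; out-of-grid points read as "far away".
__device__ float sampleSdf(const SdfGrid& g, float3 p) {
    float fx = (p.x - g.origin.x) / g.voxelSize;
    float fy = (p.y - g.origin.y) / g.voxelSize;
    float fz = (p.z - g.origin.z) / g.voxelSize;
    int x0 = (int)floorf(fx), y0 = (int)floorf(fy), z0 = (int)floorf(fz);
    if (x0 < 0 || y0 < 0 || z0 < 0 ||
        x0 + 1 >= g.dimX || y0 + 1 >= g.dimY || z0 + 1 >= g.dimZ)
        return 1e6f;
    float tx = fx - x0, ty = fy - y0, tz = fz - z0;
    float c00 = voxel(g, x0, y0,     z0    ) * (1 - tx) + voxel(g, x0 + 1, y0,     z0    ) * tx;
    float c10 = voxel(g, x0, y0 + 1, z0    ) * (1 - tx) + voxel(g, x0 + 1, y0 + 1, z0    ) * tx;
    float c01 = voxel(g, x0, y0,     z0 + 1) * (1 - tx) + voxel(g, x0 + 1, y0,     z0 + 1) * tx;
    float c11 = voxel(g, x0, y0 + 1, z0 + 1) * (1 - tx) + voxel(g, x0 + 1, y0 + 1, z0 + 1) * tx;
    float c0 = c00 * (1 - ty) + c10 * ty;
    float c1 = c01 * (1 - ty) + c11 * ty;
    return c0 * (1 - tz) + c1 * tz;
}

// One thread per depth pixel: back-project through the pinhole intrinsics,
// then read the SDF. Because the SDF value *is* the point-to-surface
// distance, the per-pixel error needs no explicit correspondence search.
__global__ void perPixelSdfError(const float* depth, int width, int height,
                                 float fx, float fy, float cx, float cy,
                                 SdfGrid grid, float* errorOut) {
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= width || v >= height) return;
    int idx = v * width + u;
    float d = depth[idx];
    if (d <= 0.f) { errorOut[idx] = 0.f; return; }  // invalid depth reading
    // Back-project pixel (u, v) with depth d into a camera-frame 3D point.
    float3 p = make_float3((u - cx) * d / fx, (v - cy) * d / fy, d);
    // This sketch assumes p is already in the SDF's model frame; a real
    // tracker would transform it through the current pose and joint angles.
    float dist = sampleSdf(grid, p);
    errorOut[idx] = dist * dist;  // squared distance residual for this pixel
}
```

Launched over the full depth image, a kernel of this shape keeps every pixel's residual computation independent, which is what makes the dense error term cheap enough for real-time optimization on a GPU.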

You can download the code on GitHub.