Title: Disentangling Variational Autoencoders for Nonlinear Group Factor Analysis
Advisors: Emily Fox & Kevin Jamieson
Abstract: Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data, but less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping, and it is often of interest to understand the systems of interaction among these groups. Latent factor models (LFMs) are an attractive approach, but traditional LFMs are limited by their assumption of a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. Building on oi-VAE, we develop Faceback, a deep generative model that handles multi-modal data, is robust to missing observations, and uses a prior that encourages disentanglement between the groups and the latent dimensions. We demonstrate that oi-VAE and Faceback yield meaningful notions of interpretability in the analysis of motion capture data, MEG recordings, and images of faces captured from varying poses and perspectives.