In this talk, I will present a theory of anytime, approximate inference that explains some of the most influential algorithms in probabilistic reasoning, yet is based on one of the older ideas in symbolic reasoning: relaxation. According to this theory, the fundamental notion of approximation is that of "relaxing" logical constraints (equalities in particular) for the purpose of decomposing a problem into smaller pieces that can be solved independently. A second fundamental notion is that of "compensation," which calls for imposing weaker, typically probabilistic, notions of equality to compensate for the relaxed equalities. The third fundamental notion is that of "recovery," in which some of the relaxed equalities are incrementally restored, based on an assessment of how much their recovery would improve the quality of the approximation. I will discuss how this theory subsumes loopy belief propagation, one of the most influential algorithms in probabilistic inference, along with some of its generalizations, and comment on its relevance to lifted probabilistic inference. I will also show how this theory formed the basis of an award-winning solver at the third UAI inference competition in 2010. Finally, I will discuss the relationship between this theory and current developments in symbolic reasoning -- in particular, how advances in knowledge compilation can impact the state of the art in approximate (and exact) probabilistic inference.
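The relax-and-compensate idea can be illustrated on a deliberately tiny example (a sketch only; the potentials f and g and the parallel update schedule below are invented for illustration, not the talk's exact formulation):

```python
# Exact model: one binary variable X with potentials f and g,
# so P(x) is proportional to f(x) * g(x).
# Relaxation: split X into copies X1 (seen by f) and X2 (seen by g) and
# drop the equality X1 == X2, so each piece can be handled independently.
# Compensation: unary factors u on X1 and v on X2, iterated toward a
# fixed point at which the two pieces agree on their marginals.

f = {0: 1.0, 1: 3.0}
g = {0: 4.0, 1: 2.0}

def normalize(p):
    z = sum(p.values())
    return {k: val / z for k, val in p.items()}

exact = normalize({x: f[x] * g[x] for x in (0, 1)})

u = {0: 1.0, 1: 1.0}  # u = v = 1 means pure relaxation (no compensation)
v = {0: 1.0, 1: 1.0}
for _ in range(10):
    p1 = normalize({x: f[x] * u[x] for x in (0, 1)})  # marginal of piece 1
    p2 = normalize({x: g[x] * v[x] for x in (0, 1)})  # marginal of piece 2
    # Parallel update: each compensation absorbs what the other piece sees.
    u, v = ({x: p2[x] / v[x] for x in (0, 1)},
            {x: p1[x] / u[x] for x in (0, 1)})
```

In this tree-structured toy the fixed point recovers the exact marginal (p1 and p2 both equal `exact`); in a loopy network, iterating the same kind of compensation updates is what yields belief-propagation-style approximations.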
Sequence census methods such as ChIP-seq have begun to produce an unprecedented amount of genome-anchored data. Numerous techniques exist to analyze the results of single assays, but genomics research has lacked an integrative method to identify patterns from multiple experiments simultaneously, while taking full advantage of the high-resolution data now available. We have developed such a method, which employs a dynamic Bayesian network to discover joint patterns across different experiment types. We have applied this method to ENCODE chromatin data for the human chronic myeloid leukemia cell line K562. In an unsupervised fashion, we have identified patterns associated with transcription initiation and elongation, as well as transcriptional repression, finding the locations of genes without using DNA sequence, mRNA or multispecies conservation data. We have also successfully performed a semi-supervised identification of enhancer regions across the genome using a curated set of seed enhancers. In both the unsupervised and semi-supervised cases, the method identifies patterns that elucidate the relationships among open chromatin, various transcription factors and histone modifications.
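As a deliberate caricature of the joint-labeling idea (the actual method is a richer dynamic Bayesian network; the states, tracks, and probabilities here are invented for illustration), a two-state hidden Markov model can label genome positions from two binarized assay tracks at once via Viterbi decoding:

```python
# Two hidden labels and two observed tracks, e.g. (ChIP signal, open chromatin).
STATES = ["active", "quiet"]
START = {"active": 0.5, "quiet": 0.5}
TRANS = {"active": {"active": 0.9, "quiet": 0.1},
         "quiet": {"active": 0.1, "quiet": 0.9}}
# Per-track Bernoulli emission parameters: P(signal present | state).
EMIT = {"active": (0.9, 0.8), "quiet": (0.1, 0.2)}

def emit_prob(state, obs):
    # Joint emission over the two tracks, assumed independent given the state.
    prob = 1.0
    for p, x in zip(EMIT[state], obs):
        prob *= p if x == 1 else 1.0 - p
    return prob

def viterbi(observations):
    delta = {s: START[s] * emit_prob(s, observations[0]) for s in STATES}
    back = []
    for obs in observations[1:]:
        new_delta, pointers = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda r: delta[r] * TRANS[r][s])
            new_delta[s] = delta[prev] * TRANS[prev][s] * emit_prob(s, obs)
            pointers[s] = prev
        delta, back = new_delta, back + [pointers]
    # Backtrack from the best final state.
    state = max(STATES, key=lambda s: delta[s])
    path = [state]
    for pointers in reversed(back):
        state = pointers[state]
        path.append(state)
    return path[::-1]

obs = [(1, 1), (1, 1), (0, 0), (0, 0), (1, 1)]
labels = viterbi(obs)  # ["active", "active", "quiet", "quiet", "active"]
```

The segmentation emerges from both tracks jointly: neither track alone determines a label, and the transition probabilities smooth the labeling along the genome.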
In this talk, I'll discuss a journey taken by my research interests leading from dynamic graphical models to image segmentation. I'll start by discussing dynamic graphical models (DGMs) and their efficient inference. Performing inference in such models entails identifying a vertex cut that determines the boundary of a critical periodic graphical component. This component is used when the model is expanded to an arbitrary length, and it also determines a local junction tree segment used as part of this expansion. Depending on the boundary of this component, inference costs can change by orders of magnitude. There are many ways of judging the quality of the component without performing inference itself, including the standard vertex cut criterion. Most interesting, however, is the case where the DGM may be sparse and the quality is determined by an objective that takes this sparsity into account. It turns out that such an objective is submodular. With this in mind, and viewing the vertex cut as a standard graph cut, a new problem naturally arises: finding a minimum cut in a graph where a cut is judged not by the sum of its edge weights but by a submodular function of the cut edge set, a problem we call cooperative cut. The remainder of the talk will give a brief background on submodularity and cooperative cut, cover various hardness results and approximate solutions for this problem, and show how it can be applied to image segmentation. In this latter application, results show significant improvements over standard (node-submodular) graph cut approaches, in particular for images that exhibit a "shrinking bias" problem and for images with smooth illumination gradients. Joint work with Stefanie Jegelka, with earlier contributions from Chris Bartels.
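A brute-force toy can illustrate how a submodular cut cost differs from the usual modular one (the graph, edge types, weights, and the per-type square-root cost below are all invented for illustration). Because edges of the same type are discounted jointly, a long but homogeneous boundary can be cheaper than a short mixed one, which is the intuition behind mitigating shrinking bias:

```python
from itertools import combinations
from math import sqrt

# An s-t graph with typed, weighted undirected edges (all values invented).
edges = {
    ('s', 'a'): ('homog', 1.0), ('s', 'b'): ('homog', 1.0), ('s', 'c'): ('homog', 1.0),
    ('a', 't'): ('ta', 0.6), ('b', 't'): ('tb', 0.6), ('c', 't'): ('tc', 0.6),
}
inner = ['a', 'b', 'c']  # nodes other than the terminals s and t

def cut_edges(side):
    # Edges crossing the partition (side vs. complement).
    return [e for e in edges if (e[0] in side) != (e[1] in side)]

def modular(cut):
    # Standard graph cut: sum of edge weights.
    return sum(edges[e][1] for e in cut)

def cooperative(cut):
    # Submodular cost: square root of total cut weight per edge type,
    # so many edges of the same type are jointly discounted.
    by_type = {}
    for e in cut:
        t, w = edges[e]
        by_type[t] = by_type.get(t, 0.0) + w
    return sum(sqrt(v) for v in by_type.values())

def best(cost):
    # Enumerate every s-side of an s-t partition (feasible only for toys).
    sides = [frozenset(('s',) + c)
             for r in range(len(inner) + 1)
             for c in combinations(inner, r)]
    return min(sides, key=lambda s: cost(cut_edges(s)))
```

Here the modular objective picks the short cut of three cheap mixed-type edges next to t (`best(modular)` is `{s, a, b, c}`), while the cooperative objective prefers the longer but homogeneous boundary of three same-type edges next to s (`best(cooperative)` is `{s}`). General cooperative cut is NP-hard, so the talk's approximate solvers replace this enumeration.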