Machine Learning Seminars

Title: Streaming Optimization Methods: Stochastic Gradient Descent, Momentum, and Newton's Method
Speaker: Sham Kakade
When: Tuesday, February 21, 2017 - 12:00
Location: CSE 305
Theory and practice differ widely for even our most basic optimization algorithms, such as stochastic gradient descent. For example, for the practically relevant problem of setting parameters (such as learning rates), principled methods are almost never used in practice. Can we get a better handle on the true behavior of this widely used algorithm (as opposed to its worst-case analysis)? Can we use these insights to provide better algorithms?

This talk by Sham Kakade will survey recent work on this question. In particular, we seek to understand how fast we can parallelize these algorithms using simple averaging and mini-batching procedures. The talk will also discuss ongoing work on streaming methods for momentum (i.e., acceleration), for implementing second-order methods, and for non-convex optimization.
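
The averaging and mini-batching procedures in question can be illustrated with a minimal sketch (this is not the speaker's method; the problem, constants, and names below are illustrative assumptions): mini-batch SGD on a least-squares problem, with the tail iterates averaged to damp gradient noise.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch_size=8, epochs=50, seed=0):
    """Mini-batch SGD for least squares, returning the average of the
    tail iterates (averaging damps the noise that stochastic gradients
    introduce; larger batches are the basis of simple parallelization)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    avg, n_avg = np.zeros(d), 0
    for epoch in range(epochs):
        for _ in range(n // batch_size):
            idx = rng.integers(0, n, size=batch_size)
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
            w -= lr * grad
            if epoch >= epochs // 2:  # average only the tail iterates
                avg += w
                n_avg += 1
    return avg / n_avg

# recover w* = [1, -2] from noisy linear measurements
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=200)
w_hat = minibatch_sgd(X, y)
```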

 

Title: Physics-based Visual Reasoning
Speaker: Roozbeh Mottaghi (AI2)
When: Tuesday, February 7, 2017 - 12:00
Location: CSE 305
Despite recent progress, AI is still far from understanding the physics of the world, and there is a large gap between the abilities of humans and the state-of-the-art methods. In this talk, I will focus on physics-based scene understanding and visual reasoning, which is a crucial next step in computer vision and AI.

Bio: Roozbeh Mottaghi is a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). Prior to joining AI2, he was a postdoctoral researcher in the Computer Science Department at Stanford University. He obtained his PhD in Computer Science in 2013 from the University of California, Los Angeles. His research is mainly focused on computer vision and machine learning.

 

Title: Interactive and Interpretable Machine Learning Models for Human Machine Collaboration
Speaker: Been Kim (AI2)
When: Tuesday, January 24, 2017 - 12:00
Location: CSE 305
I envision a system that enables successful collaborations between humans and machine learning models by harnessing their relative strengths to accomplish what neither can do alone. Machine learning techniques and humans have skills that complement each other: machine learning techniques are good at computation on data at the lowest level of granularity, whereas people are better at abstracting knowledge from their experience and transferring that knowledge across domains. The goal of my research is to develop a framework for human-in-the-loop machine learning that enables people to interact effectively with machine learning models to make better decisions using large datasets, without requiring in-depth knowledge about machine learning techniques.
In this talk, I present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. BCM learns prototypes, the “quintessential” observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes, and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants’ understanding when using explanations produced by BCM, compared to those given by prior art. I demonstrate the application of this model to an educational domain in which teachers cluster programs.

Bio: Been Kim is a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). She is also affiliate faculty in the Department of Computer Science & Engineering at the University of Washington. Her research focuses on building interpretable machine learning models. The vision of her research is to make humans empowered by machine learning, not overwhelmed by it. She received her PhD from MIT. Prior to her PhD, she worked at MathWorks as a software engineer.

 

Title: Probabilistic programming: representations, inference, and applications to natural language
Speaker: Noah Goodman (Stanford)
When: Friday, January 13, 2017 - 12:00
Location: EEB 037
Probabilistic programming languages (PPLs) are formal languages for probabilistic modeling that describe complex distributions via programs with random choices. As description languages, PPLs are a convenient and powerful way to construct models. I will show several examples drawn from cognitive science, focusing on language understanding: I will use a PPL to construct a model of language understanding as social reasoning; this model captures aspects of vague and figurative language. PPL implementations make it possible to separate inference algorithms from model representation by providing universal inference engines. Many techniques have been explored for inference, but they all struggle with efficiency for different model classes. Instead, we have been exploring systems that learn efficient inference strategies. I will discuss “deep amortized inference for PPLs”, a system that optimizes deep neural network “guide programs” to capture the implicit posterior distribution of a probabilistic program.
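
The two PPL ideas mentioned above, programs with random choices as models and a universal inference engine, can be illustrated with a toy sketch (not drawn from the talk; the sprinkler model and all names are illustrative). Rejection sampling is the simplest universal engine: run the program many times and keep only the executions consistent with the evidence.

```python
import random

def model():
    """A probabilistic program: its random choices define a joint
    distribution over program executions."""
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.5
    wet = rain or sprinkler
    return rain, wet

def infer(n=100_000):
    """A universal (if inefficient) inference engine: rejection
    sampling keeps only executions consistent with the evidence
    that the grass is wet, estimating P(rain | wet)."""
    kept = [rain for rain, wet in (model() for _ in range(n)) if wet]
    return sum(kept) / len(kept)

random.seed(0)
p = infer()  # exact answer: 0.3 / (1 - 0.7 * 0.5) ≈ 0.46
```

The inefficiency of this engine for all but tiny models is precisely what motivates the learned "guide programs" the talk describes.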

Bio: Noah Goodman is Associate Professor of Psychology, Computer Science (by courtesy), and Linguistics (by courtesy) at Stanford University. He studies the computational basis of natural and artificial intelligence, merging behavioral experiments with formal methods from statistics and programming languages. His research topics include language understanding, social reasoning, concept learning, probabilistic programming languages, and applications. Professor Goodman received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as a postdoc and research scientist at MIT. In 2010 he moved to Stanford, where he runs the Computation and Cognition Lab. Professor Goodman has published more than 150 papers in fields including psychology, linguistics, computer science, and mathematics. His work has been recognized by the James S. McDonnell Foundation Scholar Award, the Roger N. Shepard Distinguished Visiting Scholar Award, the Alfred P. Sloan Research Fellowship in Neuroscience, and six computational modeling prizes from the Cognitive Science Society. He is a Fellow of Uber AI Labs, Academic Co-Founder and Advisor of Gamalon Labs, and an advisor to several other start-ups.

 

Title: Computational and Statistical Issues in Genomics
Speaker: David Heckerman (MSR)
When: Tuesday, November 29, 2016 - 12:00
Location: CSE 305
In the last decade, genomics has seen an explosion in the production of data due to the decreasing costs and processing times associated with DNA sequencing. I will discuss how the cloud as well as techniques from mathematics and computer science help take advantage of this big data. My discussion will include linear mixed models, a popular model for association studies. I will show how these models work well and, if there is time, talk about why they do.
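
The linear mixed models mentioned above can be sketched minimally (illustrative only; variance components are taken as known here, whereas in practice they are estimated, e.g. by REML, and the kinship matrix K comes from genotype data):

```python
import numpy as np

def lmm_gls(X, y, K, sg2, se2):
    """Fixed-effect estimate in the linear mixed model
        y = X b + g + e,  g ~ N(0, sg2 * K),  e ~ N(0, se2 * I),
    where K is a genetic similarity (kinship) matrix. Given the
    variance components, the GLS solution is
        b = (X' V^-1 X)^-1 X' V^-1 y,  with V = sg2 * K + se2 * I."""
    V = sg2 * K + se2 * np.eye(len(y))
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# sanity check: with K = I, the estimate collapses to ordinary least squares
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([0.5, 1.0]) + rng.normal(size=50)
b = lmm_gls(X, y, np.eye(50), sg2=0.3, se2=0.7)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # identical when K = I
```

The random effect g is what lets these models correct for relatedness and population structure in association studies, which is why they work well there.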

 

Title: Sparse Additive Modeling
Speaker: Noah Simon (Statistics)
When: Tuesday, November 15, 2016 - 12:00
Location: CSE 305
Our ability to collect data has exploded over recent years. Across science, we now collect thousands of measurements on each person. One of the most common tasks of interest is to use these measurements to predict some response. Prediction methods must balance three objectives: predictive performance, computational tractability, and, in many applications, interpretability. In this talk we will discuss a broad class of models which effectively balance these objectives: sparse additive models induced by combining a structural semi-norm penalty with a sparsity penalty. These are more flexible than the standard penalized linear model, but maintain its interpretability and computational tractability. We will show when these penalties can and cannot be combined to induce the desired structure and sparsity. We will give an efficient algorithm for fitting a wide class of these models. We will also discuss a particular type of structural penalty (hierarchical sparsity) which has additional nice behavior. We will consider an application: predicting disease phenotype in inflammatory bowel disease from gene expression measurements. Time permitting, we will additionally touch on the theoretical properties of these sparse additive estimators.
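
As a rough illustration of how a sparsity penalty selects whole features in an additive model (this is a generic group-lasso sketch, not the speaker's estimator; the basis expansions, penalty choice, and constants are assumptions):

```python
import numpy as np

def group_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2: shrinks a whole coefficient
    block, zeroing it entirely when its norm falls below t. This is
    what removes an entire feature from the additive model."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def fit_sparse_additive(blocks, y, lam, lr=0.3, iters=3000):
    """Proximal gradient descent (ISTA) for y ≈ sum_j B_j c_j with the
    penalty lam * sum_j ||c_j||_2, where B_j is feature j's basis
    expansion; sparsity therefore acts at the level of whole features."""
    n = len(y)
    coefs = [np.zeros(B.shape[1]) for B in blocks]
    for _ in range(iters):
        resid = y - sum(B @ c for B, c in zip(blocks, coefs))
        for j, B in enumerate(blocks):
            grad = -B.T @ resid / n
            coefs[j] = group_soft_threshold(coefs[j] - lr * grad, lr * lam)
    return coefs

# toy example: feature 0 is predictive, feature 1 is pure noise
rng = np.random.default_rng(0)
B0, B1 = rng.normal(size=(200, 3)), rng.normal(size=(200, 3))
y = B0 @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)
c0, c1 = fit_sparse_additive([B0, B1], y, lam=0.1)
```

The irrelevant feature's entire coefficient block is driven exactly to zero, which is the interpretability payoff of feature-level sparsity.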

 

Title: Develop the Curing AI for Precision Medicine
Speaker: Hoifung Poon (MSR)
When: Tuesday, November 1, 2016 - 12:00
Location: CSE 305
Medicine today is imprecise. For the top 20 prescription drugs in the U.S., 80% of patients are non-responders. Recent disruptions in sensor technology have enabled precise categorization of diseases and treatment effects, with the $1000 human genome being a prime example. However, progress in precision medicine is difficult, as large-scale knowledge and reasoning become the ultimate bottlenecks in deciphering cancer and other complex diseases. Today, it takes hours for a molecular tumor board of many specialists to review one patient’s omics data and make treatment decisions. With 1.6 million new cancer cases and 600 thousand deaths in the U.S. each year, this is clearly not scalable. In this talk, I'll present Project Hanover and our latest efforts in advancing machine reading and predictive analytics for personalized cancer treatment and chronic disease management.

 

Title: Machine learning analysis of big, heterogeneous genomic data
Speaker: Bill Noble
When: Tuesday, October 18, 2016 - 12:00
Location: CSE 305
The field of genomics over the past decade has been driven by technological advances, as the throughput and diversity of genomics assays have increased dramatically. To help biologists make sense of the resulting big, heterogeneous data sets, we have developed an unsupervised learning strategy that uses a dynamic Bayesian network in which "time" corresponds to genomic position. I will describe our methodology as well as our recent efforts to improve the utility and interpretability of the resulting annotation. I will also discuss a tensor factorization method that we have successfully employed to impute missing genomics data, allowing us to accurately predict the outcome of genomics experiments that have not yet been performed.
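
The imputation idea can be sketched in its simplest form, a rank-1 CP factorization fit to the observed entries by alternating least squares (illustrative only; the actual method handles higher ranks and far larger data, and all names here are assumptions):

```python
import numpy as np

def rank1_impute(T, mask, iters=200, seed=0):
    """Fit T[i,j,k] ≈ a_i * b_j * c_k to the observed entries only
    (mask = 1 where observed) by alternating least squares; each
    update solves the weighted least-squares problem for one factor
    exactly, so the fit improves monotonically. Missing values are
    then read off the low-rank reconstruction."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    a, b, c = (0.5 + rng.random(n) for n in (I, J, K))
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', mask * T, b, c) / np.einsum('ijk,j,k->i', mask, b**2, c**2)
        b = np.einsum('ijk,i,k->j', mask * T, a, c) / np.einsum('ijk,i,k->j', mask, a**2, c**2)
        c = np.einsum('ijk,i,j->k', mask * T, a, b) / np.einsum('ijk,i,j->k', mask, a**2, b**2)
    return np.einsum('i,j,k->ijk', a, b, c)

# toy example: a noiseless rank-1 tensor with ~30% of entries missing
rng = np.random.default_rng(1)
u, v, w = (0.5 + rng.random(6) for _ in range(3))
T_true = np.einsum('i,j,k->ijk', u, v, w)
mask = (rng.random(T_true.shape) > 0.3).astype(float)
T_hat = rank1_impute(T_true * mask, mask)
```

In the genomics setting the three modes might correspond to cell types, assay types, and genomic positions, so the reconstruction predicts experiments that were never run.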

 

Title: Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods
Speaker: Hanie Sedghi (AI2)
When: Tuesday, October 4, 2016 - 12:00
Location: CSE 305
Neural networks have revolutionized performance across multiple domains such as computer vision and speech recognition. They provide a versatile tool for approximating a wide class of functions, but a theoretical understanding is mostly lacking. Training a neural network is a highly non-convex problem, and backpropagation can get stuck in spurious local optima. We present a computationally efficient method for training neural networks that also has guaranteed risk bounds. It is based on tensor decomposition, which is guaranteed to converge to the globally optimal solution under mild conditions. Moreover, we extend this result to the more challenging task of training Recurrent Neural Networks.
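
A basic primitive behind such tensor-decomposition guarantees is the tensor power method, sketched below for a clean, orthogonally decomposable symmetric tensor (illustrative only; the talk's method builds moment tensors from data and whitens them first, none of which is shown here):

```python
import numpy as np

def tensor_power_method(T, n_components, iters=100, seed=0):
    """Tensor power iteration with deflation: recovers the components
    of an orthogonally decomposable symmetric tensor
        T = sum_r lam_r * a_r (x) a_r (x) a_r.
    Each run repeats v <- T(I, v, v) / ||T(I, v, v)|| until v locks
    onto one component, then subtracts that component and repeats."""
    rng = np.random.default_rng(seed)
    weights, comps = [], []
    for _ in range(n_components):
        v = rng.normal(size=T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            v = np.einsum('ijk,j,k->i', T, v, v)  # v <- T(I, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        weights.append(lam)
        comps.append(v)
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate
    return np.array(weights), np.array(comps)

# recover two planted components (along e1 and e2) of a clean tensor
T = np.zeros((4, 4, 4))
T[0, 0, 0] = 3.0  # component e1 with weight 3
T[1, 1, 1] = 1.0  # component e2 with weight 1
w_found, v_found = tensor_power_method(T, n_components=2)
```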

 

Title: Semantic Parsing to Probabilistic Programs for Situated Question Answering
Speaker: Jayant Krishnamurthy (AI2)
When: Tuesday, May 31, 2016 - 12:00
Location: CSE 305
Existing models for situated question answering make strong independence assumptions that negatively impact their accuracy. These assumptions, while empirically false, are necessary to facilitate inference because the number of joint question/environment interpretations is extremely large, typically superexponential in the number of objects in the environment. We present Parsing to Probabilistic Programs (P3), a novel situated question answering model that embraces approximate inference to eliminate these independence assumptions and enable the use of arbitrary global features of the question/environment interpretation. Our key insight is to treat semantic parses as probabilistic programs that are executed nondeterministically, and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly released data set of 5000 diagram questions from a science domain, finding that our approach outperforms several competitive baselines.
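
The key insight, parses as nondeterministically executed programs, can be illustrated with a toy sketch (entirely hypothetical; P3 additionally learns the parser and scores executions with global features, none of which appears here):

```python
import random

def parse_program(env):
    """A 'semantic parse' as a program over the environment: answering
    'is the circle large?' requires a nondeterministic choice whenever
    the environment makes the referent ambiguous."""
    candidates = [obj for obj in env if obj['shape'] == 'circle']
    chosen = random.choice(candidates)   # environmental uncertainty
    return chosen['size'] == 'large'

def marginal_answer(env, n=10_000):
    """Marginalize over possible executions by sampling, yielding a
    distribution over answers rather than one hard interpretation."""
    return sum(parse_program(env) for _ in range(n)) / n

env = [{'shape': 'circle', 'size': 'large'},
       {'shape': 'circle', 'size': 'small'},
       {'shape': 'square', 'size': 'large'}]
random.seed(0)
p_yes = marginal_answer(env)  # two candidate circles, one large: ≈ 0.5
```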

 
