Machine Learning Seminars

Tuesday, April 25, 2017 - 12:00

Talk title TBD

Speaker: Noah Smith
Location: CSE 305

Tuesday, March 7, 2017 - 12:00

Talk title TBD

Speaker: Daniela Witten
Location: CSE 305

Tuesday, February 21, 2017 - 12:00

Talk title TBD

Speaker: Sham Kakade
Location: CSE 305

Tuesday, February 7, 2017 - 12:00

Talk title TBD

Speaker: Roozbeh Mottaghi (AI2)
Location: CSE 305

Tuesday, January 24, 2017 - 12:00

Interactive and Interpretable Machine Learning Models for Human Machine Collaboration

Speaker: Been Kim (AI2)
Location: CSE 305
I envision a system that enables successful collaboration between humans and machine learning models by harnessing their relative strengths to accomplish what neither can do alone. Machine learning techniques and humans have complementary skills: machine learning techniques are good at computation on data at the lowest level of granularity, whereas people are better at abstracting knowledge from their experience and transferring that knowledge across domains. The goal of my research is to develop a framework for human-in-the-loop machine learning that enables people to interact effectively with machine learning models to make better decisions using large datasets, without requiring in-depth knowledge of machine learning techniques. In this talk, I present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. BCM learns prototypes, the “quintessential” observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes, and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in characterizing the prototypes. The prototype-and-subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human-subject experiments verify statistically significant improvements in participants’ understanding when using explanations produced by BCM, compared with those given by prior art. I demonstrate the application of this model for an educational domain in which teachers cluster program
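BCM itself learns prototypes and subspaces by joint Bayesian inference, which is beyond a few lines. As a rough, non-Bayesian illustration of the prototype-and-subspace idea only, one can summarize each cluster by its most central member and by the features on which the cluster varies least; the function name and the variance heuristic below are mine, not BCM's:

```python
import numpy as np

def prototypes_and_subspaces(X, labels, n_features=1):
    """For each cluster, return (prototype, subspace): the member closest
    to the cluster mean (a stand-in for BCM's learned prototype) and the
    lowest-variance features (a stand-in for its learned subspace)."""
    out = {}
    for k in np.unique(labels):
        members = X[labels == k]
        center = members.mean(axis=0)
        # prototype: the actual observation nearest the cluster center
        proto = members[np.argmin(((members - center) ** 2).sum(axis=1))]
        # subspace: features on which this cluster is most tightly concentrated
        subspace = sorted(np.argsort(members.var(axis=0))[:n_features].tolist())
        out[int(k)] = (proto, subspace)
    return out

# Two clusters separated along feature 0; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [0.1, 1.0], (20, 2)),
               rng.normal([5, 0], [0.1, 1.0], (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
summary = prototypes_and_subspaces(X, labels)
```

Here both clusters' subspaces pick out feature 0, the feature that actually characterizes them, which is the kind of explanation BCM surfaces to users.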

Friday, January 13, 2017 - 12:00

Probabilistic programming: representations, inference, and applications to natural language

Speaker: Noah Goodman (Stanford)
Location: EEB 037
Probabilistic programming languages (PPLs) are formal languages for probabilistic modeling that describe complex distributions via programs with random choices. As a description language, PPLs are a convenient and powerful way to construct models. I will show several examples drawn from cognitive science, focusing on language understanding: I will use a PPL to construct a model of language understanding as social reasoning; this model captures aspects of vague and figurative language. PPL implementations make it possible to separate inference algorithms from model representation by providing universal inference engines. Many techniques have been explored for inference, but they all struggle with efficiency for different model classes. Instead, we have been exploring systems that learn efficient inference strategies. I will discuss “deep amortized inference for PPLs”, a system that optimizes deep neural network “guide programs” to capture the implicit posterior distribution of a probabilistic program.
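The two core ideas here, a generative program built from random primitives and a universal inference engine that is separate from the model, can be sketched in a few lines. Rejection sampling is the simplest such engine; the function names are mine, and real PPL implementations use far more efficient algorithms:

```python
import random

def flip(p=0.5):
    # an elementary random choice -- the primitive a PPL program builds on
    return random.random() < p

def model():
    # generative program: two coin flips, with evidence "at least one heads"
    a, b = flip(), flip()
    return {"a": a, "evidence": a or b}

def rejection_query(model, n=20000):
    # a universal (if slow) inference engine: run the program many times,
    # keep only runs consistent with the evidence, average the query
    kept = [t["a"] for t in (model() for _ in range(n)) if t["evidence"]]
    return sum(kept) / len(kept)

random.seed(0)
post = rejection_query(model)  # exact posterior P(a | a or b) is 2/3
```

Note that `rejection_query` never inspects the model's internals; that separation is what lets PPLs swap in better engines, including learned ones like the guide programs the talk describes.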

Tuesday, November 29, 2016 - 12:00

Computational and Statistical Issues in Genomics

Speaker: David Heckerman (MSR)
Location: CSE 305
In the last decade, genomics has seen an explosion in the production of data due to the decreasing costs and processing times associated with DNA sequencing. I will discuss how the cloud as well as techniques from mathematics and computer science help take advantage of this big data. My discussion will include linear mixed models, a popular model for association studies. I will show how these models work well and, if there is time, talk about why they do.
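To make the linear mixed model concrete: an association study models the phenotype as a fixed SNP effect plus a random genetic effect whose covariance follows a kinship matrix, y = Xβ + g + ε with g ~ N(0, σ_g²K). A minimal sketch on synthetic data, with the variance components taken as known (real tools estimate them, e.g. by REML); the kinship structure and effect sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Kinship K: two "families" of related individuals (block structure).
K = np.kron(np.eye(2), np.full((n // 2, n // 2), 0.5)) + 0.5 * np.eye(n)
snp = rng.binomial(2, 0.3, n).astype(float)   # genotype: 0/1/2 allele copies
g = rng.multivariate_normal(np.zeros(n), K)   # correlated genetic background
y = 1.0 * snp + g + rng.normal(0, 0.5, n)     # true SNP effect = 1.0

# Generalized least squares under V = K + sigma_e^2 * I: the mixed model's
# covariance both corrects for relatedness and sharpens the SNP estimate.
V = K + 0.25 * np.eye(n)
Vi = np.linalg.inv(V)
X = np.column_stack([np.ones(n), snp])        # intercept + SNP
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

Despite the strong family correlation in `y`, the GLS estimate `beta[1]` recovers the SNP effect; ordinary least squares would ignore V and give inflated test statistics in a real study.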

Tuesday, November 15, 2016 - 12:00

Sparse Additive Modeling

Speaker: Noah Simon (Statistics)
Location: CSE 305
Our ability to collect data has exploded over recent years. Across science, we now collect thousands of measurements on each person. One of the most common tasks of interest is to use these measurements to predict some response. Prediction methods must balance three objectives: predictive performance, computational tractability, and, in many applications, interpretability. In this talk we will discuss a broad class of models which effectively balance these objectives: sparse additive models induced by combining a structural semi-norm and a sparsity penalty. These are more flexible than the standard linear penalized model, but maintain its interpretability and computational tractability. We will show when these penalties can and cannot be combined to induce the desired structure and sparsity. We will give an efficient algorithm for fitting a wide class of these models. We will also discuss a particular type of structural penalty (hierarchical sparsity) which has additional nice behaviour. We will consider an application: predicting disease phenotype in inflammatory bowel disease from gene expression measurements. Time permitting, we will additionally touch on the theoretical properties of these sparse additive estimators.
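The ingredient that produces feature-level sparsity in such models is a group-wise penalty whose proximal operator can zero out a feature's entire block of basis coefficients at once, dropping that feature's additive component from the model. A toy proximal-gradient sketch; the quadratic basis, penalty weight, and data are illustrative assumptions, not the estimator from the talk:

```python
import numpy as np

def group_soft_threshold(z, t):
    """Proximal operator of t * ||z||_2: shrink the whole coefficient
    block toward zero, zeroing it entirely when its norm falls below t."""
    norm = np.linalg.norm(z)
    return np.zeros_like(z) if norm <= t else (1 - t / norm) * z

rng = np.random.default_rng(0)
n = 300
X = rng.uniform(-1, 1, (n, 2))
y = X[:, 0] ** 2                       # only feature 0 affects the response
# Per-feature basis blocks [x, x^2] -- a crude stand-in for a spline basis.
B = np.column_stack([X[:, 0], X[:, 0] ** 2, X[:, 1], X[:, 1] ** 2])
beta = np.zeros(4)
lam = 0.1
step = n / np.linalg.norm(B, 2) ** 2   # 1 / Lipschitz constant of the loss
for _ in range(500):
    z = beta - step * (B.T @ (B @ beta - y) / n)   # gradient step on the fit
    beta = np.concatenate([group_soft_threshold(z[:2], step * lam),
                           group_soft_threshold(z[2:], step * lam)])
```

At convergence, feature 1's block is exactly zero while feature 0's survives (shrunk), which is what makes the fitted additive model interpretable: irrelevant features are removed wholesale rather than left with small coefficients.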

Tuesday, November 1, 2016 - 12:00

Develop the Curing AI for Precision Medicine

Speaker: Hoifung Poon (MSR)
Location: CSE 305
Medicine today is imprecise. For the top 20 prescription drugs in the U.S., 80% of patients are non-responders. Recent disruptions in sensor technology have enabled precise categorization of diseases and treatment effects, with the $1000 human genome being a prime example. However, progress in precision medicine is difficult, as large-scale knowledge and reasoning become the ultimate bottlenecks in deciphering cancer and other complex diseases. Today, it takes hours for a molecular tumor board of many specialists to review one patient’s omics data and make treatment decisions. With 1.6 million new cancer cases and 600,000 deaths in the U.S. each year, this is clearly not scalable. In this talk, I'll present Project Hanover and our latest efforts in advancing machine reading and predictive analytics for personalized cancer treatment and chronic disease management.

Tuesday, October 18, 2016 - 12:00

Machine learning analysis of big, heterogeneous genomic data

Speaker: Bill Noble
Location: CSE 305
Over the past decade, the field of genomics has been driven by technological advances, as the throughput and diversity of genomics assays have increased dramatically. To help biologists make sense of the resulting big, heterogeneous data sets, we have developed an unsupervised learning strategy that uses a dynamic Bayesian network in which "time" corresponds to genomic position. I will describe our methodology as well as our recent efforts to improve the utility and interpretability of the resulting annotation. I will also discuss a tensor factorization method that we have successfully employed to impute missing genomics data, allowing us to accurately predict the outcome of genomics experiments that have not yet been performed.
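As a rough illustration of the imputation idea, recovering unobserved experiments from low-rank structure fit to the observed ones, here is a two-way (matrix rather than tensor) factorization on synthetic data; the shapes, rank, and optimizer are my assumptions, not the speaker's method:

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-2 "experiment" matrix (think cell types x assays), 30% of it hidden.
U_true, V_true = rng.normal(size=(30, 2)), rng.normal(size=(40, 2))
M = U_true @ V_true.T
mask = rng.random(M.shape) > 0.3       # True where the entry was observed

# Gradient descent on the observed entries only:
#   minimize || mask * (M - U V^T) ||_F^2  over the factors U, V
U = rng.normal(scale=0.1, size=(30, 2))
V = rng.normal(scale=0.1, size=(40, 2))
lr = 0.01
for _ in range(3000):
    R = mask * (U @ V.T - M)           # residual, zeroed on hidden entries
    U, V = U - lr * R @ V, V - lr * R.T @ U

pred = U @ V.T                          # fills in every entry, observed or not
err = np.abs(pred - M)[~mask].mean()    # error on the held-out entries only
```

Because the factors are fit only to observed entries, the reconstruction `pred` at the hidden positions is a genuine prediction of experiments never "performed", which is the role the factorization plays in the genomics setting.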
