Machine Learning Seminars

Title: Probabilistic programming: representations, inference, and applications to natural language
Speaker: Noah Goodman (Stanford)
When: Friday, January 13, 2017 - 12:00
Location: EEB 037
Probabilistic programming languages (PPLs) are formal languages for probabilistic modeling that describe complex distributions via programs with random choices. As description languages, PPLs are a convenient and powerful way to construct models. I will show several examples drawn from cognitive science, focusing on language understanding: I will use a PPL to construct a model of language understanding as social reasoning; this model captures aspects of vague and figurative language. PPL implementations make it possible to separate inference algorithms from model representation by providing universal inference engines. Many techniques have been explored for inference, but they all struggle with efficiency for different model classes. Instead, we have been exploring systems that learn efficient inference strategies. I will discuss “deep amortized inference for PPLs”, a system that optimizes deep neural network “guide programs” to capture the implicit posterior distribution of a probabilistic program.
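
To make "programs with random choices" concrete, here is a minimal sketch in plain Python rather than any particular PPL (all names are illustrative): the model is an ordinary function containing random choices, and a universal inference engine, here simple rejection sampling, conditions it on an observation without knowing anything about the model's internals.

```python
import random

def flip(p=0.5):
    """A primitive random choice: True with probability p."""
    return random.random() < p

def model():
    """A probabilistic program: two coins and their conjunction."""
    a = flip()
    b = flip()
    return {"a": a, "b": b, "both": a and b}

def rejection_infer(model, condition, num_samples=10000):
    """A universal inference engine: run the program repeatedly and
    keep only the executions consistent with the condition."""
    return [s for s in (model() for _ in range(num_samples))
            if condition(s)]

# Posterior probability that coin a is heads, given both are not heads.
samples = rejection_infer(model, lambda s: not s["both"])
print(sum(s["a"] for s in samples) / len(samples))  # ~1/3
```

Amortized inference replaces this brute-force sampler with a learned proposal (the "guide program"), trading up-front training for fast inference at query time.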

Bio: Noah Goodman is Associate Professor of Psychology, Computer Science (by courtesy), and Linguistics (by courtesy) at Stanford University. He studies the computational basis of natural and artificial intelligence, merging behavioral experiments with formal methods from statistics and programming languages. His research topics include language understanding, social reasoning, concept learning, probabilistic programming languages, and applications. Professor Goodman received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as a postdoc and research scientist at MIT. In 2010 he moved to Stanford, where he runs the Computation and Cognition Lab. Professor Goodman has published more than 150 papers in fields including psychology, linguistics, computer science, and mathematics. His work has been recognized by the James S. McDonnell Foundation Scholar Award, the Roger N. Shepard Distinguished Visiting Scholar Award, the Alfred P. Sloan Research Fellowship in Neuroscience, and six computational modeling prizes from the Cognitive Science Society. He is a Fellow of Uber AI Labs, Academic Co-Founder and Advisor of Gamalon Labs, and an advisor to several other start-ups.

 

Title: Computational and Statistical Issues in Genomics
Speaker: David Heckerman (MSR)
When: Tuesday, November 29, 2016 - 12:00
Location: CSE 305
In the last decade, genomics has seen an explosion in the production of data due to the decreasing costs and processing times associated with DNA sequencing. I will discuss how the cloud, as well as techniques from mathematics and computer science, helps us take advantage of this big data. My discussion will include linear mixed models, a popular class of models for association studies. I will show how these models work well and, if there is time, talk about why they do.
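
For context, the linear mixed models mentioned above treat the phenotype as y ~ N(Xβ, σ_g²K + σ_e²I), where K is a genetic similarity (kinship) matrix, and association testing compares likelihoods with and without a candidate variant among the fixed effects in X. Below is a minimal numpy sketch of the log-likelihood (a generic textbook formulation, not the speaker's implementation):

```python
import numpy as np

def lmm_loglik(y, X, K, sigma_g2, sigma_e2):
    """Log-likelihood of y ~ N(X beta, sigma_g2 * K + sigma_e2 * I),
    with the fixed effects beta profiled out by generalized least squares."""
    n = len(y)
    V = sigma_g2 * K + sigma_e2 * np.eye(n)
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # GLS estimate
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ Vinv @ r)
```

In practice one maximizes this over the variance components for every variant tested, and a spectral decomposition of K replaces the cubic-time inverse; the sketch omits those speed-ups.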

 

Title: Sparse Additive Modeling
Speaker: Noah Simon (Statistics)
When: Tuesday, November 15, 2016 - 12:00
Location: CSE 305
Our ability to collect data has exploded over recent years. Across science, we now collect thousands of measurements on each person. One of the most common tasks of interest is to use these measurements to predict some response. Prediction methods must balance three objectives: predictive performance, computational tractability, and, in many applications, interpretability. In this talk we will discuss a broad class of models that effectively balance these objectives: sparse additive models induced by combining a structural semi-norm with a sparsity penalty. These are more flexible than the standard linear penalized model, but maintain its interpretability and computational tractability. We will show when these penalties can and cannot be combined to induce the desired structure and sparsity, and we will give an efficient algorithm for fitting a wide class of these models. We will also discuss a particular type of structural penalty (hierarchical sparsity) that has additional attractive properties. We will consider an application: predicting disease phenotype in inflammatory bowel disease from gene expression measurements. Time permitting, we will also touch on the theoretical properties of these sparse additive estimators.
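
As a rough illustration of how a sparsity penalty acts on additive components (a sketch under simplifying assumptions, not the speaker's algorithm; all names and hyperparameters are illustrative): expand each feature in a small basis and apply a group-lasso penalty so that entire component functions are zeroed out, fitting by proximal gradient descent.

```python
import numpy as np

def expand_feature(x, n_basis=4):
    """Polynomial basis for one feature (a stand-in for a spline basis);
    assumes standardized features to keep powers well conditioned."""
    return np.column_stack([x ** d for d in range(1, n_basis + 1)])

def group_prox(b, t):
    """Group soft-thresholding: shrinks a whole coefficient group to zero."""
    norm = np.linalg.norm(b)
    return np.zeros_like(b) if norm <= t else (1.0 - t / norm) * b

def fit_sparse_additive(X, y, n_basis=4, lam=0.1, iters=500):
    """Proximal gradient descent for squared loss + group-lasso penalty;
    a group shrunk to zero drops feature j from the model entirely."""
    n, p = X.shape
    B = [expand_feature(X[:, j], n_basis) for j in range(p)]
    step = 1.0 / (np.linalg.norm(np.hstack(B), 2) ** 2 / n)  # 1 / Lipschitz
    coefs = [np.zeros(n_basis) for _ in range(p)]
    for _ in range(iters):
        resid = y - sum(Bj @ cj for Bj, cj in zip(B, coefs))
        coefs = [group_prox(cj + step * (Bj.T @ resid / n), step * lam)
                 for Bj, cj in zip(B, coefs)]
    return coefs
```

The structural semi-norms discussed in the talk add a smoothness term on top of this group penalty; the sketch keeps only the sparsity part.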

 

Title: Develop the Curing AI for Precision Medicine
Speaker: Hoifung Poon (MSR)
When: Tuesday, November 1, 2016 - 12:00
Location: CSE 305
Medicine today is imprecise. For the top 20 prescription drugs in the U.S., 80% of patients are non-responders. Recent disruptions in sensor technology have enabled precise categorization of diseases and treatment effects, with the $1000 human genome being a prime example. However, progress in precision medicine is difficult, as large-scale knowledge and reasoning become the ultimate bottlenecks in deciphering cancer and other complex diseases. Today, it takes hours for a molecular tumor board of many specialists to review one patient’s omics data and make treatment decisions. With 1.6 million new cancer cases and 600 thousand deaths in the U.S. each year, this is clearly not scalable. In this talk, I'll present Project Hanover and our latest efforts in advancing machine reading and predictive analytics for personalized cancer treatment and chronic disease management.

 

Title: Machine learning analysis of big, heterogeneous genomic data
Speaker: Bill Noble
When: Tuesday, October 18, 2016 - 12:00
Location: CSE 305
Over the past decade, the field of genomics has been driven by technological advances, as the throughput and diversity of genomic assays have increased dramatically. To help biologists make sense of the resulting big, heterogeneous data sets, we have developed an unsupervised learning strategy that uses a dynamic Bayesian network in which "time" corresponds to genomic position. I will describe our methodology as well as our recent efforts to improve the utility and interpretability of the resulting annotation. I will also discuss a tensor factorization method that we have successfully employed to impute missing genomics data, allowing us to accurately predict the outcomes of genomics experiments that have not yet been performed.
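
To sketch the imputation idea (a generic CP factorization fit by gradient descent on observed entries, not the speaker's exact method), genomics data can be arranged as a 3-way tensor, e.g. cell type × assay × genomic position, and missing experiments predicted from low-rank factors. Shapes and hyperparameters below are illustrative.

```python
import numpy as np

def cp_impute(T, mask, rank=8, lr=0.01, iters=2000, seed=0):
    """Fit a rank-r CP model T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]
    to the observed entries only (mask == 1), then fill in the rest."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = 0.1 * rng.standard_normal((I, rank))
    B = 0.1 * rng.standard_normal((J, rank))
    C = 0.1 * rng.standard_normal((K, rank))
    for _ in range(iters):
        pred = np.einsum('ir,jr,kr->ijk', A, B, C)
        err = mask * (pred - T)  # gradient flows only through observed cells
        A -= lr * np.einsum('ijk,jr,kr->ir', err, B, C)
        B -= lr * np.einsum('ijk,ir,kr->jr', err, A, C)
        C -= lr * np.einsum('ijk,ir,jr->kr', err, A, B)
    return np.einsum('ir,jr,kr->ijk', A, B, C)  # imputed tensor
```

An unobserved experiment corresponds to a slab of zeros in the mask; its predicted values come entirely from the learned factors.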

 

Title: Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods
Speaker: Hanie Sedghi (AI2)
When: Tuesday, October 4, 2016 - 12:00
Location: CSE 305
Neural networks have revolutionized performance across multiple domains such as computer vision and speech recognition. They provide a versatile tool for approximating a wide class of functions, but a theoretical understanding of them is mostly lacking. Training a neural network is a highly non-convex problem, and backpropagation can get stuck in spurious local optima. We have developed a computationally efficient method for training neural networks that also comes with guaranteed risk bounds. It is based on tensor decomposition, which is guaranteed to converge to the globally optimal solution under mild conditions. Moreover, we extend this result to the more challenging task of training recurrent neural networks.
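
The core computational primitive behind such guarantees is decomposing a symmetric moment tensor built from the data. Here is a minimal sketch of tensor power iteration, the standard algorithm for recovering one component (illustrative sizes, not the speaker's full training pipeline):

```python
import numpy as np

def tensor_power_iteration(T, iters=200, seed=0):
    """Recover one (eigenvalue, eigenvector) pair of a symmetric 3-way
    tensor via the power update v <- T(I, v, v) / ||T(I, v, v)||."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)
    return lam, v

# Toy check: a rank-1 symmetric tensor lam * (u (x) u (x) u) is recovered.
u = np.array([0.6, 0.8, 0.0])
T = 2.0 * np.einsum('i,j,k->ijk', u, u, u)
print(tensor_power_iteration(T))  # ~ (2.0, u)
```

Further components are obtained by deflation (subtracting the recovered rank-1 term and repeating), which is where the mild-conditions assumptions enter.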

 

Title: Semantic Parsing to Probabilistic Programs for Situated Question Answering
Speaker: Jayant Krishnamurthy (AI2)
When: Tuesday, May 31, 2016 - 12:00
Location: CSE 305
Existing models for situated question answering make strong independence assumptions that negatively impact their accuracy. These assumptions, while empirically false, are necessary to facilitate inference because the number of joint question/environment interpretations is extremely large, typically superexponential in the number of objects in the environment. We present Parsing to Probabilistic Programs (P3), a novel situated question answering model that embraces approximate inference to eliminate these independence assumptions and enable the use of arbitrary global features of the question/environment interpretation. Our key insight is to treat semantic parses as probabilistic programs that are executed nondeterministically, and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly released data set of 5,000 diagram questions from a science domain, finding that our approach outperforms several competitive baselines.
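
To illustrate "programs executed nondeterministically" (a toy enumeration scheme, not the P3 system; the example parse and environment are invented), one can write a parse as a function that draws from a choose operator, then enumerate its executions by replaying prefixes of choices:

```python
def enumerate_executions(program, prefix=()):
    """Enumerate all executions of `program`, which makes nondeterministic
    choices by calling its `choose(options)` argument. Executions are
    explored by replaying a fixed prefix of choices and branching at the
    first unfixed choice."""
    class Branch(Exception):
        def __init__(self, options):
            self.options = options

    counter = [0]
    def choose(options):
        i = counter[0]
        counter[0] += 1
        if i < len(prefix):
            return prefix[i]
        raise Branch(options)

    try:
        return [(prefix, program(choose))]
    except Branch as b:
        results = []
        for option in b.options:
            results.extend(enumerate_executions(program, prefix + (option,)))
        return results

# Toy "parse": which object does "the large shape" denote, given an
# environment where each object's size is uncertain?
def parse(choose):
    obj = choose(["circle", "square"])
    size = choose(["large", "small"])   # environmental uncertainty
    return (obj, size) if size == "large" else None

print(enumerate_executions(parse))  # all four executions with their choices
```

In a real system each execution would carry a score from global features, and approximate inference (e.g., beam search) would explore only the most promising prefixes instead of all of them.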

 

Title: Learning to Converse: An End-to-End Neural Approach
Speaker: Michel Galley (MSR)
When: Tuesday, May 24, 2016 - 12:00
Location: CSE 305
Until recently, the goal of training open-domain conversational systems that emulate human conversation has seemed elusive. However, the vast quantities of conversational exchanges now available from social media, instant messaging, and other online resources enable the building of data-driven models that can engage in natural and sustained conversations. In this talk, I will present an open-domain LSTM-based conversational model trained end-to-end on millions of conversations, without any implicit assumptions about dialog structure. I will focus on the technical challenges in applying neural models to conversational data, in particular (1) overcoming the overwhelming prevalence of bland and safe responses (e.g., "I don't know"), and (2) promoting responses that reflect a consistent persona. Finally, I will give an overview of our current efforts toward more grounded and goal-oriented conversations. If time permits, I will show a demo of our conversational system. This is joint work with Jiwei Li, Alessandro Sordoni, Chris Brockett, Jianfeng Gao, and Bill Dolan.
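
One published remedy for bland responses, from related work by the speaker and collaborators, is to rerank candidates by a maximum-mutual-information style objective that penalizes generically likely replies. A schematic sketch follows; the two scoring functions are hypothetical stand-ins for a trained seq2seq model and a language model, and the numbers are invented for illustration.

```python
def mmi_rerank(candidates, log_p_given_source, log_p_prior, lam=0.5):
    """Rerank candidate responses T by log p(T|S) - lam * log p(T):
    a good response must be likely given the source S, but not merely
    likely in general (e.g., "I don't know")."""
    scored = [(log_p_given_source(t) - lam * log_p_prior(t), t)
              for t in candidates]
    return [t for _, t in sorted(scored, reverse=True)]

# A bland reply is likely under both models; a contentful reply is less
# likely a priori but much better matched to the source.
cond = {"i don't know": -2.0, "it was filmed in seattle": -2.5}.__getitem__
prior = {"i don't know": -1.0, "it was filmed in seattle": -6.0}.__getitem__
print(mmi_rerank(["i don't know", "it was filmed in seattle"], cond, prior))
```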

 

Title: Supersizing Self-Supervision: ConvNets and Common sense without manual supervision
Speaker: Abhinav Gupta (CMU)
When: Tuesday, May 3, 2016 - 12:00
Location: CSE 305
In this talk, I will discuss how to learn visual representations and common sense knowledge without using any manual supervision. First, I will discuss how we can train ConvNets in a completely unsupervised manner using auxiliary tasks; specifically, I will demonstrate how spatial context in images and viewpoint changes in videos can be used to train visual representations. Then, I will introduce NEIL (Never Ending Image Learner), a computer program that runs 24x7 to automatically build visual detectors and common sense knowledge from web data. NEIL is an attempt to develop a large and rich visual knowledge base with minimal human labeling effort. Every day, NEIL scans through images of our mundane world, and little by little, it learns common sense relationships about our world. For example, with no input from humans, NEIL can tell you that trading floors are crowded and babies have eyes. In eight months, NEIL has analyzed more than 25 million images, labeled ~4M annotations (boxes and segments), learned models for 7,500 concepts, and discovered more than 20K common sense relationships. Finally, in an effort to diversify the knowledge base, I will briefly discuss how NEIL is being extended to a physical robot that learns knowledge about actions.
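
As a sketch of the spatial-context pretext task (a generic formulation with illustrative patch sizes, not the exact training setup from the talk): sample a patch and one of its eight neighbors from an unlabeled image, and train a network to classify their relative position; the labels come for free from the image geometry. A minimal data-generation routine in numpy:

```python
import numpy as np

# Offsets of the 8 neighbors (row, col), indexed by the free pretext label.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def context_pair(image, patch=32, gap=8, rng=np.random.default_rng()):
    """Sample (center patch, neighbor patch, relative-position label)
    from an unlabeled image; the label supervises representation
    learning with no manual annotation."""
    stride = patch + gap
    h, w = image.shape[:2]
    # Top-left corner of the center patch, keeping all 8 neighbors in bounds.
    r = rng.integers(stride, h - stride - patch + 1)
    c = rng.integers(stride, w - stride - patch + 1)
    label = rng.integers(len(OFFSETS))
    dr, dc = OFFSETS[label]
    center = image[r:r + patch, c:c + patch]
    neighbor = image[r + dr * stride:r + dr * stride + patch,
                     c + dc * stride:c + dc * stride + patch]
    return center, neighbor, int(label)

center, neighbor, label = context_pair(np.zeros((256, 256, 3)))
print(center.shape, neighbor.shape, label)  # (32, 32, 3) (32, 32, 3) 0..7
```

A two-tower ConvNet trained on this 8-way classification learns features that transfer to detection and recognition, which is the point of the pretext task.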

 

Title: Deep Robotic Learning
Speaker: Sergey Levine
When: Tuesday, April 19, 2016 - 12:00
Location: CSE 305
Humans and animals have a remarkable ability to autonomously acquire new behaviors. My work is concerned with designing algorithms that aim to bring this ability to robots and other autonomous systems that must make decisions in complex, unstructured environments. A central challenge that such algorithms must address is learning behaviors with representations that are sufficiently general and expressive to handle the wide range of motion skills needed for real-world applications. This requires processing complex, high-dimensional inputs and outputs, such as camera images and joint torques, and generalizing across a variety of physical platforms and behaviors. I will present some of my recent work on policy learning, demonstrating that complex, expressive policies represented by deep neural networks can be used to learn controllers for a wide range of robotic platforms, including dexterous hands, autonomous aerial vehicles, simulated bipedal walkers, and robotic arms. I will show how deep convolutional neural networks can be trained to directly learn policies that combine visual perception and control, acquiring the entire mapping from rich visual stimuli to motor torques on a PR2 robot. I will also present some recent work on scaling up deep robotic learning on a cluster of multiple robotic arms, and demonstrate results for learning grasping strategies that involve continuous feedback and hand-eye coordination.
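
As a generic illustration of policy learning (a vanilla policy-gradient sketch on a toy one-step problem, not the specific algorithms from the talk; all names and numbers are invented), a Gaussian policy's parameters are nudged in the direction that makes high-reward actions more likely:

```python
import numpy as np

def reinforce_toy(target=2.0, sigma=0.5, lr=0.05, iters=300, batch=64, seed=0):
    """Vanilla REINFORCE on a one-step task: the policy N(theta, sigma^2)
    outputs a "torque" a, the reward is -(a - target)^2, and the policy
    gradient E[reward * d log pi / d theta] pulls theta toward target."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for _ in range(iters):
        a = theta + sigma * rng.standard_normal(batch)   # sample actions
        r = -(a - target) ** 2                           # rewards
        r = r - r.mean()                                 # baseline cuts variance
        grad = np.mean(r * (a - theta) / sigma ** 2)     # score-function estimator
        theta += lr * grad
    return theta

print(reinforce_toy())  # ~2.0
```

The deep robotic policies in the talk replace the single parameter with a convolutional network mapping images to torques, but the underlying principle of adjusting the policy toward higher-reward behavior is the same.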

 
