Machine Learning Seminars

Title: Active Learning for Convex Regression
Speaker: Kevin Jamieson
When: Tuesday, February 6, 2018 - 12:00
Location: CSE 403
In this work, we will introduce the first principled adaptive-sampling procedure for learning a convex function in the L∞ norm, a problem that arises often in economics, psychology, and the social sciences. We present a function-specific measure of complexity and use it to prove that our algorithm is information-theoretically near-optimal in a strong, function-specific sense. We also corroborate our theoretical contributions with extensive numerical experiments, finding that our method substantially outperforms passive, uniform sampling for favorable synthetic and data-derived functions in low-noise settings with large sampling budgets. Our results also suggest an idealized “oracle strategy”, which we use to gauge the potential for deploying the adaptive-sampling strategy on any function in any particular setting.
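
For context, the passive baseline the abstract compares against can be written down directly: sample the function on a fixed uniform grid and fit the least-squares convex regressor. The sketch below is only that baseline (not the adaptive procedure from the talk) and assumes cvxpy is available.

```python
# Passive, uniform-sampling baseline for 1D convex regression (constrained least squares).
# Illustrative sketch of the baseline the abstract compares against, not the adaptive
# procedure from the talk. Assumes cvxpy is installed.
import numpy as np
import cvxpy as cp

def passive_convex_fit(f_true, n=50, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)                       # passive design: a fixed uniform grid
    y = f_true(x) + noise * rng.standard_normal(n)
    g = cp.Variable(n)                                 # fitted values at the design points
    # Convexity on a uniform grid <=> nonnegative second differences.
    cp.Problem(cp.Minimize(cp.sum_squares(g - y)), [cp.diff(g, 2) >= 0]).solve()
    return x, g.value

x, g_hat = passive_convex_fit(lambda t: (t - 0.3) ** 2)
print(np.max(np.abs(g_hat - (x - 0.3) ** 2)))          # sup-norm error on the grid (L-infinity proxy)
```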

 

Title: Subspace clustering via matching pursuits
Speaker: Michael Tschannen (ETH Zürich)
When: Tuesday, January 9, 2018 - 12:10
Location: CSE 403
Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations, and dimensions are all unknown. Among the large variety of subspace clustering algorithms, sparsity-based algorithms have attracted special attention thanks to their excellent performance in practical applications. A prominent example is the sparse subspace clustering (SSC) algorithm, which performs spectral clustering based on an adjacency matrix obtained by sparsely representing each data point in terms of all the other data points via the Lasso. When the number of data points is large or the dimension of the ambient space is high, the computational complexity of SSC quickly becomes prohibitive. Replacing the Lasso in SSC by the greedy orthogonal matching pursuit (OMP) or matching pursuit (MP) algorithm significantly reduces computational complexity while often maintaining clustering accuracy. In this talk, we present an analytical performance characterization of SSC-OMP and SSC-MP for noisy data. Both SSC-OMP and SSC-MP are shown to succeed even when the subspaces intersect and when the data points are contaminated by severe noise. Analytical results in combination with numerical results indicate that both SSC-OMP and SSC-MP with a data-dependent stopping criterion automatically detect the dimensions of the subspaces underlying the data. Finally, to further reduce the computational complexity of SSC-OMP and SSC-MP, we consider the scenario where the dimensionality of the data is reduced by random projections, and present corresponding performance guarantees.
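
As a rough reference for the SSC-OMP pipeline described above, here is a minimal sketch assuming scikit-learn: each point is sparsely coded in terms of the other points via OMP with a fixed sparsity level (a simplification of the data-dependent stopping criterion analyzed in the talk), and spectral clustering is applied to the resulting adjacency matrix.

```python
# Minimal SSC-OMP sketch: sparse self-representation via OMP, then spectral clustering.
# Illustrative only; assumes scikit-learn. The stopping rule here is a fixed sparsity level,
# not the data-dependent criterion analyzed in the talk.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.cluster import SpectralClustering

def ssc_omp(X, n_clusters, sparsity=5):
    """X: (n_points, ambient_dim) array of data points (rows)."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i                          # exclude the point itself
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
        omp.fit(X[mask].T, X[i])                          # represent x_i by the other points
        C[i, mask] = omp.coef_
    W = np.abs(C) + np.abs(C).T                           # symmetric adjacency matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```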

 

Title: Unified optimization for self-learning robust penalties
Speaker: Sasha Aravkin
When: Tuesday, November 28, 2017 - 12:15
Location: CSE 403
We consider extended optimization formulations that unify tasks classically done sequentially or hierarchically. In particular, we develop self-tuning robust penalties that require optimizing over both model parameters and shape parameters (which are typically computed with cross-validation or black-box optimization). If time permits, we will also show extended formulations that simultaneously learn a model and detect outliers in the input data.

Bio: Aleksandr Aravkin is a Washington Research Foundation Data Science Assistant Professor in Applied Mathematics at the University of Washington. He is also a Fellow at the UW eScience Institute and an adjunct professor in the Mathematics and Statistics departments. He received his M.S. in Statistics and PhD in Mathematics (Optimization) from the University of Washington in 2010. From 2010 to 2012, he was a postdoctoral fellow in Computer Science and Earth and Ocean Sciences at UBC, working on robust approaches for large-scale inverse problems. From 2012 to 2015, Dr. Aravkin was a Research Staff Member at the IBM T.J. Watson Research Center and an adjunct professor in the Computer Science and IEOR departments at Columbia University.
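
To make the idea of a self-tuning robust penalty from the abstract concrete, here is a generic sketch (not the talk's specific formulation): a Student-t error model in which the regression weights, the scale, and the degrees of freedom are all fit in a single optimization, rather than tuning the shape parameters by cross-validation. It assumes NumPy and SciPy.

```python
# Sketch of "self-tuning" robust regression: weights, scale, and shape parameter are fit
# jointly by maximum likelihood (Student-t errors used as a stand-in, not the talk's penalty).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def fit_self_tuning(X, y):
    d = X.shape[1]

    def neg_log_lik(theta):
        w, log_sigma, log_nu = theta[:d], theta[d], theta[d + 1]
        resid = y - X @ w
        # The log normalizing constant in logpdf is what keeps scale/shape from collapsing.
        return -student_t.logpdf(resid, df=np.exp(log_nu), scale=np.exp(log_sigma)).sum()

    theta0 = np.concatenate([np.zeros(d), [0.0, 1.0]])    # weights, log-scale, log-dof
    res = minimize(neg_log_lik, theta0, method="L-BFGS-B")
    return res.x[:d], np.exp(res.x[d]), np.exp(res.x[d + 1])
```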

 

Title: Density Tree and Density Ranking in Singular Measures
Speaker: Yen-Chi Chen
When: Tuesday, October 17, 2017 - 12:00
Location: CSE 403
A density tree (also known as a cluster tree of a probability density function) is a tool in topological data analysis that uses a tree structure to represent the shape of a density function. Even if the density function is multivariate, a density tree can always be displayed on a two-dimensional plane, making it an ideal tool for visualizing the shape of a multivariate dataset. However, in complex datasets such as GPS data, the underlying distribution function is singular, so the usual density function and density tree no longer exist. To analyze this type of data and generalize the density tree, we introduce the concept of density ranking and the ranking tree (also called an alpha-tree). We then show that one can consistently estimate the density ranking and the ranking tree using a kernel density estimator. Based on the density ranking, we introduce several geometric and topological summary curves for analyzing GPS datasets.

Bio: Yen-Chi Chen is an Assistant Professor in the Department of Statistics, a data science fellow in the eScience Institute, and a statistician in the National Alzheimer's Coordinating Center at the University of Washington. Yen-Chi obtained a PhD in Statistics from Carnegie Mellon University in June 2016, advised by Professors Larry Wasserman and Christopher Genovese. His current research is on nonparametric statistics, topological data analysis, cluster analysis, and applications in astronomy, industrial engineering, and medical research.
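
As a minimal illustration of the density-ranking idea from the abstract (the estimator and theory in the talk are more involved), assuming SciPy: evaluate a kernel density estimator at the sample points and rank-transform the values, which remains well defined even when the underlying measure is singular.

```python
# Minimal density-ranking sketch: rank-transform kernel density estimates at the sample points.
# Assumes SciPy; illustrative only.
import numpy as np
from scipy.stats import gaussian_kde, rankdata

def density_ranking(points):
    """points: (dim, n_samples) array, e.g. stacked longitude/latitude from GPS traces."""
    kde = gaussian_kde(points)                 # kernel density estimator
    values = kde(points)                       # estimated density at each sample point
    return rankdata(values) / points.shape[1]  # normalized ranks in (0, 1]; 1 = densest point
```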

 

Title: Causal Learning
Speaker: Bernhard Schölkopf
When: Wednesday, August 16, 2017 - 15:00
Location: CSE 305
In machine learning, we use data to automatically find dependences in the world, with the goal of predicting future observations. Most machine learning methods build on statistics, but one can also try to go beyond this, assaying causal structures underlying statistical dependences. Can such causal knowledge help prediction in machine learning tasks? We argue that this is indeed the case, because causal models are more robust to the changes that occur in real-world datasets. We discuss implications of causality for machine learning tasks, and argue that many of the hard issues benefit from the causal viewpoint. This includes domain adaptation, semi-supervised learning, transfer and life-long learning, and fairness, as well as an application to the removal of systematic errors in astronomical problems.

Semantic Scholar page: Bernhard Schölkopf

 

Title: Revisiting the PC-Algorithm: How Much Conditioning Is Needed?
Speaker: Ali Shojaie
When: Tuesday, May 9, 2017 - 12:00
Location: CSE 305
The PC-Algorithm, named after its inventors, Peter Spirtes and Clark Glymour, is the gold standard for learning directed acyclic graphs (DAGs) from observational data. The algorithm was recently popularized by Kalisch and Bühlmann for learning sparse high-dimensional DAGs, containing many nodes, from a smaller number of observations. However, the computational and sample complexity of the PC-Algorithm scale with the maximum degree of the network. The maximum degree grows with the number of nodes in many random graph families, including the power-law graphs commonly observed in real-world networks. The algorithm is thus inappropriate for learning such networks. Moreover, the PC-Algorithm requires a stringent faithfulness assumption, which has been shown not to hold in high dimensions.

In this work, we exploit properties of large random graphs to show that DAGs can be learned by conditioning only on sets of small cardinality. In other words, for many graph families, we justify early stopping of the PC-Algorithm's search. While still guaranteed to learn the skeleton of high-dimensional DAGs, this simple modification also turns out to greatly improve the computational and sample complexity of the PC-Algorithm. The new algorithm also requires a weaker faithfulness assumption and results in improved empirical performance on DAGs with hub nodes.
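
A sketch of what this early stopping looks like in code: the skeleton phase of the PC-Algorithm with conditioning sets capped at a small cardinality, using Gaussian partial-correlation (Fisher z) tests. This is illustrative only and is not the authors' implementation.

```python
# Skeleton phase of the PC-Algorithm with conditioning sets capped at max_cond_size,
# using Gaussian partial-correlation (Fisher z) tests. Illustrative sketch only.
import numpy as np
from itertools import combinations
from scipy.stats import norm

def fisher_z_independent(corr, n, i, j, S, alpha=0.01):
    """Test whether X_i and X_j are independent given X_S (Gaussian model)."""
    idx = [i, j] + list(S)
    prec = np.linalg.pinv(corr[np.ix_(idx, idx)])
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]), -0.9999, 0.9999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
    return 2 * (1 - norm.cdf(abs(z))) > alpha          # large p-value => treat as independent

def pc_skeleton(X, max_cond_size=2, alpha=0.01):
    """X: (n_samples, n_variables) data matrix; returns the estimated skeleton as adjacency sets."""
    n, p = X.shape
    corr = np.corrcoef(X, rowvar=False)
    adj = {i: set(range(p)) - {i} for i in range(p)}
    for size in range(max_cond_size + 1):              # "early stopping": never condition on more than max_cond_size variables
        for i in range(p):
            for j in list(adj[i]):
                if j < i:
                    continue
                for S in combinations(sorted(adj[i] - {j}), size):
                    if fisher_z_independent(corr, n, i, j, S, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
    return adj
```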

Semantic Scholar page: Ali Shojaie

 

Title: Continuous State Machines and Grammars for Linguistic Structure Prediction
Speaker: Noah Smith
When: Tuesday, April 25, 2017 - 12:00
Location: CSE 305
Linguistic structure prediction infers abstract representations of text, like syntax trees and semantic graphs, enabling interpretation in applications like question answering, information extraction, and opinion analysis. This talk is about the latest family of methods for linguistic structure prediction, which make heavy use of representation learning via neural networks. I'll present these new methods as continuous generalizations of state machines and probabilistic grammars. I'll show how they've led to fast and accurate performance on several syntactic and semantic parsing problems.

Semantic Scholar page: Noah Smith

 

Title: Sequence Modeling: From Spectral Methods and Bayesian Nonparametrics to Deep Learning
Speaker: Alex Smola
When: Tuesday, April 11, 2017 - 12:00
Location: MOR 220
In this talk, Alex Smola will summarize a few recent developments in the design and analysis of sequence models. Starting with simple parametric models such as HMMs for sequences, we look at nonparametric extensions in terms of their ability to model more fine-grained types of state and transition behavior. In particular, we consider spectral embeddings and nonparametric Bayesian models such as the nested Chinese Restaurant Franchise and the Dirichlet-Hawkes Process. We conclude with a discussion of deep sequence models for user return time modeling, time-dependent collaborative filtering, and large-vocabulary user profiling.
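
For reference, the "simple parametric" starting point mentioned above can be summarized in a few lines: the scaled forward algorithm that computes the log-likelihood of an observation sequence under an HMM. This is a generic textbook sketch, not code from the talk.

```python
# Generic scaled forward algorithm for a discrete-emission HMM (textbook sketch).
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """obs: sequence of symbol indices; pi: (K,) initial distribution;
    A: (K, K) transition matrix; B: (K, V) emission matrix."""
    alpha = pi * B[:, obs[0]]            # forward message at t = 0
    scale = alpha.sum()
    log_lik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate one step, then weight by the emission
        scale = alpha.sum()              # rescaling avoids numerical underflow
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik
```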

 

Title: Streaming Optimization Methods: Stochastic Gradient Descent, Momentum, and Newton's Method
Speaker: Sham Kakade
When: Tuesday, February 21, 2017 - 12:00
Location: CSE 305
Theory and practice differ widely even for our most basic optimization algorithms, such as stochastic gradient descent. For example, for the practically relevant problem of setting parameters (such as learning rates), principled methods are almost never used in practice. Can we get a better handle on the true behavior of this widely used algorithm (as opposed to its worst-case analysis)? Can we use these insights to provide better algorithms?

This talk by Sham Kakade will survey recent work on these questions. In particular, we seek to understand how fast we can parallelize these algorithms using simple averaging and mini-batching procedures. The talk will also discuss ongoing work on streaming methods for momentum (i.e., acceleration), for implementing second-order methods, and for non-convex optimization.
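
A minimal sketch of the kind of simple averaging and mini-batching procedure mentioned above, for least squares: mini-batch SGD that returns the Polyak-Ruppert average of its iterates rather than the last iterate. Illustrative only; this is not the specific streaming algorithms analyzed in this line of work.

```python
# Mini-batch SGD with Polyak-Ruppert iterate averaging for least squares (illustrative sketch).
import numpy as np

def sgd_with_averaging(X, y, lr=0.01, batch_size=32, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_bar = np.zeros(d)                                  # running average of the iterates
    for t in range(1, n_steps + 1):
        idx = rng.integers(0, n, size=batch_size)        # sample a mini-batch
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w -= lr * grad
        w_bar += (w - w_bar) / t                         # online average; returned instead of w
    return w_bar
```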

 

Title: Physics-based Visual Reasoning
Speaker: Roozbeh Mottaghi (AI2)
When: Tuesday, February 7, 2017 - 12:00
Location: CSE 305
Despite recent progress, AI is still far from understanding the physics of the world, and there is a large gap between the abilities of humans and those of state-of-the-art methods. In this talk, I will focus on physics-based scene understanding and visual reasoning, which is a crucial next step in computer vision and AI.

Bio: Roozbeh Mottaghi is a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). Prior to joining AI2, he was a postdoctoral researcher in the Computer Science Department at Stanford University. He obtained his PhD in Computer Science in 2013 from the University of California, Los Angeles. His research is mainly focused on computer vision and machine learning.