Tuesday, April 3, 2018 - 12:00

Active-set complexity of proximal-gradient

Speaker: Mark Schmidt (UBC)
Location: CSE 403
Proximal gradient methods have been found to be highly effective for solving minimization problems with non-negative constraints or L1-regularization. Under suitable non-degeneracy conditions, it is known that these algorithms identify the optimal sparsity pattern for these types of problems in a finite number of iterations. However, it is not known how many iterations this may take. We introduce the notion of the "active-set complexity", which in these cases is the number of iterations before an algorithm is guaranteed to have identified the final sparsity pattern. We further give a bound on the active-set complexity of proximal gradient methods in the common case of minimizing the sum of a strongly-convex smooth function and a separable convex non-smooth function.
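As a concrete instance of the composite problems in the abstract, here is a minimal sketch (my own names and setup, not the speaker's code) of proximal gradient for L1-regularized least squares, tracking the iteration at which the sparsity pattern last changed — the quantity the talk's "active-set complexity" bounds:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_lasso(A, b, lam, step, iters=500):
    """Proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1.
    Returns the solution and the last iteration at which the
    sparsity pattern changed. step should be at most 1/||A||_2^2."""
    x = np.zeros(A.shape[1])
    last_change, support = 0, x != 0
    for k in range(1, iters + 1):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
        if not np.array_equal(x != 0, support):
            support, last_change = x != 0, k
    return x, last_change

A, b = np.random.randn(50, 20), np.random.randn(50)
x_hat, k_active = prox_grad_lasso(A, b, lam=0.1,
                                  step=1.0 / np.linalg.norm(A, 2) ** 2)
```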

Tuesday, February 20, 2018 - 12:00

Adaptive Averaging in Accelerated Descent Dynamics

Speaker: Walid Krichene (Google)
Location: CSE 403
We study accelerated descent dynamics for constrained convex optimization. These dynamics can be described naturally as a coupling of a dual variable accumulating gradients at a given rate η(t), and a primal variable obtained as the weighted average of the mirrored dual trajectory, with weights w(t). Using a Lyapunov argument, we give sufficient conditions on η(t) and w(t) to achieve a desired convergence rate. As an example, we show that the replicator dynamics (an instance of mirror descent on the simplex) can be accelerated using a simple averaging scheme. We then propose an adaptive averaging heuristic that adaptively computes the weights to speed up the decrease of the Lyapunov function. We provide guarantees on adaptive averaging in continuous time, and give numerical experiments in discrete time comparing it with existing heuristics such as adaptive restarting. The experiments indicate that adaptive averaging performs at least as well as adaptive restarting, with significant improvements in some cases.
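A minimal Euclidean discretization of this primal-dual averaging template might look as follows (a sketch under my own naming; the mirror map is taken to be the identity, and η_k = w_k = k gives a Nesterov-style weighting):

```python
import numpy as np

def averaged_accelerated_descent(grad, x0, step, iters=200):
    """Sketch of the primal-dual averaging dynamics: the dual variable
    z accumulates gradients at rate eta_k, and the primal iterate x is
    the w_k-weighted average of the (mirrored) dual trajectory. step
    should be on the order of 1/L for an L-smooth objective."""
    z, x, w_sum = x0.copy(), x0.copy(), 0.0
    for k in range(1, iters + 1):
        eta_k = w_k = float(k)
        z -= step * eta_k * grad(x)      # dual: accumulate gradients
        w_sum += w_k
        x += (w_k / w_sum) * (z - x)     # primal: running weighted average
    return x
```

An adaptive-averaging variant would choose w_k at run time (e.g., to force the Lyapunov function to decrease) instead of fixing w_k = k in advance.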

Tuesday, November 28, 2017 - 12:15

Unified optimization for self-learning robust penalties

Speaker: Sasha Aravkin
Location: CSE 403
We consider extended optimization formulations that unify tasks classically done sequentially or hierarchically. In particular, we develop self-tuning robust penalties that require optimizing over both model parameters and shape parameters (which are typically computed with cross-validation or black-box optimization). If time permits, we will also show extended formulations that simultaneously learn a model and detect outliers in the input data.

Bio: Aleksandr Aravkin is a Washington Research Foundation Data Science Assistant Professor in Applied Mathematics at the University of Washington. He is also a Fellow at the UW eScience Institute and an adjunct professor in the Mathematics and Statistics departments. He received his M.S. in Statistics and PhD in Mathematics (Optimization) from the University of Washington in 2010. From 2010 to 2012, he was a postdoctoral fellow in Computer Science and Earth and Ocean Sciences at UBC, working on robust approaches for large-scale inverse problems. From 2012 to 2015, Dr. Aravkin was a Research Staff Member at the IBM T.J. Watson Research Center and an adjunct professor in the Computer Science and IEOR departments at Columbia University.
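As a toy illustration of this kind of joint optimization (my own construction, not the talk's formulation), one can fit regression coefficients and the scale of a Student-t penalty simultaneously, rather than tuning the scale by cross-validation:

```python
import numpy as np
from scipy.optimize import minimize

def t_nll(params, A, b, nu=2.0):
    """Student-t negative log-likelihood of the residuals b - A@theta,
    jointly in the coefficients theta and the log of the scale. The
    scale plays the role of a self-tuned shape parameter; nu is held
    fixed here for simplicity but could be optimized as well."""
    theta, log_s = params[:-1], params[-1]
    r = (b - A @ theta) / np.exp(log_s)
    return np.sum(0.5 * (nu + 1) * np.log1p(r ** 2 / nu) + log_s)

A = np.random.randn(100, 3)
b = A @ np.array([1.0, -2.0, 0.5]) + np.random.standard_t(2.0, size=100)
res = minimize(t_nll, np.zeros(4), args=(A, b))
theta_hat, scale_hat = res.x[:-1], np.exp(res.x[-1])
```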

Tuesday, October 17, 2017 - 12:00

Density Tree and Density Ranking in Singular Measures

Speaker: Yen-Chi Chen
Location: CSE 403
A density tree (also known as a cluster tree of a probability density function) is a tool in topological data analysis that uses a tree structure to represent the shape of a density function. Even if the density function is multivariate, a density tree can always be displayed on a two-dimensional plane, making it an ideal tool for visualizing the shape of a multivariate dataset. However, in complex datasets such as GPS data, the underlying distribution function is singular, so the usual density function and density tree no longer exist. To analyze this type of data and generalize the density tree, we introduce the concept of density ranking and the ranking tree (also called an alpha-tree). We then show that one can consistently estimate the density ranking and the ranking tree using a kernel density estimator. Based on the density ranking, we introduce several geometric and topological summary curves for analyzing GPS datasets.

Bio: Yen-Chi Chen is an Assistant Professor in the Department of Statistics, a data science fellow in the eScience Institute, and a statistician in the National Alzheimer's Coordinating Center at the University of Washington. He obtained a PhD in Statistics from Carnegie Mellon University in June 2016, advised by Professors Larry Wasserman and Christopher Genovese. His current research is on nonparametric statistics, topological data analysis, cluster analysis, and applications in astronomy, industrial engineering, and medical research.
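A minimal sketch of the density-ranking estimator described above (hypothetical helper names; scipy's gaussian_kde stands in for a generic kernel density estimator): the ranking of a point is the fraction of the sample whose estimated density is lower, and unlike the raw KDE values it remains meaningful when the underlying measure is singular.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_ranking(X):
    """Estimated density ranking of each row of X: the (normalized)
    rank of its kernel density estimate within the sample."""
    kde = gaussian_kde(X.T)           # scipy expects shape (dims, n)
    p = kde(X.T)                      # estimated density at each point
    ranks = p.argsort().argsort()     # rank of each density value
    return ranks / (len(p) - 1.0)     # normalized to [0, 1]

X = np.random.randn(500, 2)           # toy 2-D sample
r = density_ranking(X)
```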

Wednesday, August 16, 2017 - 15:00

Causal Learning

Speaker: Bernhard Schölkopf
Location: CSE 305
In machine learning, we use data to automatically find dependences in the world, with the goal of predicting future observations. Most machine learning methods build on statistics, but one can also try to go beyond this, assaying causal structures underlying statistical dependences. Can such causal knowledge help prediction in machine learning tasks? We argue that it can, because causal models are more robust to the changes that occur in real-world datasets. We discuss implications of causality for machine learning tasks, and argue that many of the hard issues benefit from the causal viewpoint. This includes domain adaptation, semi-supervised learning, transfer and life-long learning, and fairness, as well as an application to the removal of systematic errors in astronomical problems.

Semantic Scholar page: Bernhard Schölkopf

Tuesday, May 9, 2017 - 12:00

Revisiting the PC-Algorithm: How Much Conditioning Is Needed?

Speaker: Ali Shojaie
Location: CSE 305
The PC-Algorithm, named after its inventors, Peter Spirtes and Clark Glymour, is the gold standard for learning directed acyclic graphs (DAGs) from observational data. The algorithm was recently popularized by Kalisch and Bühlmann for learning sparse high-dimensional DAGs, containing many nodes, from a smaller number of observations. However, the computational and sample complexity of the PC-Algorithm scale with the maximum degree of the network, and the maximum degree grows with the number of nodes in many random graph families, including the power-law graphs commonly observed in real-world networks. The algorithm is thus ill-suited to learning such networks. Moreover, the PC-Algorithm requires a stringent faithfulness assumption, which has been shown not to hold in high dimensions.

In this work, we exploit properties of large random graphs to show that DAGs can be learned by conditioning only on sets of small cardinality. In other words, for many graph families, we justify early stopping of the PC-Algorithm's search. The resulting algorithm is still guaranteed to learn the skeleton of high-dimensional DAGs, and this simple modification turns out to greatly improve the computational and sample complexity of the PC-Algorithm. It also requires a weaker faithfulness assumption and yields improved empirical performance on DAGs with hub nodes.
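To make the modification concrete, here is a simplified sketch (my own code, not the authors'): a PC-style skeleton search using Fisher-z partial-correlation tests, with conditioning sets truncated at a small cardinality max_cond. For brevity it conditions on subsets of all other variables, rather than only on current neighbors as the full PC-Algorithm does.

```python
import numpy as np
from itertools import combinations

def partial_corr_indep(C, i, j, S, n):
    """Fisher-z test of the partial correlation rho(i, j | S), computed
    from the correlation matrix C of n samples via the precision matrix
    of the relevant submatrix. 2.58 ~ two-sided 1% critical value."""
    idx = [i, j] + list(S)
    P = np.linalg.inv(C[np.ix_(idx, idx)])
    rho = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])
    z = 0.5 * np.log((1 + rho) / (1 - rho)) * np.sqrt(n - len(S) - 3)
    return abs(z) < 2.58

def pc_skeleton_truncated(data, max_cond=2):
    """Skeleton search with conditioning sets capped at max_cond --
    the early stopping the talk justifies for large random graphs."""
    n, d = data.shape
    C = np.corrcoef(data, rowvar=False)
    adj = {(i, j) for i in range(d) for j in range(i + 1, d)}
    for level in range(max_cond + 1):
        for (i, j) in list(adj):
            others = [k for k in range(d) if k not in (i, j)]
            for S in combinations(others, level):
                if partial_corr_indep(C, i, j, S, n):
                    adj.discard((i, j))   # edge removed: i,j independent
                    break
    return adj
```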

Semantic Scholar page: Ali Shojaie

Tuesday, April 25, 2017 - 12:00

Continuous State Machines and Grammars for Linguistic Structure Prediction

Speaker: Noah Smith
Location: CSE 305
Linguistic structure prediction infers abstract representations of text, like syntax trees and semantic graphs, enabling interpretation in applications like question answering, information extraction, and opinion analysis. This talk is about the latest family of methods for linguistic structure prediction, which make heavy use of representation learning via neural networks. I'll present these new methods as continuous generalizations of state machines and probabilistic grammars. I'll show how they've led to fast and accurate performance on several syntactic and semantic parsing problems.
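As a cartoon of the "continuous state machine" idea (my own illustration, not the speaker's models): a discrete automaton maps a (state, symbol) pair to a new state via a transition table, and the continuous generalization replaces the table with a smooth map on vector-valued states, as in a simple Elman-style recurrent cell.

```python
import numpy as np

def rnn_step(h, x, W, U, b):
    # One continuous "transition": new state from old state h and
    # input embedding x, replacing a discrete transition-table lookup.
    return np.tanh(W @ h + U @ x + b)

d_state, d_in = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_state, d_state))
U = rng.normal(size=(d_state, d_in))
b, h = np.zeros(d_state), np.zeros(d_state)
for x in rng.normal(size=(5, d_in)):   # run the machine on a sequence
    h = rnn_step(h, x, W, U, b)
```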

Semantic Scholar page: Noah Smith

Tuesday, April 11, 2017 - 12:00

Sequence Modeling: From Spectral Methods and Bayesian Nonparametrics to Deep Learning

Speaker: Alex Smola
Location: MOR 220
In this talk, Alex Smola will summarize a few recent developments in the design and analysis of sequence models. Starting with simple parametric models for sequences, such as HMMs, we look at nonparametric extensions in terms of their ability to model more fine-grained types of state and transition behavior. In particular, we consider spectral embeddings and nonparametric Bayesian models such as the nested Chinese Restaurant Franchise and the Dirichlet-Hawkes process. We conclude with a discussion of deep sequence models for user return-time modeling, time-dependent collaborative filtering, and large-vocabulary user profiling.
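For reference, the parametric starting point: a standard scaled forward recursion computing the log-likelihood of an observation sequence under an HMM (a textbook sketch, with names of my choosing; not code from the talk).

```python
import numpy as np

def hmm_forward(pi, T, E, obs):
    """Log-likelihood of obs under an HMM with initial distribution pi,
    transition matrix T (rows sum to 1), and emission matrix
    E[state, symbol]. Rescales alpha each step for numerical stability."""
    alpha = pi * E[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik
```

The nonparametric models in the talk can be viewed as relaxing the fixed, finite state space that pi, T, and E encode here.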

Tuesday, February 21, 2017 - 12:00

Streaming Optimization Methods: Stochastic Gradient Descent, Momentum, and Newton's Method

Speaker: Sham Kakade
Location: CSE 305
Theory and practice differ widely for even our most basic optimization algorithms, such as stochastic gradient descent. For example, for the practically relevant problem of setting parameters (such as learning rates), principled methods are almost never used in practice. Can we get a better handle on the true behavior of this widely used algorithm (as opposed to its worst-case analysis)? Can we use these insights to design better algorithms?

This talk by Sham Kakade will survey recent work on these questions. In particular, we seek to understand how much these algorithms can be sped up through parallelization with simple averaging and mini-batching procedures. The talk will also discuss ongoing work on streaming methods for momentum (i.e., acceleration), for implementing second-order methods, and for non-convex optimization.
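A minimal sketch of the two averaging schemes mentioned (hypothetical names, not the speaker's code): mini-batch gradient averaging within SGD, and one-shot parameter averaging across workers that each run SGD on a shard of the data.

```python
import numpy as np

def sgd(grad_sample, x0, data, step, batch=1, rng=None):
    """Mini-batch SGD: each update averages the per-sample gradients
    of a randomly drawn batch -- the basic parallelizable primitive."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    for _ in range(len(data) // batch):
        idx = rng.choice(len(data), size=batch, replace=False)
        g = np.mean([grad_sample(x, data[i]) for i in idx], axis=0)
        x -= step * g
    return x

def parallel_sgd_average(grad_sample, x0, shards, step):
    """One-shot parameter averaging: run SGD independently on each
    shard, then average the resulting iterates."""
    return np.mean([sgd(grad_sample, x0, s, step) for s in shards], axis=0)
```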

Tuesday, February 7, 2017 - 12:00

Physics-based Visual Reasoning

Speaker: Roozbeh Mottaghi (AI2)
Location: CSE 305
Despite recent progress, AI is still far from understanding the physics of the world, and there is a large gap between the abilities of humans and the state-of-the-art methods. In this talk, I will focus on physics-based scene understanding and visual reasoning, which is a crucial next step in computer vision and AI.

Bio: Roozbeh Mottaghi is a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). Prior to joining AI2, he was a postdoctoral researcher in the Computer Science Department at Stanford University. He obtained his PhD in Computer Science in 2013 from the University of California, Los Angeles. His research is mainly focused on computer vision and machine learning.
