Tuesday, February 28, 2012 - 12:30
Causality
Speaker: Thomas Richardson, University of Washington
Location: EEB 037
As these talks are intended as overviews, time permitting, I plan to give two mini-tutorials. The first will cover the idea of counterfactuals/potential outcomes, answering the question: how does data obtained from a simple randomized experiment differ from data obtained from an observational study, and how does that difference weaken the inferences that can be drawn? The second will assume a little background on Bayesian networks and will answer the question: if we obtain data on a subset of the variables in a causal Bayesian network, which causal effects are identified, and how can they be computed efficiently?
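For a concrete illustration of the first question, the following simulation sketch (not from the talk; the variable names, effect sizes, and confounding structure are all illustrative assumptions) contrasts a randomized experiment with a confounded observational study: the naive difference in means recovers the true effect only in the randomized case.

```python
# Minimal simulation sketch: why randomization supports causal inference
# where an observational study may not. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0

u = rng.normal(size=n)                       # unobserved confounder

# Randomized experiment: treatment assigned independently of u.
t_rand = rng.integers(0, 2, size=n)
y_rand = true_effect * t_rand + 3.0 * u + rng.normal(size=n)

# Observational study: treatment probability depends on u.
p = 1 / (1 + np.exp(-2.0 * u))
t_obs = rng.binomial(1, p)
y_obs = true_effect * t_obs + 3.0 * u + rng.normal(size=n)

def diff_in_means(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

print("randomized estimate:   ", diff_in_means(y_rand, t_rand))  # ~2.0
print("observational estimate:", diff_in_means(y_obs, t_obs))    # biased upward
```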
Tuesday, February 21, 2012 - 12:30
Counting and Sampling Solutions of Combinatorial Problems
Speaker: Ashish Sabharwal, IBM T.J. Watson
Location: EEB 037
This tutorial surveys algorithms for counting and sampling solutions of SAT problems.
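For a concrete starting point, the brute-force sketch below (not from the tutorial; the tiny CNF formula is made up) does exact model counting and uniform solution sampling by enumeration. Practical counters and samplers, such as hashing-based approximate methods, scale far beyond this.

```python
# Brute-force #SAT and uniform solution sampling for a tiny CNF formula.
import itertools
import random

# Clauses as lists of signed ints, DIMACS-style:
# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
n_vars = 3

def satisfies(assignment, clauses):
    # assignment[i] is the truth value of variable i+1
    return all(
        any(assignment[abs(l) - 1] == (l > 0) for l in clause)
        for clause in clauses
    )

models = [a for a in itertools.product([False, True], repeat=n_vars)
          if satisfies(a, clauses)]
print("model count:", len(models))
print("uniform sample:", random.choice(models))
```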
Tuesday, February 14, 2012 - 12:30
Knowledge Extraction for Biomedical Text
Speaker: Hoifung Poon, Microsoft Research
Location:
This tutorial surveys the literature on knowledge extraction from scientific text, with a focus on the biomedical domain.
Tuesday, February 7, 2012 - 12:30
Machine Learning for Information Retrieval
Speaker: Niranjan Balasubramanian, University of Washington
Location: EEB 037
This tutorial discusses machine learning techniques popular in the information retrieval community, such as learning to rank and PageRank.
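As one example of the techniques covered, here is a minimal PageRank sketch via power iteration (a generic illustration, not the speaker's code; the four-page toy graph and the damping factor 0.85 are assumptions).

```python
# PageRank by power iteration on a made-up four-page link graph.
import numpy as np

# links[i] = pages that page i links to
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85

rank = np.full(n, 1.0 / n)
for _ in range(100):
    new = np.full(n, (1 - d) / n)      # teleportation mass
    for i, outs in links.items():
        for j in outs:
            new[j] += d * rank[i] / len(outs)
    rank = new
print(rank)  # page 2, with the most in-links, ranks highest
```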
Tuesday, January 31, 2012 - 12:30
Trajectory Optimization with Differential Dynamic Programming
Speaker: Tom Erez, University of Washington
Location: EEB 037
This tutorial gives an introduction to control theory, focusing on trajectory optimization techniques such as differential dynamic programming.
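For a taste of the machinery, the sketch below specializes the backward pass at the core of DDP/iLQR to a linear system with quadratic cost, where it reduces exactly to the finite-horizon LQR Riccati recursion (a minimal sketch; the double-integrator dynamics, cost matrices, and horizon are illustrative, not from the talk).

```python
# Finite-horizon LQR: the linear-quadratic special case of the
# DDP/iLQR backward pass, followed by a forward rollout.
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # double-integrator dynamics
B = np.array([[0.0], [dt]])
Q = np.eye(2)                            # state cost
R = 0.1 * np.eye(1)                      # control cost
T = 50

# Backward Riccati recursion: value function V_t(x) = x' P_t x.
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                          # order gains forward in time

# Forward rollout from an initial state with u_t = -K_t x_t.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x + B @ (-K @ x)
print("final state:", x.ravel())         # driven toward the origin
```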
Tuesday, January 24, 2012 - 12:30
Latent Variable Models of Lexical Semantics
Speaker: Alan Ritter, University of Washington
Location: EEB 037
This tutorial discusses probabilistic models popular in the NLP lexical semantics community.
Tuesday, January 17, 2012 - 12:30
Latent Factor Models for Relational and Network Data
Speaker: Peter Hoff, University of Washington
Location: EEB 037
This tutorial discusses probabilistic models for social and other network data.
Tuesday, January 10, 2012 - 12:30
Submodular Functions and Active Learning
Speaker: Andrew Guillory, University of Washington
Location: EEB 037
This tutorial presents a brief survey of active learning, submodular functions, and the interesting algorithms and analyses at their intersection. Minimal background knowledge is assumed, and emphasis is placed on open problems and gaps between theory and practice. Slides at: http://ml.cs.washington.edu/www/media/presentations/submodularity_tutori...
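As a concrete example from the submodularity side, greedy maximization of a monotone submodular function, sketched below for max coverage, attains the classic (1 - 1/e) approximation guarantee; the ground sets and budget are made up for illustration.

```python
# Greedy maximization of a monotone submodular coverage function.
sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6, 7},
    "d": {1, 5},
}
budget = 2

chosen, covered = [], set()
for _ in range(budget):
    # Pick the set with the largest marginal coverage gain.
    best = max(sets, key=lambda s: len(sets[s] - covered))
    chosen.append(best)
    covered |= sets[best]
print(chosen, covered)   # ['c', 'a'] covering {1, ..., 7}
```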
Tuesday, November 29, 2011 - 12:30
Machine Learning and Big Data Analysis
Speaker: Alice Zheng
Location: CSE 305
Our world is becoming more data-driven. With the spread of ubiquitous sensors, network connectivity, and massive storage capabilities, we are able to collect more and more data. But our computation and analysis capabilities have not increased at a comparable rate. Computer scientists are facing looming questions such as "How do we deal with the massive amounts of data we are collecting?" and "How can we extract value from that data?" A sub-question relevant to machine learning researchers is "What role will machine learning and data mining play?" Through a survey of current sources of Big Data and analysis workflow patterns, this talk aims to shed light on the latter question.
Tuesday, November 22, 2011 - 12:30
Entire Relaxation Path for Maximum Entropy Models
Speaker: Yoram Singer
Location: CSE 305
We describe a relaxed and generalized notion of maximum entropy problems for multinomial distributions. By introducing a simple re-parametrization we are able to derive an efficient homotopy tracking scheme for the entire relaxation path using linear space and quadratic time. We also show that the Legendre dual of the relaxed maximum entropy problem is the task of finding the maximum likelihood estimator for an exponential distribution with L1 regularization. Hence, our solution can be used for problems such as language modeling with sparse parameter representation.
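For reference, a standard way to write a relaxed maximum entropy problem and its dual, sketched below, comes from the maxent literature; the notation (feature map f, empirical distribution \hat{q}, relaxation parameter \nu) is assumed here and may differ from the talk's exact formulation.

```latex
% Relaxed maximum entropy over distributions p in the simplex \Delta:
\max_{p \in \Delta} \; H(p)
\quad \text{s.t.} \quad
\bigl\| \mathbb{E}_p[f] - \mathbb{E}_{\hat{q}}[f] \bigr\|_\infty \le \nu
% Its Lagrange dual is l1-regularized maximum likelihood for the
% exponential family p_\theta(x) \propto \exp(\theta^\top f(x)):
\min_{\theta} \; -\frac{1}{n} \sum_{i=1}^{n} \log p_\theta(x_i)
\;+\; \nu \, \|\theta\|_1
```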