Machine Learning Seminars

Title: Learning Semantic Parsers for More Languages and with Less Supervision
Speaker: Luke Zettlemoyer, University of Washington
When: Tuesday, November 15, 2011 - 12:30
Location: CSE 305

Recent work has demonstrated effective learning algorithms for a variety of semantic parsing problems, where the goal is to automatically recover the underlying meaning of input sentences. Although these algorithms can work well, there is still a large cost in annotating data and gathering other language-specific resources for each new application. In this talk, I will describe efforts to address these challenges by developing scalable, probabilistic CCG grammar induction algorithms. I will present recent work on methods that incorporate new notions of lexical generalization, thereby enabling effective learning for a variety of different natural languages and formal meaning representations. I will also describe a new approach for learning semantic parsers from conversational data, which does not require any manual annotation of sentence meaning. Finally, I will sketch future directions, including our recurring focus on building scalable learning techniques while attempting to minimize the application-specific engineering effort. Joint work with Yoav Artzi, Tom Kwiatkowski, Sharon Goldwater, and Mark Steedman.
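
As a concrete point of reference for what a semantic parser computes, here is a deliberately tiny sketch of the lexicon-driven view that CCG induction builds on: each word is paired with a logical-form fragment, and fragments compose by function application. The lexicon and composition scheme below are hypothetical toys, not the talk's system.

```python
# Toy illustration of lexicon-driven semantic parsing (not the talk's
# CCG induction system): each word is paired with a logical-form
# fragment, and fragments compose by function application.

LEXICON = {
    "two": 2,
    "three": 3,
    "plus": lambda x: lambda y: x + y,   # a function-typed entry
}

def parse(tokens):
    """Compose lexical meanings left to right by function application."""
    meaning = LEXICON[tokens[0]]
    for i in range(1, len(tokens), 2):
        fn, arg = LEXICON[tokens[i]], LEXICON[tokens[i + 1]]
        meaning = fn(meaning)(arg)       # apply to left, then right argument
    return meaning

print(parse("two plus three".split()))   # -> 5
```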


Title: Speech Recognition with Segmental Conditional Random Fields
Speaker: Geoff Zweig, Microsoft Research
When: Tuesday, November 1, 2011 - 12:30
Location: CSE 305

Segmental Conditional Random Fields (SCRFs) are models in which an observation sequence is segmented into labeled chunks. For example, an audio sequence may be segmented into words, or a video into scenes. These models have two key characteristics. First, in contrast to frame-based systems, long-span features can be used: features which relate an entire span of observations to a label, and which may make reference to precisely defined segment boundaries. Second, the models are log-linear in form, and allow for the convenient integration of features derived from heterogeneous information sources. This talk will provide an overview of SCRFs, and in particular their use in speech recognition. First, we describe the basic model. Then, we present results in which SCRFs are used to add new information sources to improve the performance of a state-of-the-art system. Finally, we present initial results in which the framework is used to do recognition from the ground up.
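
To make the segmental setup concrete, the following sketch scores a segmentation as a sum of log-linear segment scores and finds the best one by dynamic programming over boundaries. The feature function, labels, and weights are illustrative assumptions, not Zweig's system.

```python
import numpy as np

def segment_features(x, start, end, label):
    """Hypothetical long-span features: segment mean, length, bias."""
    feats = np.array([x[start:end].mean(), float(end - start), 1.0])
    return feats if label == "A" else -feats

def decode(x, w, labels=("A", "B"), max_len=3):
    """Best labeled segmentation under score(seg) = w . f(x, seg, label)."""
    n = len(x)
    best = np.full(n + 1, -np.inf)
    best[0] = 0.0
    back = [None] * (n + 1)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            for lab in labels:
                s = best[start] + w @ segment_features(x, start, end, lab)
                if s > best[end]:
                    best[end], back[end] = s, (start, lab)
    segs, end = [], n
    while end > 0:                       # walk back-pointers
        start, lab = back[end]
        segs.append((start, end, lab))
        end = start
    return segs[::-1]

x = np.array([0.9, 1.1, 1.0, -0.8, -1.2])
print(decode(x, w=np.array([1.0, 0.1, 0.0])))
```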


Title: Penalized Methods for Estimation of High Dimensional Directed (cyclic) Graphs
Speaker: Ali Shojaie, Department of Biostatistics, University of Washington
When: Tuesday, October 18, 2011 - 12:30
Location: CSE 305

In graphical models theory, directed edges often represent causal relationships among random variables, and discovering such relationships is of central interest in many areas of application, including biological and social problems. In addition to the challenges of estimating directed graphs posed by the high dimensionality of the space and the inadequacy of observational data for determining the direction of causation, the presence of cycles further complicates the application of statistical methodologies. I will start by describing an efficient (polynomial-time) algorithm, based on penalized likelihood methods, for estimation of directed graphs with known topological ordering, and discuss extensions of this methodology to estimation of general directed graphs, using (i) temporal observations, and (ii) joint perturbation screens and steady-state observations of the system. Asymptotic properties of the proposed estimators and their applications in the context of gene regulatory networks are also discussed.
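
As an illustration of the known-ordering case, the sketch below uses the standard reduction for Gaussian linear models: each node is lasso-regressed on the variables preceding it in the topological order, and nonzero coefficients are read off as directed edges. The data, penalty level, and threshold are illustrative, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sketch: penalized estimation with a *known* topological ordering
# decomposes into one lasso regression per node on its predecessors.
rng = np.random.default_rng(0)
n, p = 200, 5
X = np.empty((n, p))
X[:, 0] = rng.normal(size=n)
X[:, 1] = 0.8 * X[:, 0] + rng.normal(size=n)         # edge 0 -> 1
X[:, 2] = rng.normal(size=n)
X[:, 3] = -0.6 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n)
X[:, 4] = 0.7 * X[:, 3] + rng.normal(size=n)

edges = []
for j in range(1, p):                                # nodes in topological order
    fit = Lasso(alpha=0.1).fit(X[:, :j], X[:, j])    # regress on predecessors
    edges += [(i, j) for i in range(j) if abs(fit.coef_[i]) > 1e-6]
print(edges)  # expect roughly [(0, 1), (1, 3), (2, 3), (3, 4)]
```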


Title: Direct Loss Minimization for Structural Labeling with Applications to Speech Recognition
Speaker: Joseph Keshet, TTI-Chicago
When: Tuesday, October 4, 2011 - 12:30
Location: CSE 305

In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss. In binary classification one typically tries to minimize the error rate. But in structured prediction each task often has its own measure of performance, such as the BLEU score in machine translation or the word error rate in speech recognition. The most common approaches to structured prediction, structural SVMs and CRFs, do not minimize the task loss: the former minimizes a surrogate loss with no guarantees for the task loss, and the latter minimizes log loss independent of the task loss. In the first part of the talk we present a theorem stating that a certain perceptron-like learning rule, involving feature vectors derived from loss-adjusted inference, directly corresponds to the gradient of the task loss. Empirical results on phonetic alignment are presented, surpassing all previously reported results on this problem on the TIMIT corpus. In the second part of the talk, we describe a new algorithm which aims to minimize the regularized task loss. We state a PAC-Bayesian generalization bound, which gives an upper bound on the expected task loss in terms of the empirical task loss. Our algorithm is derived by finding the gradient of the PAC-Bayesian bound and minimizing it by stochastic gradient descent. The resulting algorithm is iterative and easy to implement. Experiments on phoneme recognition on the TIMIT corpus, when the task loss is chosen to be the phoneme error rate, show that our method achieves the lowest phoneme error rate compared to other discriminative and generative models with the same expressive power. Joint work with David McAllester and Tamir Hazan.
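
A minimal sketch of the perceptron-like rule in a toy multiclass setting: ordinary inference gives one prediction, loss-adjusted inference perturbs the scores toward high-loss labels, and the feature difference acts as a finite-difference estimate of the task-loss gradient. The feature map, signs, and step sizes are illustrative, not the paper's exact formulation.

```python
import numpy as np

def phi(x, y, n_classes):
    """Joint feature map: copy x into the block for class y."""
    f = np.zeros(n_classes * len(x))
    f[y * len(x):(y + 1) * len(x)] = x
    return f

def direct_loss_update(w, x, y_true, task_loss, n_classes, eps=0.1, lr=0.5):
    scores = [w @ phi(x, y, n_classes) for y in range(n_classes)]
    y_direct = int(np.argmax(scores))
    # Loss-adjusted inference: perturb scores toward high-loss labels.
    adjusted = [s + eps * task_loss(y, y_true) for y, s in enumerate(scores)]
    y_adjusted = int(np.argmax(adjusted))
    # Feature difference ~ finite-difference estimate of the loss gradient;
    # stepping this way lowers the score of high-loss labels.
    return w + (lr / eps) * (phi(x, y_direct, n_classes)
                             - phi(x, y_adjusted, n_classes))

zero_one = lambda y, y_true: float(y != y_true)
w = np.zeros(6)
for _ in range(10):
    w = direct_loss_update(w, np.array([1.0, -0.5]), 0, zero_one, n_classes=3)
print(w)
```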


Title: Learning with Exploration
Speaker: John Langford, Yahoo! Research
When: Tuesday, June 28, 2011 - 12:30
Location: CSE 305

In the real world, we are constantly presented with problems where we first see some information, then make a choice, and then get feedback about that choice. Direct examples of this occur in medical testing and online ad auctions. The standard online supervised learning setting is almost the same, except that it provides feedback about all choices. With this weaker form of feedback provided by the real world, how can we best learn?
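A small sketch of this partial-feedback loop, assuming an epsilon-greedy policy and inverse propensity scoring (IPS) for offline evaluation; the scorer, rewards, and arm count are all hypothetical.

```python
import random

# Partial feedback: we see the reward only for the action we chose,
# so logged rewards are reweighted by the probability of that choice.

def epsilon_greedy(scores, epsilon=0.1):
    n = len(scores)
    greedy = max(range(n), key=lambda i: scores[i])
    action = random.randrange(n) if random.random() < epsilon else greedy
    prob = epsilon / n + (1 - epsilon) * (action == greedy)  # propensity
    return action, prob

def ips_value(logged, target=0):
    """IPS estimate of the value of always playing `target`."""
    return sum(r * (a == target) / p for a, r, p in logged) / len(logged)

random.seed(0)
logged = []
for _ in range(1000):
    scores = [random.random() for _ in range(3)]  # stand-in learned scorer
    a, p = epsilon_greedy(scores)
    reward = 1.0 if a == 0 else 0.0               # hidden: arm 0 is best
    logged.append((a, reward, p))
print(ips_value(logged))                          # should be close to 1.0
```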


Title: Convex Relaxation and High-Dimensional Matrices
Speaker: Martin Wainwright, UC Berkeley
When: Tuesday, May 31, 2011 - 12:30
Location: EE 403

Problems that require estimating high-dimensional matrices from noisy observations arise frequently in statistics and machine learning. Examples include dimensionality reduction methods (e.g., principal components and canonical correlation), collaborative filtering and matrix completion (e.g., Netflix and other recommender systems), system identification of state-space models, and graphical model learning. When the sample size is less than the matrix dimensions, all of these problems are ill-posed, so that some type of structure is required in order to obtain interesting results. In recent years, relaxations based on the nuclear norm and other types of convex matrix regularizers have become popular. By framing a broad class of problems as special cases of matrix regression, we present a single theoretical result that provides guarantees on the accuracy of such convex relaxations. Two properties of the convex relaxation turn out to be essential: decomposability of the regularizer, and restricted strong convexity of the loss function. Our general result can be specialized to obtain various explicit and non-asymptotic bounds, among them sharp rates for noisy forms of matrix sketching, completion, and decomposition. Based on joint work with Sahand Negahban.
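For intuition, here is a minimal sketch of one such relaxation: matrix completion by alternating a gradient step on the observed-entry loss with the nuclear-norm prox, i.e., singular-value soft-thresholding. The step size, penalty, and data are illustrative, not the talk's theory.

```python
import numpy as np

def svt(Z, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
truth = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))  # rank-2 matrix
mask = rng.random(truth.shape) < 0.5                         # observed entries
X = np.zeros_like(truth)
for _ in range(200):
    X = svt(X - mask * (X - truth), tau=0.1)  # gradient step, then prox
print(np.linalg.norm(X - truth) / np.linalg.norm(truth))     # relative error
```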


Title: Learning Transformations for Search and Adaptation
Speaker: Brian Kulis, UC Berkeley
When: Tuesday, May 17, 2011 - 12:30
Location: EE 403

There are several scenarios where one needs to construct or learn a transformation of data from one space to another. One example is auto-tagging of images, where an image is mapped from image space to text space to find the most relevant words to describe an image. Another example is metric learning, where one maps all of the data from one space to another such that standard distance functions in the mapped space better capture the desired distance structure of the data. In this talk, I will describe some of my recent work in analyzing and developing techniques for the transformation learning problem. I will describe a general formulation for learning linear transformations, and then show that this formulation may be efficiently adapted to learn non-linear transformations through kernelization. Crucially, this kernelization yields transformations that can be applied inductively to new data, and the analysis generalizes a significant amount of previous research in this area. I will then describe some specific instantiations of this formulation for the problems of metric learning and domain adaptation. I will show some results on a variety of vision tasks, and will discuss connections to related work on online learning and hashing techniques.
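A toy sketch of the linear case: learn a transformation L from pairwise similarity constraints with a hinge-style update, so that distances under L shrink for similar pairs and grow for dissimilar ones. This is an illustrative heuristic, not the talk's formulation (which additionally covers kernelization).

```python
import numpy as np

def learn_metric(pairs, dim, margin=1.0, lr=0.05, epochs=100):
    """Fit L so ||L(x - y)||^2 respects similar/dissimilar constraints."""
    L = np.eye(dim)
    for _ in range(epochs):
        for x, y, similar in pairs:
            d = L @ (x - y)
            dist2 = d @ d                  # squared distance under L
            grad = 2 * np.outer(d, x - y)  # gradient of dist2 w.r.t. L
            if similar and dist2 > margin:
                L -= lr * grad             # pull similar pair together
            elif not similar and dist2 < margin:
                L += lr * grad             # push dissimilar pair apart
    return L

x0 = np.array([0.0, 0.0])
x1 = np.array([2.0, 0.2])   # labeled similar to x0
x2 = np.array([0.3, 0.4])   # labeled dissimilar to x0
L = learn_metric([(x0, x1, True), (x0, x2, False)], dim=2)
print(np.linalg.norm(L @ (x0 - x1)), np.linalg.norm(L @ (x0 - x2)))
```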


Title: Language: How Far Can We Get With Unlabeled Data?
Speaker: Chris Burges, Microsoft Research
When: Tuesday, May 10, 2011 - 12:30
Location: EE 403

Unlabeled text data is plentiful and computing power continues to grow exponentially. The accuracy difference between competing supervised learning methods tends to decline as more data becomes available, indicating that more data can be more important than cleverer algorithms. The "AskMSR" system showed that leveraging unlabeled textual Web data can yield impressive question-answering performance. Recent work has shown that vector space models combined with neural nets can solve standard natural language problems well, and very recent work using recurrent neural nets to predict sequences of characters gives very intriguing results. Inspired by all of this, we are investigating the following question: what is the simplest vector space model we can build that (1) learns from unlabeled data, (2) represents both words and sentences as vectors, (3) is generative (it can efficiently map sentence vectors to text), and (4) can still do something interesting? This work is still in early development.
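
As a strawman for desiderata (1)-(4), the sketch below uses random word vectors, represents a sentence as the sum of its word vectors, and "generates" by greedily peeling recovered words off the sentence vector. This is entirely a toy illustration, not the speakers' model.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = ["the", "cat", "sat", "on", "mat"]
vecs = {w: rng.normal(size=64) for w in vocab}   # random word vectors

def encode(sentence):
    """Sentence vector = sum of word vectors (a bag-of-words code)."""
    return sum(vecs[w] for w in sentence.split())

def decode(v, max_words=5):
    """Greedy 'generation': repeatedly peel off the best-matching word."""
    words = []
    for _ in range(max_words):
        best = max(vocab, key=lambda w: v @ vecs[w])
        if v @ vecs[best] < 0.5 * (vecs[best] @ vecs[best]):
            break                     # residual no longer explains a word
        words.append(best)
        v = v - vecs[best]
    return words

print(sorted(decode(encode("the cat sat"))))  # bag of words; order is lost
```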


Title: Sum-Product Networks: A Principled Approach to Deep Learning
Speaker: Pedro Domingos, University of Washington
When: Tuesday, April 19, 2011 - 12:30
Location: EE 403

The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs). SPNs are directed acyclic graphs with variables as leaves, sums and products as internal nodes, and weighted edges. We show that if an SPN is complete and consistent, it represents the partition function and all marginals of some graphical model, and we give semantics to its nodes. Essentially all tractable graphical models can be cast as SPNs, but SPNs are also strictly more general. We then propose learning algorithms for SPNs, based on backpropagation and EM. Experiments show that inference and learning with SPNs can be both faster and more accurate than with previous deep architectures. For example, SPNs perform face completion better than state-of-the-art deep networks for this task. SPNs also have intriguing potential connections to the architecture of the cortex. (Joint work with Hoifung Poon.)
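
A hand-evaluated example of the tractability claim: in the small SPN below, one bottom-up pass with all indicator leaves set to 1 yields the partition function, and clamping evidence yields marginals. The structure and weights are illustrative.

```python
# A tiny hand-built SPN over two binary variables. Leaves are
# indicators; an unset variable marginalizes its leaf to 1, so a
# single bottom-up pass computes the partition function or a marginal.

def leaf(var, val):
    return ("leaf", var, val)

def evaluate(node, evidence):
    kind = node[0]
    if kind == "leaf":
        _, var, val = node
        return 1.0 if evidence.get(var) in (val, None) else 0.0
    if kind == "prod":
        out = 1.0
        for child in node[1]:
            out *= evaluate(child, evidence)
        return out
    # sum node: (weight, child) pairs; completeness/consistency assumed
    return sum(w * evaluate(c, evidence) for w, c in node[1])

spn = ("sum", [
    (0.7, ("prod", [leaf("X1", 1), leaf("X2", 1)])),
    (0.3, ("prod", [leaf("X1", 0), leaf("X2", 0)])),
])

print(evaluate(spn, {}))                  # partition function: 1.0
print(evaluate(spn, {"X1": 1}))           # marginal P(X1 = 1) = 0.7
print(evaluate(spn, {"X1": 1, "X2": 0}))  # joint P(1, 0) = 0.0
```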


Title: Deep Convex Network: Architectures and Parallelizable Learning
Speaker: Li Deng, Microsoft Research
When: Tuesday, April 5, 2011 - 12:30
Location: EE 403

We recently developed a DBN-HMM (Deep Belief Net/Hidden Markov Model) architecture for large-scale speech recognition with three steps in learning and decoding: 1) stacked RBM pre-training; 2) stochastic gradient descent back-propagation for fine-tuning the DBN; and 3) tying DBN weights in the phone-bound temporal dimension and using dynamic programming to arrive at the decision rule for speech recognition (Dahl, Yu, and Deng, ICASSP 2011 and IEEE Trans. Audio, Speech and Language Proc., 2011). While this approach achieves remarkable success compared with state-of-the-art discriminatively trained HMMs, we face a seemingly insurmountable scalability problem in dealing with virtually unlimited amounts of speech training data from all sorts of acoustic environments. The main difficulty comes from the stochastic gradient learning algorithm in the fine-tuning step of the DBN: after detailed exploration, we found this step extremely difficult to parallelize over many machines with any practical advantage.

To overcome the scalability challenge, we have recently developed a novel deep learning architecture, which we call the deep convex network (DCN). The learning problem in the DCN is convex within each layer, and the algorithm is batch-based rather than stochastic, making it naturally amenable to parallelization. The construction of the DCN is related to our earlier work on building the deep hidden CRF (Yu, Wang, Deng, IEEE J. Selected Topics in Signal Proc., Dec 2010), but the learning algorithm is much simpler in each layer, enabling the use of hundreds of layers in the overall DCN architecture. We have empirically found that, without doing global error fitting over all layers, the DCN can already achieve excellent discrimination results as long as a very deep architecture is built. One version of the algorithm contains a module that uses a nonlinear random projection, inspired by work on the extreme learning machine (Huang, 2006). While this module gives reasonably good results after it is embedded in the deep structure, we found that replacing it with a restricted Boltzmann machine (RBM) yields an error reduction of over 30% in the image recognition (MNIST) experiment, achieving state-of-the-art performance on this task. More recent experiments on speech data will also be discussed.
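
A rough sketch of the stacked, per-layer-convex idea (with a random projection standing in for the RBM-initialized hidden layer): each module fits its output weights by closed-form ridge regression, and the next module sees the input concatenated with the previous module's predictions. All sizes, data, and the ridge solver are illustrative assumptions, not the DCN as published.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_module(X, Y, hidden=64, reg=1e-2):
    """One module: fixed nonlinear projection + convex ridge fit."""
    W = rng.normal(size=(X.shape[1], hidden))       # fixed projection
    H = np.tanh(X @ W)                              # hidden activations
    # Convex step: closed-form ridge regression for output weights U.
    U = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ Y)
    return W, U

def train_dcn(X, Y, n_modules=3):
    modules, inp = [], X
    for _ in range(n_modules):
        W, U = train_module(inp, Y)
        pred = np.tanh(inp @ W) @ U
        modules.append((W, U))
        inp = np.hstack([X, pred])                  # stack predictions
    return modules, pred

X = rng.normal(size=(500, 10))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(500, 1))
_, pred = train_dcn(X, Y)
print(np.mean((pred - Y) ** 2))                     # training MSE
```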