Thursday, October 8, 2015 - 10:30

Speaker: Miguel Ballesteros (Universitat Pompeu Fabra and Carnegie Mellon University)
Location: CSE 305
We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks---the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous-space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we discuss two word representations, one that models words directly and one that models characters: the former is useful for all languages, while the latter improves the handling of out-of-vocabulary words without a pretraining regime and improves the parsing of morphologically rich languages.
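
A minimal sketch of the stack LSTM idea described above, written in PyTorch. This is not the speakers' implementation; the class name, dimensions, and API are illustrative assumptions. The key point is that push and pop are constant-time stack operations, while the LSTM state at the top of the stack always summarizes the current stack contents.

```python
# Illustrative sketch of a stack LSTM (assumed PyTorch API, not the authors' code).
import torch
import torch.nn as nn

class StackLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        # Learned state representing the empty stack.
        self.h0 = nn.Parameter(torch.zeros(1, hidden_dim))
        self.c0 = nn.Parameter(torch.zeros(1, hidden_dim))
        self.states = [(self.h0, self.c0)]  # bottom-of-stack marker

    def push(self, x):
        # O(1): advance the LSTM from the current top state.
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))

    def pop(self):
        # O(1): discard the top state; the embedding reverts to the
        # summary of the remaining stack contents.
        assert len(self.states) > 1, "cannot pop the empty stack"
        return self.states.pop()

    def embedding(self):
        # Continuous-space summary of the current stack contents.
        return self.states[-1][0]

# Usage: push two word vectors, pop one, read the stack summary.
stack = StackLSTM(input_dim=8, hidden_dim=16)
stack.push(torch.randn(1, 8))
stack.push(torch.randn(1, 8))
stack.pop()
print(stack.embedding().shape)  # torch.Size([1, 16])
```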

Friday, October 2, 2015 - 10:30

Speaker: Jordan Boyd-Graber (University of Colorado at Boulder)
Location: SIG 134
In this talk, I'll discuss two real-world language applications that require "thinking on your feet": synchronous machine translation (or "machine simultaneous interpretation") and question answering (when questions are revealed one piece at a time). In both cases, effective algorithms must interrupt the input stream and decide when to provide output. In synchronous machine translation, a sentence is produced one word at a time in a foreign language, and we want to produce an English translation simultaneously (i.e., with as little delay as possible between a foreign-language word and its English translation). This is particularly difficult in verb-final languages like German or Japanese, where an English translation can barely begin until the verb is seen. Effective translation thus requires predicting unseen elements of the sentence (e.g., the main verb in German and Japanese, relative clauses in Japanese, or post-positions in Japanese). We use reinforcement learning to decide when to trust our verb predictions: the system must learn to balance incorrect translations against timely ones, and must use those predictions to translate the sentence. For question answering, we use a specially designed dataset that challenges humans: a trivia game called quiz bowl. These questions are written so that they can be interrupted by someone who knows more about the answer; that is, harder clues come at the start of the question and easier clues at the end. We create a recursive neural network to predict answers from incomplete questions and use reinforcement learning to decide when to guess. We are able to answer questions earlier than most college trivia contestants.

Bio: Jordan Boyd-Graber is an assistant professor in the University of Colorado Boulder's Computer Science Department, formerly serving as an assistant professor at the University of Maryland. He is a 2010 graduate of Princeton University, with a PhD thesis on "Linguistic Extensions of Topic Models" written under David Blei.
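
A schematic sketch of the decide-when-to-act loop that both tasks share: consume the input one piece at a time and either wait for more input or commit to an output. The function names, the confidence scorer, and the threshold below are hypothetical placeholders, not the speaker's actual models (which use reinforcement learning and a recursive neural network).

```python
# Toy illustration of incremental "wait or commit" decisions (assumed names).
from typing import Callable, Iterable, List, Tuple

def incremental_decisions(
    stream: Iterable[str],
    predict: Callable[[List[str]], str],       # e.g., guess the unseen verb / the answer
    confidence: Callable[[List[str]], float],  # learned policy's score for acting now
    threshold: float = 0.8,
) -> List[Tuple[str, object]]:
    """After each new token, either WAIT for more input or COMMIT to a prediction."""
    seen: List[str] = []
    actions: List[Tuple[str, object]] = []
    for token in stream:
        seen.append(token)
        if confidence(seen) >= threshold:
            actions.append(("COMMIT", predict(seen)))
        else:
            actions.append(("WAIT", None))
    return actions

# Toy usage: guess once enough of a quiz-bowl question has been revealed.
clues = ["hard clue", "medium clue", "easy clue"]
out = incremental_decisions(
    clues,
    predict=lambda seen: "answer",
    confidence=lambda seen: len(seen) / len(clues),
)
print(out)  # [('WAIT', None), ('WAIT', None), ('COMMIT', 'answer')]
```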