Billions of lines of source code have been written, many of which are freely available on the Internet. This code contains a wealth of implicit knowledge about how to write software that is easy to read, avoids common bugs, and uses popular libraries effectively. We want to extract this implicit knowledge by analyzing source code text. To do this, we employ the same tools from machine learning and natural language processing that have been applied successfully to natural language text. After all, source code is also a means of human communication. We present three new software engineering tools inspired by this insight:

* Naturalize, a system that learns local coding conventions. It proposes revisions to names and to formatting so as to make code more consistent. A version that uses word embeddings has shown promise toward naming methods and classes.

* A new method for mining market basket data, based on a simple generative probabilistic model. Data mining methods have been widely applied to summarize patterns of how programmers invoke libraries and APIs; our method resolves fundamental statistical pathologies that lurk in popular current data mining techniques.

* HAGGIS, a system that learns local recurring syntactic patterns, which we call idioms. HAGGIS accomplishes this using a nonparametric Bayesian tree substitution grammar, and is delicious with whisky sauce.

Bio: Charles Sutton is a Reader (equivalent to Associate Professor: http://bit.ly/1W9UhqT) at the University of Edinburgh. He is interested in a broad range of applications of probabilistic machine learning, including NLP, analysis of computer systems, software engineering, sustainable energy, and exploratory data analysis. Dr Sutton completed his PhD at the University of Massachusetts Amherst, working with Andrew McCallum. He did postdoctoral research at the University of California, Berkeley, working with Michael I. Jordan.
He is Deputy Director of the EPSRC Centre for Doctoral Training in Data Science at the University of Edinburgh.
Wednesday, October 28, 2015 - 11:00
Speaker: Charles Sutton, University of Edinburgh
Location: CSE 403
Thursday, October 8, 2015 - 10:30
Speaker: Miguel Ballesteros (Universitat Pompeu Fabra and Carnegie Mellon University)
Location: CSE 305
We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks---the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous-space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we discuss two word representations, one that models words directly and one that models characters: the former is useful for all languages, while the latter improves the handling of out-of-vocabulary words without a pretraining regime and improves the parsing of morphologically rich languages.
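The constant-time push and pop with a persistent embedding of the stack contents can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the dimensions, initialization, and single-layer LSTM cell below are assumptions made for illustration. The key idea is that each stack element stores its own (hidden, cell) state pair, so popping simply reverts to the previous pair.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StackLSTM:
    """Toy stack LSTM: a stack whose top carries an LSTM summary of its contents."""

    def __init__(self, dim, rng):
        self.dim = dim
        # Single-layer LSTM cell parameters for the four gates
        # (input, forget, output, candidate); randomly initialized for the sketch.
        self.W = rng.normal(0.0, 0.1, (4 * dim, 2 * dim))
        self.b = np.zeros(4 * dim)
        # Stack of (h, c) states; index 0 is the empty-stack state.
        self.states = [(np.zeros(dim), np.zeros(dim))]

    def push(self, x):
        # O(1): run one LSTM step from the current top state and append.
        h_prev, c_prev = self.states[-1]
        z = self.W @ np.concatenate([x, h_prev]) + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        self.states.append((h, c))

    def pop(self):
        # O(1): discard the top state; the previous summary becomes current.
        self.states.pop()

    def summary(self):
        # Continuous-space embedding of the current stack contents.
        return self.states[-1][0]

rng = np.random.default_rng(0)
s = StackLSTM(8, rng)
empty = s.summary().copy()
s.push(rng.normal(size=8))
s.push(rng.normal(size=8))
s.pop()
s.pop()
# After popping everything, the summary reverts to the empty-stack state.
assert np.allclose(s.summary(), empty)
```

Because popped states are simply discarded rather than recomputed, the summary after any sequence of pushes and pops depends only on the elements currently on the stack, which is what lets the parser condition on the stack of partially built tree fragments.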
Friday, October 2, 2015 - 10:30
Speaker: Jordan Boyd-Graber (University of Colorado at Boulder)
Location: SIG 134
In this talk, I'll discuss two real-world language applications that require "thinking on your feet": synchronous machine translation (or "machine simultaneous interpretation") and question answering (when questions are revealed one piece at a time). In both cases, effective algorithms for these tasks must interrupt the input stream and decide when to provide output.

Synchronous machine translation is when a sentence is being produced one word at a time in a foreign language and we want to produce a translation in English simultaneously (i.e., with as little delay as possible between a foreign-language word and its English translation). This is particularly difficult in verb-final languages like German or Japanese, where an English translation can barely begin until the verb is seen. Effective translation thus requires predictions of unseen elements of the sentence (e.g., the main verb in German and Japanese, relative clauses in Japanese, or post-positions in Japanese). We use reinforcement learning to decide when to trust our verb predictions; the system must learn to balance incorrect translations against timely ones, and must use those predictions to translate the sentence.

For question answering, we use a specially designed dataset that challenges humans: a trivia game called quiz bowl. These questions are written so that they can be interrupted by someone who knows more about the answer; that is, harder clues are at the start of the question and easier clues are at the end. We create a recursive neural network to predict answers from incomplete questions and use reinforcement learning to decide when to guess. We are able to answer questions earlier in the question than most college trivia contestants.

Bio: Jordan Boyd-Graber is an assistant professor in the University of Colorado Boulder's Computer Science Department, formerly serving as an assistant professor at the University of Maryland.
He is a 2010 graduate of Princeton University, with a PhD thesis on "Linguistic Extensions of Topic Models" working under David Blei.
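The read-predict-buzz loop described in the quiz bowl abstract can be sketched with a toy threshold policy. The scorer and the threshold below are made up for illustration; in the actual work, the content model is a recursive neural network and the buzzing policy is learned with reinforcement learning rather than hand-set.

```python
def answer_incrementally(words, score_fn, threshold=0.8):
    """Read the question one word at a time; buzz once confidence is high enough.

    Returns the guess and how many words were seen before buzzing.
    """
    guess, confidence = None, 0.0
    seen = []
    for w in words:
        seen.append(w)
        guess, confidence = score_fn(seen)
        if confidence >= threshold:
            return guess, len(seen)  # buzz early on the harder clues
    return guess, len(seen)  # forced to answer at the end of the question

# A stand-in scorer (hypothetical): confidence grows as more clue words arrive.
def toy_scorer(prefix):
    clue_words = {"austria", "composer", "eine", "kleine", "nachtmusik"}
    hits = sum(1 for w in prefix if w.lower() in clue_words)
    return "Mozart", min(1.0, hits / 3)

question = "This composer from Austria wrote Eine kleine Nachtmusik".split()
guess, position = answer_incrementally(question, toy_scorer)
# Buzzes after "Eine" (the third clue word), before the question ends.
print(guess, position)  # Mozart 6
```

The interesting trade-off lives in the threshold: buzzing early risks a wrong answer, while waiting guarantees an opponent who knows the answer will buzz first, which is exactly the balance the reinforcement learner must strike.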