Title: Span-based Neural Structured Prediction

Advisor: Luke Zettlemoyer

Supervisory Committee: Luke Zettlemoyer (Chair), Mari Ostendorf (GSR, EE), Yejin Choi, and Noah Smith

Abstract: 

A long-standing goal in artificial intelligence is for machines to understand natural language. With ever-growing amounts of data in the world, it is crucial to automate many aspects of language understanding so that users can make sense of this data in the face of information overload. The main challenge stems from the fact that language, whether speech or text, is inherently unstructured. Without programmatic access to the semantics of natural language, it is difficult to build general, robust systems that are usable in practice.

Toward this goal, we propose a series of neural structured prediction algorithms for natural language processing. In particular, we address a challenge common to all such algorithms: the space of possible output structures can be extremely large, and inference in this space can be intractable. Although neural representations may seem incompatible with the dynamic programs used in traditional structured prediction, we can leverage these rich representations to learn more accurate models while using simpler or lazier inference.

We focus on inference algorithms over the most basic substructure of language: spans of text. We present state-of-the-art models both for tasks that require modeling the internal structure of spans, such as syntactic parsing, and for tasks that require modeling the structure between spans, such as question answering and coreference resolution. The proposed techniques apply to many structured prediction problems, and we expect them to further push the limits of neural structured prediction for natural language processing.
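
To make the span-based setup concrete, the short Python sketch below enumerates candidate spans of a sentence and scores each one from a representation built from its endpoint token vectors. This is an illustrative sketch only, not the implementation described in the talk: the endpoint-concatenation embedding, the linear scorer w, and the max_width pruning parameter are simplifying assumptions chosen for brevity (real span-based models typically use learned feed-forward scorers over contextualized token representations).

import numpy as np

def enumerate_spans(n_tokens: int, max_width: int = 10):
    # All (start, end) token spans up to max_width: the basic
    # search space that span-based models score.
    return [(i, j)
            for i in range(n_tokens)
            for j in range(i, min(i + max_width, n_tokens))]

def span_embedding(token_vecs: np.ndarray, start: int, end: int) -> np.ndarray:
    # Represent a span by concatenating its endpoint vectors,
    # one common choice in span-based neural models (assumption here).
    return np.concatenate([token_vecs[start], token_vecs[end]])

def score_spans(token_vecs: np.ndarray, w: np.ndarray, max_width: int = 10):
    # Score every candidate span with a linear scorer; a trained model
    # would replace w with a learned network.
    spans = enumerate_spans(len(token_vecs), max_width)
    return {(s, e): float(w @ span_embedding(token_vecs, s, e))
            for (s, e) in spans}

# Toy usage: 5 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(5, 4))
weights = rng.normal(size=8)  # matches the concatenated endpoint size
scores = score_spans(vecs, weights, max_width=3)
best = max(scores, key=scores.get)
print(best, scores[best])

Because the candidate set grows quadratically in sentence length, bounding the span width (and, in practice, pruning low-scoring spans) is what keeps inference over this space tractable.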

Place: 
CSE 403
When: 
Wednesday, December 6, 2017 - 11:30 to 13:30