Title: Neural Models for Large-Scale Semantic Role Labeling

Advisor: Luke Zettlemoyer

Supervisory Committee: Luke Zettlemoyer (Chair), Mari Ostendorf (GSR, EE), Yejin Choi, and Noah Smith

Abstract: Learning semantic parsers to predict the predicate-argument structure of a sentence is an important open problem in Natural Language Processing. A key challenge in this task is the sparsity of labeled data: a given predicate-argument instance may occur only a handful of times in the training set. This problem is exacerbated by the fact that traditional predicate-argument annotation schemes are complicated and their annotations difficult to gather, requiring trained expert annotators capable of executing the complex task specifications. In this talk, I'll discuss two projects that address these issues. First, I'll present neural models that embed SRL labels in a shared embedding space, achieving state-of-the-art results through multitask training. Second, I'll discuss large-scale QA-SRL parsing: a new, large dataset of crowdsourced question-answer-based SRL annotations, gathered at a fraction of the cost of traditional SRL annotation methods, together with accompanying neural models that produce high-quality predicate-argument structures.
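To make the first idea concrete, below is a minimal, illustrative sketch (not the talk's actual model) of scoring argument spans against a shared table of SRL label embeddings, so that labels from different annotation schemes can share parameters under multitask training. All class names, dimensions, and the PyTorch framing here are assumptions for illustration only.

# Minimal sketch, assuming PyTorch: candidate argument-span representations
# are projected into the same space as a shared SRL label embedding table,
# and labeled by dot-product similarity. All names/dims are hypothetical.
import torch
import torch.nn as nn

class SharedLabelScorer(nn.Module):
    def __init__(self, num_labels: int, span_dim: int, label_dim: int):
        super().__init__()
        # One label embedding table shared across tasks/annotation schemes.
        self.label_embeddings = nn.Embedding(num_labels, label_dim)
        # Project span representations into the label embedding space.
        self.project = nn.Linear(span_dim, label_dim)

    def forward(self, span_repr: torch.Tensor) -> torch.Tensor:
        # span_repr: (batch, span_dim) -> scores: (batch, num_labels)
        projected = self.project(span_repr)
        return projected @ self.label_embeddings.weight.T

scorer = SharedLabelScorer(num_labels=20, span_dim=128, label_dim=64)
scores = scorer(torch.randn(4, 128))  # 4 candidate spans
print(scores.shape)                   # torch.Size([4, 20])

Because the label table is shared, gradients from every task update the same embeddings, which is one simple way multitask training can combat label sparsity.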

Place: CSE 203
When: Tuesday, August 14, 2018, 12:00