Title: Simple and Effective Multi-Paragraph Reading Comprehension

Advisors: Luke Zettlemoyer and Matt Gardner (AI2)

Abstract: Recent work has shown that deep learning models can be used to answer questions by reading a related paragraph. In this talk we consider the task of extending these models to work when entire documents are given as input. Our proposed solution trains models to produce well-calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement over the 56.7 F1 of the previous best system. A demo of our system is available at documentqa.allenai.org.
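The shared-normalization idea above can be illustrated with a minimal sketch (not the authors' code; the function name and inputs are hypothetical): instead of normalizing answer-span scores within each paragraph separately, scores from all sampled paragraphs of a question share one softmax denominator, so confidence scores stay comparable across paragraphs.

```python
import numpy as np

def shared_norm_loss(paragraph_scores, answer_mask):
    """Sketch of a shared-normalization objective.

    paragraph_scores: list of 1-D arrays, raw answer-span scores for each
        sampled paragraph of the same question.
    answer_mask: list of 1-D boolean arrays marking the correct spans.
    """
    scores = np.concatenate(paragraph_scores)
    mask = np.concatenate(answer_mask)
    # Shared denominator: log-sum-exp over candidate spans from ALL
    # sampled paragraphs, not one softmax per paragraph.
    log_z = np.logaddexp.reduce(scores)
    # Maximize the total probability assigned to correct spans.
    log_p_correct = np.logaddexp.reduce(scores[mask]) - log_z
    return -log_p_correct
```

Because every paragraph competes in the same normalization, a paragraph that does not contain the answer is pushed toward low scores globally, which is what makes the per-paragraph confidences usable for document-level answer selection.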

CSE 203
Monday, November 27, 2017 - 14:00 to 15:30