Abstract: The task of machine reading comprehension, asking a machine questions about a passage of text to probe its understanding, has seen a dramatic surge in popularity in recent years. According to some metrics, we now have machines that perform as well as humans on this task. Yet no serious researcher actually believes that machines can read, despite their performance on some reading comprehension benchmarks. What would it take to convince ourselves that a machine understood a passage of text? Can we devise a benchmark that would let us measure progress towards that goal? In this talk I try to outline what such a benchmark might look like, and share some initial progress towards building one.

Bio: Matt is a senior research scientist at the Allen Institute for AI on the AllenNLP team. His research focuses primarily on getting computers to read and answer questions, dealing both with open-domain reading comprehension and with understanding question semantics in terms of some formal grounding (semantic parsing). He is particularly interested in cases where these two problems intersect, requiring some kind of reasoning over open-domain text. He is the original architect of the AllenNLP toolkit, and he co-hosts the NLP Highlights podcast with Waleed Ammar and Pradeep Dasigi.
Speaker: Matt Gardner, AI2 Irvine
Time/Date: Thursday, December 12, 2019 - 12:00
Location: CSE 305