Abstract: Confidently making progress on multilingual modeling requires challenging, trustworthy evaluations. We present TyDi QA, a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology—the set of linguistic features each language expresses—such that we expect models performing well on this set to generalize across a large number of the world’s languages.

We present a quantitative analysis of the data quality and example-level qualitative linguistic analyses of observed language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don’t know the answer yet, and the data is collected directly in each language without the use of translation.

In this talk, we’ll argue for the immediate practical value of building QA systems that work for people who don’t speak English, the modeling challenge of coping with lower-resource languages, and the scientific value of learning what it takes to model the variety presented by human languages.

Bio: Jonathan Clark is a research scientist at Google Research in Seattle. His research goal is to build NLP systems that are helpful to people regardless of what language they speak. Previously, he was a member of the machine translation team at Microsoft Research, helping to build Skype Translator and Microsoft Custom Translator. He holds a Ph.D. from Carnegie Mellon University.

GitHub (baselines & eval code): https://github.com/google-research-datasets/tydiqa
Website & glossed examples: https://ai.google.com/research/tydiqa/
Paper: https://arxiv.org/abs/2003.05002
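
For those who want to explore the data before the talk, below is a minimal sketch of how one might inspect a few primary-task examples from the GitHub release above. The file name and the JSON field names used here ("language", "question_text", "annotations", "passage_answer", "candidate_index") are assumptions based on the public data format; consult the repository's README for the authoritative schema.

# Minimal sketch: peek at a few TyDi QA primary-task examples.
# The file name and field names are assumptions; check the repo's README.
import gzip
import json

path = "tydiqa-v1.0-dev.jsonl.gz"  # assumed name of the gzipped JSONL dev file

with gzip.open(path, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        example = json.loads(line)
        # Each line is one (question, Wikipedia article) pair in one language.
        has_passage_answer = any(
            a.get("passage_answer", {}).get("candidate_index", -1) >= 0
            for a in example.get("annotations", [])
        )
        print(example["language"], "|", example["question_text"],
              "| passage answer annotated:", has_passage_answer)
        if i >= 4:  # show only the first five examples
            break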
Speaker: Jonathan Clark, Google AI
Time/Date: Thursday, May 14, 2020 - 11:00
Location: Virtual