Abstract: To address knowledge-rich tasks such as question answering and fact checking, NLP models should combine knowledge from multiple sources – memorized knowledge in the language model and passages retrieved from an evidence corpus. In contrast, prior work has made the simplifying assumption that knowledge sources are consistent with each other, up-to-date, and available. In this talk, I will discuss challenges and opportunities for building an NLP model in a real world that is open-ended and constantly changing. I will first describe how existing models behave under different types of knowledge conflicts. Then, I will propose paths forward for handling knowledge conflicts – teaching models to detect conflicting information and generating paragraph-level answers that can elaborate on multiple viewpoints.

Bio: Eunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin and a visiting researcher at Google AI. Her research spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a rich real-world context. She received a Ph.D. from the University of Washington and a B.A. from Cornell University. She is a recipient of a Facebook Research Fellowship, a Google Faculty Research Award, and an outstanding paper award at EMNLP 2021.
Eunsol Choi
Thursday, December 1, 2022 - 10:00
Allen Center 305 and Zoom https://ucsc.zoom.us/j/92375270528?pwd=M0NGY0dVWEZKT3JlWHpVUGlIWG83Zz09