Abstract: Today, the most common approach for training neural networks involves minimizing task loss on large datasets. While this agenda has been undeniably successful, we may not have the luxury of annotated data for every task or domain. Reducing dependence on labeled examples may require us to rethink how we supervise models. In this talk, I will describe recent work where we use knowledge to inform neural networks without introducing additional parameters. Declarative rules in logic can be systematically compiled into computation graphs that augment the structure of neural models, and also into regularizers that can use labeled or unlabeled examples. I will present experiments on text understanding tasks, which show that such declaratively constrained neural networks are not only more accurate, but also more consistent in their predictions across examples. This is joint work with my students Tao Li, Vivek Gupta and Maitrey Mehta.
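
As a flavor of the idea of compiling rules into regularizers, here is a minimal, hypothetical sketch in PyTorch (not the speaker's actual code): it relaxes an invented rule of the form "if the model labels a token ENTITY, it should also label it NOUN" into a differentiable penalty that can be added to the task loss, and that unlabeled examples can contribute to as well. The rule, function names, and weighting are illustrative assumptions only.

```python
import torch

def implication_regularizer(p_entity: torch.Tensor, p_noun: torch.Tensor) -> torch.Tensor:
    """Soft relaxation of the (hypothetical) rule ENTITY(x) -> NOUN(x).

    One common soft-logic relaxation penalizes the degree of violation,
    max(0, p_entity - p_noun), averaged over the batch.
    """
    violation = torch.clamp(p_entity - p_noun, min=0.0)
    return violation.mean()

# Stand-ins for model outputs; a real model would produce these logits.
entity_logits = torch.randn(8, requires_grad=True)
noun_logits = torch.randn(8, requires_grad=True)
p_entity = torch.sigmoid(entity_logits)
p_noun = torch.sigmoid(noun_logits)

# Add the constraint penalty to the supervised loss (placeholder here).
# On unlabeled data, only the regularizer term would apply.
task_loss = torch.tensor(0.0)
loss = task_loss + 0.1 * implication_regularizer(p_entity, p_noun)
loss.backward()
```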

Bio: Vivek Srikumar is an assistant professor in the School of Computing at the University of Utah. His research lies in the areas of natural language processing and machine learning and has primarily been driven by questions arising from the need to reason about textual data with limited explicit supervision and to scale NLP to large problems. His work has been published in various AI, NLP and machine learning venues and received the best paper award at EMNLP 2014. His work has been supported by NSF and BSF, and by gifts from Google, Nvidia and Intel. He obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 2013 and was a post-doctoral scholar at Stanford University.

Speaker: Vivek Srikumar
Time/Date: Wednesday, October 9, 2019 - 12:00
Location: CSE 305