Monday, December 4, 2017 - 10:30
Title: Structured Prediction for Semantic Parsing
Abstract: Mapping unstructured text to structured meaning representations, semantic parsing covers a wide variety of problems in natural language understanding and interaction. Common applications include translating human commands into executable programs, as well as question answering over databases or semi-structured tables. Real-world semantic parsing settings, such as the large space of legitimate semantic parses, weak or mixed supervision signals, and complex semantic/syntactic constraints, pose interesting yet difficult structured prediction challenges. In this talk, I will give an overview of these technical challenges and present a case study on sequential question answering: answering sequences of simple but inter-related questions using semi-structured tables from Wikipedia. In particular, I will describe our dynamic neural semantic parsing framework, trained using a weakly supervised, reward-guided search, which effectively leverages the sequential context to outperform state-of-the-art QA systems designed to answer highly complex questions. The talk will conclude with a discussion of open problems and promising directions for future research.
Bio: Scott Wen-tau Yih is a Principal Research Scientist at the Allen Institute for Artificial Intelligence (AI2). His research interests include natural language processing, machine learning, and information retrieval. Yih received his Ph.D. in computer science from the University of Illinois at Urbana-Champaign. His work on joint inference using integer linear programming (ILP) has been widely adopted in the NLP community for numerous structured prediction problems. Prior to joining AI2, Yih spent 12 years at Microsoft Research, working on a variety of projects including email spam filtering, keyword extraction, and search & ad relevance. His recent work focuses on continuous representations and neural network models, with applications in knowledge base embedding, semantic parsing, and question answering. Yih received the best paper award at CoNLL-2011 and an outstanding paper award at ACL-2015, and in recent years has served as area co-chair (HLT-NAACL-12, ACL-14, EMNLP-16/17), program co-chair (CEAS-09, CoNLL-14), and action/associate editor (TACL, JAIR). He is also a co-presenter of several tutorials on topics including Semantic Role Labeling (NAACL-HLT-06, AAAI-07), Deep Learning for NLP (SLT-14, NAACL-HLT-15, IJCAI-16), and NLP for Precision Medicine (ACL-17).
Friday, November 3, 2017 - 10:30
Abstract: I will summarize an approach to neural network design that enables symbolic structures to be encoded as distributed neural vectors and allows symbolic computations of interest to AI and computational linguistics to be carried out through massively parallel neural operations. The approach leads to new grammar formalisms rooted in neural computation; these have had significant impact within linguistic theory (especially phonology). I will present theoretical results as well as recent experimental results applying the method, with deep learning, to image caption generation and to NLP for question answering. In each case, the model learns aspects of syntax in the service of its NLP task.
Bio: Paul Smolensky is a Partner Researcher in the Deep Learning group of the Microsoft Research AI lab in Redmond, WA, as well as Krieger-Eisenhower Professor of Cognitive Science at Johns Hopkins University in Baltimore, MD. His research addresses the unification of symbolic and neural computation, with a focus on the theory of grammar. This work led to Optimality Theory, which he co-created with Alan Prince (1993), an outgrowth of Harmonic Grammar, which he co-developed with Géraldine Legendre and Yoshiro Miyata (1990). He received the 2005 D. E. Rumelhart Prize for Outstanding Contributions to the Formal Analysis of Human Cognition.
Friday, September 29, 2017 - 10:30
Bio: Margaret Mitchell is a Senior Research Scientist in Google's Research & Machine Intelligence group, working on artificial intelligence. Her research generally involves vision-language models and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. This includes research on helping computers communicate based on what they can process, as well as projects to create assistive and clinical technology from the state of the art in AI.