Much recent research on semantic parsing has focused on learning to map natural-language sentences to graphs that represent their meaning, such as Abstract Meaning Representations (AMRs) and MRS graphs. In this talk, I will discuss methods for semantic parsing into graphs that aim to make the compositional structure of the semantic representations explicit. This connects semantic parsing to a fundamental principle of linguistic semantics and should improve generalization to unseen data, and with it parsing accuracy.

I will first introduce two graph algebras, the HR algebra from the theoretical literature and our own apply-modify (AM) algebra, and show how to define symbolic grammars that map between strings and graphs using these algebras. Compared to the HR algebra, the AM algebra drastically reduces the number of possible compositional structures for a given graph, while still permitting linguistically plausible analyses of a variety of nontrivial semantic phenomena.
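To make this concrete, here is a minimal, runnable Python sketch of the algebra's two operations, apply (APP) and modify (MOD), over a deliberately simplified graph representation. The SGraph class, the node-id conventions, and the example fragments are my own illustrative assumptions, not the definitions from the talk; in particular, the real algebra works over s-graphs with typed sources, which this toy version ignores.

```python
# Toy sketch of the AM algebra's two operations, APP (apply) and MOD
# (modify). The SGraph representation and all names below are my own
# simplifications for illustration; the actual algebra is defined over
# s-graphs with typed sources.

class SGraph:
    """A graph with a designated root and named sources ("open slots")."""
    def __init__(self, nodes, edges, root, sources):
        self.nodes = dict(nodes)      # node id -> label (None = placeholder)
        self.edges = set(edges)       # (source id, edge label, target id)
        self.root = root
        self.sources = dict(sources)  # source name (e.g. "s") -> node id

def _disjoint(g, tag):
    """Rename node ids so two graphs can be unioned without clashes."""
    r = lambda n: f"{n}.{tag}"
    return SGraph({r(n): l for n, l in g.nodes.items()},
                  {(r(a), l, r(b)) for a, l, b in g.edges},
                  r(g.root),
                  {s: r(n) for s, n in g.sources.items()})

def _fuse(g, old, new):
    """Identify node `old` with node `new`, keeping `new`'s label if any."""
    f = lambda n: new if n == old else n
    if g.nodes[new] is None:
        g.nodes[new] = g.nodes[old]
    del g.nodes[old]
    g.edges = {(f(a), l, f(b)) for a, l, b in g.edges}
    g.root = f(g.root)
    g.sources = {s: f(n) for s, n in g.sources.items()}

def app(head, arg, src):
    """APP_src(head, arg): fill head's `src` slot with arg's root."""
    head, arg = _disjoint(head, "h"), _disjoint(arg, "a")
    g = SGraph({**head.nodes, **arg.nodes}, head.edges | arg.edges,
               head.root, {**arg.sources, **head.sources})
    _fuse(g, g.sources.pop(src), arg.root)   # the slot is now filled
    return g

def mod(head, modifier, src):
    """MOD_src(head, modifier): fuse the modifier's `src` node with
    head's root; the root and head's remaining slots stay as they are."""
    head, modifier = _disjoint(head, "h"), _disjoint(modifier, "m")
    g = SGraph({**head.nodes, **modifier.nodes},
               head.edges | modifier.edges, head.root, head.sources)
    _fuse(g, modifier.sources[src], head.root)
    return g

# "The cat sleeps soundly": APP_s fills sleep's subject slot with "cat",
# then MOD_m attaches the adverb to the resulting root.
sleep = SGraph({"e": "sleep", "x": None}, {("e", "ARG0", "x")}, "e", {"s": "x"})
cat   = SGraph({"c": "cat"}, set(), "c", {})
adv   = SGraph({"a": "sound", "y": None}, {("a", "ARG1", "y")}, "a", {"m": "y"})
g = mod(app(sleep, cat, "s"), adv, "m")
print(sorted(g.edges))
```

Even in this toy form, one can see why the AM algebra admits far fewer compositional structures than the HR algebra: each APP consumes a named slot exactly once, so most ways of combining the same fragments are simply not well-formed terms.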

I will then report on a neural semantic parser that learns to map sentences to terms over the AM algebra. This parser combines a neural supertagger, which predicts an elementary graph for each word of the sentence, with a neural dependency parser, which predicts the structure of the AM term. By constraining the search to AM terms that also satisfy certain simple type constraints, we achieve state-of-the-art (pre-ACL) accuracy in AMR parsing. One advantage of the model is that it generalizes neatly to other semantic parsing problems, such as parsing into MRS or DRT.
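As a rough sketch of how these scores might combine, consider the toy code below (my own illustration, not the actual system): the supertagger contributes a log-probability per token for its elementary graph, the dependency model contributes a log-probability per edge for an AM operation, and a simple type check discards ill-formed terms before they are ever scored. The fragment format, the APP_/MOD_ operation names, and all numbers are assumptions made up for the example.

```python
# Hedged sketch (my own toy code, not the parser from the talk): score a
# candidate AM term as the sum of supertagger and dependency log-probs,
# discarding any term that fails a simple type check.

import math

def type_ok(fragments, edges):
    """Toy type check: an APP_src edge must fill a source `src` that the
    head fragment actually provides, and each slot is filled at most once."""
    filled = set()
    for head, dep, op in edges:
        if op.startswith("APP_"):
            src = op[len("APP_"):]
            if src not in fragments[head]["sources"] or (head, src) in filled:
                return False
            filled.add((head, src))
    return True

def term_score(fragments, edges, tag_logp, edge_logp):
    """Log-score of one candidate AM term; -inf if it is ill-typed,
    which removes it from the search space entirely."""
    if not type_ok(fragments, edges):
        return -math.inf
    return (sum(tag_logp[i][f["name"]] for i, f in enumerate(fragments))
            + sum(edge_logp[h, d][op] for h, d, op in edges))

# "the cat sleeps": the verb takes the noun as subject via APP_s.
fragments = [{"name": "the",   "sources": set()},
             {"name": "cat",   "sources": set()},
             {"name": "sleep", "sources": {"s"}}]
edges     = [(2, 1, "APP_s")]
tag_logp  = [{"the": -0.5}, {"cat": -0.25}, {"sleep": -0.25}]
edge_logp = {(2, 1): {"APP_s": -0.5, "MOD_m": -2.0}}
print(term_score(fragments, edges, tag_logp, edge_logp))          # -1.5
print(term_score(fragments, [(2, 1, "APP_o")], tag_logp, edge_logp))  # -inf
```

The actual parser searches over well-typed terms directly rather than scoring candidates one by one; the point of the sketch is only that the type constraint acts as a hard filter layered on top of the two neural scores.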

Bio

Alexander Koller is a Professor of Computational Linguistics in the Department of Language Science and Technology at Saarland University. He also holds a joint appointment with Facebook AI Research.

Speaker: Alexander Koller
Time/Date: Saturday, July 21, 2018 - 06:40
Location: CSE 691 (Gates Commons)