Tuesday, October 16, 2018 - 10:30
A review of recent advances in AI, and why, despite genuine progress, we may not be on the right track towards general AI, followed by a very tentative discussion of what might be better.
Gary Marcus, a scientist, bestselling author, and entrepreneur, was CEO and founder of the machine-learning startup Geometric Intelligence, which was recently acquired by Uber.
As a Professor of Psychology and Neural Science at NYU, he has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, and artificial intelligence, often in leading journals such as Science and Nature.
Tuesday, October 9, 2018 - 10:30
The big data boom of recent years covers a wide spectrum of heterogeneous data types, from text to image, video, speech, and multimedia. Most of the valuable information in such "big data" is encoded in natural language, which makes it accessible to some people (those who can read that particular language) but much less amenable to computer processing beyond simple keyword search.
My research focuses on cross-source Information Extraction (IE) at massive scale. It aims to create the next generation of information access, in which humans can communicate with computers in any natural language, going beyond keyword search, and computers can discover accurate, concise, and trustworthy information embedded in big data from heterogeneous sources.
The goal of Information Extraction (IE) is to extract structured facts from a wide spectrum of heterogeneous unstructured data types. Traditional IE techniques are limited to a particular source X (a specific language, a specific domain, a small set of pre-defined fact types, a single data modality, ...). When moving from X to a new source Y, we must start from scratch, annotating a substantial amount of training data and developing Y-specific extraction capabilities.
In this talk, I will present a new Universal IE paradigm that combines the merits of traditional IE (high quality and fine granularity) and Open IE (high scalability). This framework is able to discover schemas and extract facts from any input data in any domain, without any annotated training data, by integrating distributional semantics and symbolic semantics. It can also be extended to thousands of languages, thousands of fact types, and multiple data modalities (text, images, videos) by constructing a multi-lingual, multi-media, multi-task common semantic space and then performing zero-shot transfer learning across sources. The resulting system was selected for DARPA's 60th Anniversary.
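The zero-shot transfer idea can be caricatured in a few lines: once trigger words from different languages are embedded into one shared semantic space, label prototypes trained on a single source language can classify triggers from any other language by nearest-prototype search, with no target-language annotation. The following is a minimal sketch; all vectors, words, and event-type names are invented placeholders, not the actual system:

```python
import numpy as np

# Toy shared semantic space: words from two "sources" (here English and
# Spanish) embedded into the same 3-d space. All vectors are invented
# for illustration; the real system learns this space from data.
shared_space = {
    ("en", "attack"):   np.array([0.9, 0.1, 0.0]),
    ("en", "election"): np.array([0.1, 0.9, 0.0]),
    ("es", "ataque"):   np.array([0.88, 0.12, 0.05]),
    ("es", "eleccion"): np.array([0.15, 0.85, 0.10]),
}

# Event-type prototypes "trained" only on the English source.
prototypes = {
    "Conflict.Attack": shared_space[("en", "attack")],
    "Personnel.Elect": shared_space[("en", "election")],
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_label(source, word):
    """Label a trigger from any source by its nearest English prototype."""
    v = shared_space[(source, word)]
    return max(prototypes, key=lambda t: cosine(v, prototypes[t]))

# Spanish triggers get classified with no Spanish training data.
print(zero_shot_label("es", "ataque"))    # -> Conflict.Attack
print(zero_shot_label("es", "eleccion"))  # -> Personnel.Elect
```

The point of the sketch is only the shape of the computation: all cross-source effort goes into building the shared space, after which the per-type classifiers transfer for free.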
Heng Ji is the Edward P. Hamilton Chair Professor in Computer Science at Rensselaer Polytechnic Institute. She received her Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Information Extraction and Knowledge Base Population. She was selected as a "Young Scientist" and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016, 2017, and 2018. She received the "AI's 10 to Watch" Award from IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2015, 2016, and 2017. She has coordinated the NIST TAC Knowledge Base Population task since 2010 and has led various government-sponsored research projects, including the DARPA DEFT TinkerBell team and the ARL NS-CTA Knowledge Networks Construction task. She has served as a panelist for US Air Force 2030 and as Program Committee Co-Chair of several conferences, including NAACL-HLT 2018.
Saturday, July 21, 2018 - 06:40
Much recent research on semantic parsing has focused on learning to map natural-language sentences to graphs that represent the meaning of the sentence, such as Abstract Meaning Representations (AMRs) and MRS graphs. In this talk, I will discuss methods for semantic parsing into graphs that aim to make the compositional structure of the semantic representations explicit. This connects semantic parsing to a fundamental principle of linguistic semantics and should improve generalization to unseen data, and with it accuracy.
I will first introduce two graph algebras, the HR algebra from the theory literature and our own apply-modify (AM) algebra, and show how to define symbolic grammars that map between strings and graphs using these algebras. Compared to the HR algebra, the AM algebra drastically reduces the number of possible compositional structures for a given graph, yet it still permits linguistically plausible analyses of a variety of nontrivial semantic phenomena.
I will then report on a neural semantic parser which learns to map sentences into terms over the AM algebra. This semantic parser combines a neural supertagger (which predicts elementary graphs for each word in the sentence) with a neural dependency parser (which predicts the structure of the AM terms). By constraining the search to AM terms which also satisfy certain simple type constraints, we achieve state-of-the-art (pre-ACL) accuracy in AMR parsing. One advantage of the model is that it generalizes neatly to other semantic parsing problems, such as semantic parsing into MRS or DRT.
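The division of labor described above, a supertagger proposing typed elementary graphs per word, a dependency parser proposing attachments, and a type check filtering out ill-formed combinations, can be sketched in miniature. Everything below (graph names, source sets, scores) is invented and drastically simplified for illustration; it is not the actual parser:

```python
from itertools import product

# Toy supertag lexicon: each word gets candidate elementary graphs,
# abbreviated as (name, open_sources) pairs. Open sources are the
# argument slots (AM-algebra sources such as "s" for subject) that
# the graph still needs to have filled.
supertags = {
    "cat":    [("cat", frozenset())],
    "sleeps": [("sleep-trans", frozenset({"s", "o"})),   # wrong type here
               ("sleep-intrans", frozenset({"s"}))],
}

# Toy scores standing in for the neural supertagger and dependency parser.
tag_score = {"cat": 1.0, "sleep-trans": 0.95, "sleep-intrans": 0.9}
edge_score = {("sleeps", "cat", "s"): 0.8}   # proposed APP_s attachment

def well_typed(open_sources, filled):
    # Type constraint: exactly the open sources get filled, no more, no less.
    return open_sources == filled

def best_parse():
    """Highest-scoring supertag/attachment combination that type-checks."""
    best, best_score = None, float("-inf")
    for (g_cat, open_cat), (g_vb, open_vb) in product(
            supertags["cat"], supertags["sleeps"]):
        # The only attachment on offer fills the verb's "s" slot with "cat".
        filled = frozenset({"s"})
        if not (well_typed(open_vb, filled) and well_typed(open_cat, frozenset())):
            continue  # discard ill-typed AM terms
        score = (tag_score[g_cat] + tag_score[g_vb]
                 + edge_score[("sleeps", "cat", "s")])
        if score > best_score:
            best, best_score = f"APP_s({g_vb}, {g_cat})", score
    return best

print(best_parse())  # -> APP_s(sleep-intrans, cat)
```

Note that the transitive candidate has the higher supertag score, but it leaves an "o" source unfilled and is rejected by the type check; this is the sense in which the simple type constraints prune the search before any scores are compared.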
Alexander Koller is a Professor of Computational Linguistics in the Department of Language Science and Technology at Saarland University. He also holds a joint appointment with Facebook AI Research.