Abstract: Language is one of the greatest puzzles of both human and artificial intelligence (AI). Children learn and understand their language effortlessly; yet, we do not fully understand how they do so. Moreover, although access to more data and computation has led to recent advances in AI systems, these systems are still far from human performance on many language tasks. In my research, I address two broad questions: how do humans learn, represent, and understand language? And how can this inform AI? In the first part of my talk, I show how computational modeling can help us understand the mechanisms underlying child word learning. I introduce an unsupervised model that learns word meanings using general cognitive mechanisms; the model processes data that approximates a child's input and assumes no built-in linguistic knowledge. Next, I explain how the cognitive science of language can help us examine current AI models and develop better ones. In particular, I focus on how investigating human semantic processing helps us model semantic representations more accurately. Finally, I explain how theory-of-mind experiments can be used to examine whether question-answering models can reason about beliefs.

Bio: Aida Nematzadeh is a research scientist at DeepMind. Previously, she was a postdoctoral researcher at UC Berkeley affiliated with the Computational Cognitive Science Lab and BAIR. She received a PhD and an MSc in Computer Science from the University of Toronto. Her research interests lie at the intersection of cognitive science, computational linguistics, and machine learning.

Speaker: Aida Nematzadeh
Time/Date: Thursday, December 13, 2018 - 10:30
Location: CSE 305