Artificial Intelligence Talks

To receive notifications about upcoming AI talks, join the uw-ai mailing list.

Wednesday, March 2, 2016 - 16:30

Speaker: Ashish Sabharwal (AI2)
Location: EEB 045
Artificial intelligence and machine learning communities have made tremendous strides in the last decade. Yet the best systems to date still struggle with routine tests of human intelligence, such as standardized science exams posed as-is in natural language, even at the elementary-school level. Can we demonstrate human-like intelligence by building systems that can pass such tests? Unlike typical factoid-style question answering (QA) tasks, these tests challenge a student’s ability to combine multiple facts in various ways, and appeal to broad common-sense and science knowledge. Going beyond arguably shallow information retrieval (IR) and statistical correlation techniques, we view science QA through the lens of combinatorial optimization over a semi-formal knowledge base derived from text. Our structured inference system, formulated as an Integer Linear Program (ILP), turns out to be not only highly complementary to IR methods, but also more robust to question perturbation, as well as substantially more scalable and accurate than prior attempts using probabilistic first-order logic and Markov Logic Networks (MLNs). This talk will discuss fundamental challenges behind the science QA task, the progress we have made, and many challenges that lie ahead.
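The structured-inference idea can be sketched in miniature. The toy below is purely illustrative (hypothetical facts and a made-up scoring rule, not the actual system, which solves an ILP over a much richer alignment structure): each answer option is scored by how well a text-derived fact connects it to the question terms.

```python
# Toy knowledge base of (subject, relation, object) facts derived from text.
# Everything here is illustrative; the real system aligns question and
# answer terms against semi-formal tables via an ILP solver.
FACTS = [
    ("photosynthesis", "produces", "oxygen"),
    ("photosynthesis", "requires", "sunlight"),
    ("respiration", "produces", "carbon dioxide"),
]

def score_answer(question_terms, answer, facts):
    """Score an answer by the best single fact linking question terms
    to the answer -- a crude stand-in for the ILP objective."""
    best = 0
    for s, r, o in facts:
        overlap = sum(t in (s, r) for t in question_terms)
        if answer == o:
            best = max(best, overlap + 1)
    return best

def answer_question(question_terms, options, facts=FACTS):
    # An ILP solver would choose the fact alignment maximizing the
    # objective under structural constraints; with one fact per chain
    # we can simply enumerate the options.
    return max(options, key=lambda a: score_answer(question_terms, a, facts))
```

For instance, `answer_question(["photosynthesis", "produces"], ["oxygen", "carbon dioxide"])` selects "oxygen", since a fact supports it more strongly than any fact supports the distractor.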

As a research scientist at the Allen Institute for AI (AI2), Ashish Sabharwal investigates scalable and robust methods for probabilistic and combinatorial inference, graphical models, and discrete optimization, especially as they apply to assessing machine intelligence through standardized exams in science and math. Prior to joining AI2, Ashish spent over three years at IBM Watson and five years at Cornell University, after obtaining his Ph.D. from the University of Washington in 2005. Ashish has co-authored over 70 publications, been part of winning teams in international reasoning competitions, and received five best paper awards and runner-up prizes at venues such as AAAI, IJCAI, and UAI.

Wednesday, February 24, 2016 - 16:30

Speaker: Daniel Sheldon (University of Massachusetts Amherst)
Location: EEB 045

Ecological processes such as bird migration are complex, difficult to measure, and occur at the scale of continents, making it impossible for humans to grasp their broad-scale patterns by direct observation. Yet we urgently need to improve scientific understanding and design conservation practices that help protect Earth's ecosystems from threats such as climate change and human development. Fortunately, novel data sources---such as large sensor networks and millions of bird observations reported by human "citizen scientists"---provide new opportunities to understand ecological phenomena at very large scales. The ability to fit models, test hypotheses, make predictions, and reason about human impacts on biological processes at this scale has the potential to revolutionize ecology and conservation.

In this talk, I will present work from two broad algorithmic frameworks designed to overcome challenges in model fitting and decision-making in large-scale ecological science. Collective graphical models permit very efficient reasoning about probabilistic models of large populations when only aggregate data is available; they have been applied to learn about bird migration from citizen-science data and about human mobility from data that is aggregated for privacy. Stochastic network design is a framework for designing robust networks and optimizing cascading behavior in networks; it applies to spatial conservation planning, optimizing dam removal in river networks, and increasing the resilience of road networks to natural disasters.
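To make the aggregate-data setting concrete, here is a minimal sketch with made-up counts (not real citizen-science data, and far simpler than collective graphical models): only per-site totals are observed at each timestep, never individual trajectories, yet a movement rate can still be inferred from the aggregates.

```python
# Hypothetical aggregate counts of a population of 1000 birds at two
# sites, A and B, at three timesteps; individual trajectories are
# never observed, only these per-site totals.
counts = [(1000, 0), (700, 300), (580, 420)]

def estimate_rate(counts):
    """Moment-matching estimate of the per-step A->B movement rate
    from aggregate counts alone, assuming no return movement.
    Collective graphical models make this style of inference exact
    and efficient for far richer population models."""
    rates = [(a0 - a1) / a0 for (a0, _), (a1, _) in zip(counts, counts[1:])]
    return sum(rates) / len(rates)
```

On the toy counts above the estimated movement rate is about 0.24 per step, recovered without ever seeing which individual went where.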


Daniel Sheldon is an Assistant Professor of Computer Science at the University of Massachusetts Amherst and Mount Holyoke College. He received his Ph.D. from the Department of Computer Science at Cornell University in 2009, and was an NSF Postdoctoral Fellow in Bioinformatics at the School of EECS at Oregon State University from 2010 to 2012. His research interests are in machine learning, probabilistic modeling, and optimization applied to large-scale problems in ecology, computational sustainability, and networks. His work was recognized by a Computational Sustainability Best Paper Award at AAAI 2016, and is supported by the NSF and MassDOT.

Wednesday, February 17, 2016 - 16:30

Speaker: Jeff Bilmes
Location: EEB 045

Machine learning is one of the most promising areas within computer science and AI that has the potential to address many of society’s challenges. It is important, however, to develop machine learning constructs that are simple to define, mathematically rich, naturally suited to real-world applications, and scalable to large problem instances. Convexity and graphical models are two such broad frameworks that are highly successful, but there are still many problem areas for which neither is suitable. This talk will discuss submodularity, a third such framework that is becoming more popular. Despite having been a key concept in economics, discrete mathematics, and optimization for over 100 years, submodularity is a relatively recent phenomenon in machine learning and AI. We are now seeing a surprisingly diverse set of real-world problems to which submodularity is applicable. In this talk, we will cover some of the more prominent examples, drawing often from the speaker's own work. This includes applications in dynamic graphical models, clustering, summarization, computer vision, natural language processing (NLP), and parallel computing. We will see how submodularity leads to efficient and scalable algorithms while simultaneously guaranteeing high-quality solutions; in addition, we will demonstrate how these concrete applications have advanced and contributed to the purely mathematical study of submodularity.
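The diminishing-returns flavor of submodularity is easiest to see with a coverage function, the textbook example. The sketch below (with hypothetical sensor sets) greedily maximizes coverage; for monotone submodular objectives, greedy selection is guaranteed to achieve at least a (1 - 1/e) fraction of the optimal value.

```python
def greedy_max_coverage(sets, k):
    """Pick k sets greedily by marginal gain. Coverage is monotone
    submodular (adding a set helps less the more is already covered),
    so this achieves at least (1 - 1/e) of the optimum."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical sensors, each covering a set of locations.
sensors = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
}
picked, covered = greedy_max_coverage(sensors, 2)
```

Here the greedy rule picks s1 and then s3 (s2's marginal gain shrinks once s1 covers location 3), covering all six locations with two sensors.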


Jeffrey A. Bilmes is a professor in the Department of Electrical Engineering at the University of Washington, Seattle, and an adjunct professor in the Department of Computer Science and Engineering and in the Department of Linguistics. He received his Ph.D. in computer science from the University of California, Berkeley. He is a 2001 NSF CAREER award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, a 2012/2013 ISCA Distinguished Lecturer, a 2013 best paper award winner from both Neural Information Processing Systems (NIPS) and the International Conference on Machine Learning (ICML), and a 2014 "best in 25 years" retrospective paper award winner from the International Conference on Supercomputing (ICS). His primary interests lie in machine learning, including dynamic graphical models, discrete and submodular optimization, speech recognition, natural language processing, bioinformatics, active and semi-supervised learning, computer vision, and audio/music processing. Prof. Bilmes is the principal designer (and implementer) of the Graphical Models Toolkit (GMTK), a widely used software system for general dynamic graphical models and time-series modeling. He has been working on submodularity in machine learning since 2003.

Wednesday, February 10, 2016 - 16:30

Speaker: Sumit Gulwani (MSR)
Location: EEB 045
99% of computer end users do not know programming and struggle with repetitive tasks. Programming by Examples (PBE) can revolutionize this landscape by enabling users to synthesize intended programs from example-based specifications.
A key technical challenge in PBE is to search for programs that are consistent with the examples provided by the user. Our efficient search methodology is based on two key ideas: (i) restriction of the search space to an appropriate domain-specific language that offers balanced expressivity and readability, and (ii) a divide-and-conquer-based deductive search paradigm that inductively reduces the problem of synthesizing a program of a certain kind satisfying a given specification to sub-problems that refer to sub-programs or sub-specifications.
Another challenge in PBE is to resolve the ambiguity in the example-based specification. We will discuss two complementary approaches: (a) machine learning based ranking techniques that can pick an intended program from among those that satisfy the specification, and (b) active-learning based user interaction models.
The above concepts will be illustrated using FlashFill, FlashExtract, and FlashRelate---PBE technologies for data manipulation domains. These technologies, which have been released inside various Microsoft products, are useful for data scientists, who spend 80% of their time wrangling data. The Microsoft PROSE SDK allows easy construction of such technologies.
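As a toy illustration of these ideas (a hypothetical two-parameter DSL, not FlashFill's actual language), the sketch below enumerates "split on delimiter d, take field k" programs, keeps those consistent with every example, and shows how one extra example resolves ambiguity.

```python
def synthesize(examples, delimiters=" ,;-"):
    """Enumerate 'split input on delimiter d, take field k' programs
    and keep those consistent with every (input, output) example --
    a brute-force stand-in for PBE's deductive search over a DSL."""
    programs = []
    for d in delimiters:
        for k in range(3):
            if all(len(inp.split(d)) > k and inp.split(d)[k] == out
                   for inp, out in examples):
                programs.append((d, k))
    return programs

ambiguous = synthesize([("x,x", "x")])               # two programs fit
resolved = synthesize([("x,x", "x"), ("a,b", "a")])  # one program left
```

With the single example, both field 0 and field 1 of a comma split are consistent, which is exactly where a ranking function or an active-learning query would come in; here the second example does the disambiguation.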