Machine learning is one of the most promising areas within computer science and AI, with the potential to address many of society's challenges. It is important, however, to develop machine learning constructs that are simple to define, mathematically rich, naturally suited to real-world applications, and scalable to large problem instances. Convexity and graphical models are two such broad frameworks that have been highly successful, but there remain many problem areas for which neither is suitable. This talk will discuss submodularity, a third such framework that is gaining popularity. Despite having been a key concept in economics, discrete mathematics, and optimization for over 100 years, submodularity is a relative newcomer to machine learning and AI. We are now seeing a surprisingly diverse set of real-world problems to which submodularity is applicable. In this talk, we will cover some of the more prominent examples, drawing often from the speaker's own work, including applications in dynamic graphical models, clustering, summarization, computer vision, natural language processing (NLP), and parallel computing. We will see how submodularity leads to efficient and scalable algorithms while simultaneously guaranteeing high-quality solutions; in addition, we will show how these concrete applications have in turn advanced the purely mathematical study of submodularity.
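
As a minimal illustrative sketch (not taken from the talk itself), the flavor of "efficient algorithms with quality guarantees" can be seen in the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which achieves at least a (1 - 1/e) fraction of the optimal value (Nemhauser, Wolsey, and Fisher, 1978). The coverage objective and the function names below (`coverage`, `greedy_max`) are hypothetical choices made purely for illustration.

```python
# Illustrative sketch: greedy maximization of a monotone submodular
# function (here, set coverage) subject to choosing at most k sets.
# Greedy enjoys a (1 - 1/e) approximation guarantee for this problem.

def coverage(selected, sets):
    """Submodular coverage objective: number of distinct elements covered."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Pick k sets, each time adding the one with the largest marginal gain."""
    selected = []
    for _ in range(k):
        best_gain, best_i = -1, None
        for i in range(len(sets)):
            if i in selected:
                continue
            gain = coverage(selected + [i], sets) - coverage(selected, sets)
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
    return selected

# Example: choose 2 of 4 candidate sets to cover as many items as possible.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max(sets, 2))  # [0, 2], covering {1, 2, 3, 4, 5, 6}
```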

Bio:

Jeffrey A. Bilmes is a professor in the Department of Electrical Engineering at the University of Washington, Seattle, and an adjunct professor in the Department of Computer Science and Engineering and in the Department of Linguistics. He received his Ph.D. in computer science from the University of California, Berkeley. He is a 2001 NSF CAREER award winner, a 2002 CRA Digital Government Fellow, a 2008 NAE Gilbreth Lectureship award recipient, a 2012/2013 ISCA Distinguished Lecturer, a 2013 best paper award winner at both Neural Information Processing Systems (NIPS) and the International Conference on Machine Learning (ICML), and a 2014 "best in 25 years" retrospective paper award winner from the International Conference on Supercomputing (ICS). His primary interests lie in machine learning, including dynamic graphical models, discrete and submodular optimization, speech recognition, natural language processing, bioinformatics, active and semi-supervised learning, computer vision, and audio/music processing. Prof. Bilmes is the principal designer (and implementer) of the Graphical Models Toolkit (GMTK), a widely used software system for general dynamic graphical models and time-series modeling. He has been working on submodularity in machine learning since 2003.

Speaker: 
Jeff Bilmes
Time/Date: 
Wednesday, February 17, 2016 - 16:30
Location: 
EEB 045