
Alex Takakuwa, General Examination

Title: Moving from passwords to authenticators

Advisor: Yoshi Kohno

Supervisory Committee: Yoshi Kohno (Chair), Edward Mack (GSR, Asian Languages & Literature), Alexei Czeskis (Google), and Franzi Roesner

Abstract: 

Humans have used passwords for access control since ancient times. With the advent of the internet, passwords naturally transitioned to the web and have since become the standard mode of web authentication. However, over the last 25 years, certain issues with password authentication have proven to be unavoidable security and usability problems. Many in the computer security industry believe that we can improve on the state of the art in both security and usability by using asymmetric challenge-response protocols for authentication. For example, the FIDO Alliance, a group of industry and academic partners working together to bring secure and usable authentication protocols to the web, utilizes such asymmetric cryptographic protocols to strengthen the authentication flow. However, despite industry and academic desire to improve web authentication, passwords remain the status quo for users. In this dissertation proposal, I present the landscape of authentication protocols and attempt to solve some of the remaining technical challenges that prevent modern authentication schemes from supplanting passwords as the dominant method of web authentication.
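The core idea behind asymmetric challenge-response authentication can be sketched in a few lines. The sketch below uses toy textbook RSA numbers purely for illustration; real FIDO authenticators use standardized curves, attestation formats, and hardware key storage, none of which appear here. The key names and flow are illustrative assumptions, not the FIDO protocol itself.

```python
import hashlib
import secrets

# Toy RSA key (textbook-sized numbers for illustration only).
P, Q = 61, 53
N = P * Q     # public modulus
E = 17        # public exponent
D = 2753      # private exponent (E * D = 1 mod phi(N))

def sign(message: bytes) -> int:
    """Authenticator side: signs with the private key, which never leaves the device."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, D, N)

def verify(message: bytes, signature: int) -> bool:
    """Server side: checks the signature using only the public key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

# Challenge-response flow: the server learns nothing reusable by an attacker.
challenge = secrets.token_bytes(16)   # server issues a fresh nonce
signature = sign(challenge)           # client (authenticator) responds
assert verify(challenge, signature)
```

Because each response is bound to a fresh nonce, a phished or replayed response is useless, which is one of the key security properties passwords lack.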

When: 1 Dec 2017 - 2:00pm Where: CSE 203

Connor Schenck, General Examination

Title: Deep Liquid: Combining Liquid Simulation and Perception Using Deep Neural Networks

Advisor: Dieter Fox

Supervisory Committee: Dieter Fox (Chair), Samuel Burden (GSR, EE), Maya Cakmak, and Sidd Srinivasa

Abstract: Liquids are an essential part of many common tasks. While humans seem to be able to master them to varying degrees from a young age, they remain a challenge for robots. Here I propose a system to enable robots to handle liquids: a Bayes filter that combines deep neural networks with liquid simulation. The deep networks provide the dynamics and observation models of the filter, while the simulator provides the training data, combining the generality of the simulator with the adaptability of the deep networks. The proposed method has the potential to make a significant contribution to the body of robotics research.
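The filter structure described above, with learned models plugged into the predict/correct cycle, can be sketched as follows. This is a minimal discrete Bayes filter, assuming the learned dynamics and observation models are passed in as callables; the specific state representation and model names are illustrative, not the proposed system's.

```python
import numpy as np

def bayes_filter_step(belief, action, observation, dynamics_model, observation_model):
    """One predict/correct step of a discrete Bayes filter.

    belief: probability vector over discrete states.
    dynamics_model(state, action) -> vector of transition probabilities
        (in the proposed system this role is played by a deep network).
    observation_model(observation, state) -> observation likelihood
        (likewise a deep network in the proposed system).
    """
    n = len(belief)
    # Prediction: propagate the belief through the (learned) dynamics model.
    predicted = np.zeros(n)
    for s in range(n):
        predicted += dynamics_model(s, action) * belief[s]
    # Correction: reweight by the (learned) observation likelihood.
    corrected = np.array([observation_model(observation, s) * predicted[s]
                          for s in range(n)])
    return corrected / corrected.sum()
```

The simulator's role is to generate (state, action, next state, observation) tuples to train the two callables, so the filter inherits the simulator's generality while the networks adapt to real sensor data.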

When: 4 Dec 2017 - 12:00pm Where: CSE 403

Ignacio Cano, General Examination

Title: Optimizing Systems using Machine Learning

Advisor: Arvind Krishnamurthy

Supervisory Committee: Arvind Krishnamurthy (Chair), Radha Poovendran (GSR, EE), Xi Wang, and Kevin Jamieson 

Abstract: Computer systems comprise many components that interact and cooperate with each other to perform certain tasks. Traditionally, many of these systems base their decisions on sets of rules or configurations defined by operators, as well as on handcrafted analytical models. However, creating those rules or engineering such models is a challenging task. First, the same system should be able to work under a combinatorial number of constraints on top of heterogeneous hardware. Second, it should support different types of workloads and run in potentially widely different settings. Third, it should be able to handle time-varying resource needs. These factors render reasoning about systems performance far from trivial.

In this thesis, we propose optimizing systems using machine learning techniques. By doing so, we aim to offload from system designers the burden of manually tuning rules and handcrafting complex analytical models, close the resulting performance gap, and promote a generation of smarter systems that learn from past experience and improve their performance over time. In this talk, we present two systems that illustrate the impact of these ML-based optimizations.

First, we introduce ADARES, an adaptive system that uses reinforcement learning to dynamically adjust virtual machine resources on the fly, namely virtual CPUs and memory, based on workload characteristics and other attributes of the virtualized environment. Then, we present CURATOR, a MapReduce-based framework for storage systems that safeguards storage health and performance by executing background maintenance tasks, which we also schedule using reinforcement learning.

Throughout this thesis, we present instantiations of different ML models and empirically show how our formulations improve systems performance and efficiency. We propose pre-initializing model-free learners with historical traces to accelerate training, thus reducing the sample complexity. Our models can cope with heterogeneity in workloads, settings, and resources, as well as adapt to non-stationary dynamics.
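The pre-initialization idea can be illustrated with tabular Q-learning: replay historical traces through the same update rule before any live interaction. This is a generic model-free sketch under assumed state and action names (the "low_mem"/"add_vcpu" examples are hypothetical), not the actual ADARES or CURATOR formulation.

```python
from collections import defaultdict

def q_learning(transitions, q=None, alpha=0.1, gamma=0.9):
    """Tabular Q-learning over a batch of (s, a, r, s_next) transitions.

    Passing historical traces first pre-initializes the Q-table before
    any online interaction, reducing the samples needed live.
    """
    if q is None:
        q = defaultdict(float)
    actions = {a for (_, a, _, _) in transitions}
    for s, a, r, s_next in transitions:
        # Bootstrapped target: immediate reward plus discounted best next value.
        best_next = max((q[(s_next, a2)] for a2 in actions), default=0.0)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

# Warm start from historical traces, then continue learning online.
historical = [("low_mem", "add_vcpu", -1.0, "low_mem"),
              ("low_mem", "add_mem", +1.0, "ok")]
q = q_learning(historical)           # offline pre-initialization
# q = q_learning(online_batch, q=q)  # later: fine-tune from live data
```

The same Q-table is threaded through both phases, so the online learner starts from values shaped by past experience rather than from zero.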

When: 4 Dec 2017 - 3:00pm Where: CSE 303

Kenton Lee, Final Examination

Title: Span-based Neural Structured Prediction

Advisor: Luke Zettlemoyer

Supervisory Committee: Luke Zettlemoyer (Chair), Mari Ostendorf (GSR, EE), Yejin Choi, and Noah Smith

Abstract: 

A long-standing goal in artificial intelligence is for machines to understand natural language. With ever-growing amounts of data in the world, it is crucial to automate many aspects of language understanding so that users can make sense of this data in the face of information overload. The main challenge stems from the fact that language, either in the form of speech or text, is inherently unstructured. Without programmatic access to the semantics of natural language, it is challenging to build general, robust systems that are usable in practice.

Towards achieving this goal, we propose a series of neural structured-prediction algorithms for natural language processing. In particular, we address a challenge common to all such algorithms: the space of possible output structures can be extremely large, and inference in this space can be intractable. Despite the seeming incompatibility of neural representations with dynamic programs from traditional structured prediction algorithms, we can leverage these rich representations to learn more accurate models while using simpler or lazier inference.

We focus on inference algorithms over the most basic substructure of language: spans of text. We present state-of-the-art models for tasks that require modeling the internal structure of spans, such as syntactic parsing, and modeling structure between spans, such as question-answering and coreference resolution. The proposed techniques are applicable to many structured prediction problems and we expect that they will further push the limits of neural structured prediction for natural language processing.
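Treating spans as the basic inference unit can be sketched as enumerating candidate spans and pruning to the highest-scoring ones, so that downstream structure (e.g. coreference links) only considers a small subset. The scoring function below is a stand-in for a learned neural scorer, and the keep-ratio heuristic is an illustrative assumption, not the thesis's actual model.

```python
def enumerate_spans(tokens, max_width=3):
    """All contiguous spans (i, j) up to max_width tokens wide."""
    return [(i, j) for i in range(len(tokens))
            for j in range(i, min(i + max_width, len(tokens)))]

def top_spans(tokens, score, keep_ratio=0.4):
    """Lazy inference: keep only the highest-scoring spans rather than
    feeding all O(n^2) candidates to downstream pairwise decisions."""
    spans = enumerate_spans(tokens)
    spans.sort(key=lambda s: score(tokens, s), reverse=True)
    return spans[: max(1, int(keep_ratio * len(tokens)))]
```

Because pruning happens before any pairwise scoring, inference cost grows with the number of kept spans rather than with all possible span pairs, which is what makes simpler, lazier inference feasible at scale.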

When: 6 Dec 2017 - 11:30am Where: CSE 403

Abram Friesen, Final Examination

Title: The Sum-Product Theorem and its Applications

Advisor: Pedro Domingos

Supervisory Committee: Pedro Domingos (Chair), Jeffrey Bilmes (GSR, EE), Carlos Guestrin, and Henry Kautz (University of Rochester)

Abstract:

Models in artificial intelligence (AI) and machine learning (ML) must be expressive enough to accurately capture the state of the world, but tractable enough that reasoning and inference within them is feasible. However, many standard models are incapable of capturing sufficiently complex phenomena when constrained to be tractable. In this work, I study the cause of this inexpressiveness and its relationship to inference complexity. I use the resulting insights to develop more efficient and expressive models and algorithms for many problems in AI and ML, including nonconvex optimization, computer vision, and deep learning. 


When: 12 Dec 2017 - 2:00pm Where: CSE 305

John C. Earls, General Examination

Title: TBA

Advisors: Larry Ruzzo and Nathan Price

Supervisory Committee: Larry Ruzzo (Co-Chair), Nathan Price (Co-Chair, BioE), Daniela Witten (GSR, STAT), and Ed Lazowska

Abstract: TBA

When: 26 Feb 2018 - 9:00am Where: CSE 303