Robbie Weber, Qualifying Project Presentation

Title: A Simply Exponential Upper Bound on the Number of Stable Matchings

Advisors: Anna Karlin and Shayan Oveis Gharan

Abstract: For a stable matching instance with n men and n women, let f(n) denote the maximum number of stable matchings. It has been a long-standing open problem to understand the asymptotic behavior of f(n) as n goes to infinity. The problem was first posed by Knuth in the 1970s. Until now, the best lower bound was approximately 2.28^n, and the best upper bound was n!/(c')^n, for some constant c'. Here, we show that for any n, f(n) ≤ c^n for some universal constant c > 1. This matches the lower bound up to the base of the exponent. Our proof is based on a reduction to counting the number of downsets of a family of posets that we call mixing. The latter could be of independent interest.
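In display form, the gap the result closes can be restated as follows (c and c' are unspecified universal constants, exactly as in the abstract):

```latex
2.28^{n} \;\lesssim\; f(n) \;\le\; c^{n},
\qquad \text{improving on the previous best upper bound } f(n) \le \frac{n!}{(c')^{n}}.
```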

When: 22 Sep 2017 - 12:00pm Where: CSE 303

Aditya Sankar, Final Examination

Title: Interactive In-Situ Scene Capture on Mobile Devices

Advisor: Steve Seitz

Supervisory Committee: Steve Seitz (Chair), Alex Anderson (GSR, Architecture), Brian Curless, and Michael Cohen

Abstract: Architectural visualizations of indoor scenes enable compelling applications in several areas, such as real estate, interior design, cultural heritage preservation, and more recently, immersive Virtual Reality. Computer-Aided Design (CAD) tools have been invaluable for creating such visualizations. However, creating detailed, attractive visualizations of scenes remains a challenging task, particularly for non-experts. User interfaces for CAD tools tend to be complex and require significant manual effort to operate. These tools are also designed to be used ex-situ, or off-site, making it difficult to record and reproduce details faithfully.

In this thesis, I propose novel techniques and systems for interactive in-situ scene capture on mobile devices that let non-expert users quickly and easily capture useful architectural visualizations of indoor scenes. These systems are built upon two key insights: 1) sensors on mobile devices can be leveraged to capture important aspects of the scene, such as dimensions, room shape, and furniture placement, and 2) an in-situ user can assist in the modeling task by acting as a guide for reconstruction and object recognition algorithms. Based on these insights, the semi-automatic systems that I propose combine the strengths of the user, who is good at high-level semantic reasoning, and the computer, which excels at combinatorics and numerical optimization.

In this talk, I will present the design, implementation and evaluation of three systems for in-situ indoor scene capture on mobile devices.

When: 7 Sep 2017 - 11:30am Where: CSE 305

Irene Zhang, Final Examination

Title: Distributed Operating Systems for Mobile/Cloud Applications

Advisor: Hank Levy

Supervisory Committee: Hank Levy (Chair), Scott Hauck (GSR, EE), Arvind Krishnamurthy, and Luis Ceze

Abstract: The convergence of ubiquitous mobile devices, large-scale cloud platforms, and pervasive network connectivity has changed the face of modern user applications. Unlike a traditional desktop application of the past, which runs on a single machine and supports a single user, the typical user-facing application today spans numerous mobile devices and cloud servers while supporting large numbers of users. This shift has significantly increased the difficulty of building user applications due to the challenges of the mobile/cloud environment (e.g., partial failures, network partitions) and the new requirements of mobile/cloud applications (e.g., availability, scalability). At the same time, existing systems do little to help application programmers cope with these challenges and requirements.

This thesis proposes a new type of mobile/cloud operating system designed to meet the needs of modern applications. Mobile/cloud applications are the standard application of the future; thus, they deserve a first-class operating system to simplify their development and management. This thesis includes three systems that make up the world's first mobile/cloud operating system: (1) Sapphire, a new distributed runtime and process management system, (2) Diamond, a new distributed memory management system, and (3) TAPIR, a new distributed storage system. Together, these systems introduce several new operating systems abstractions and mechanisms designed to eliminate the challenges and simplify the requirements of mobile/cloud applications. This thesis demonstrates that, like operating systems of the past, these systems make it easier for application programmers to build larger and more complex applications.

When: 1 Sep 2017 - 10:00am Where: CSE 303

Hamid Izadinia, General Examination

Title: Semantic 3D Scene Understanding for Virtual and Augmented Reality

Advisor: Steve Seitz

Supervisory Committee: Steve Seitz (Chair), Daniela Rosner (GSR, HCDE), Brian Curless, Alexei Efros, and Sergey Levine

Abstract: Semantic 3D scene understanding is essential in applications that incorporate interaction with the 3D world, such as Virtual and Augmented Reality (VR/AR). In my talk, I will explore several prior approaches to semantic 3D scene understanding for VR/AR applications and propose 3D scene CAD model reconstruction from a single image. Given a single photo of a room and a large database of object CAD models, our goal is to reconstruct a scene that is as similar as possible to the scene depicted in the image, composed of objects drawn from a 3D shape database. My proposed work is a completely automatic system to address this IM2CAD problem. It produces high-quality results on challenging indoor scenes by iteratively optimizing the placement and scale of objects to best match scene renderings in simulation to the input image, using image comparison metrics trained via deep convolutional neural nets. We also show the applicability of our method on standard scene understanding benchmarks, where we obtain significant improvements.

When: 15 Aug 2017 - 2:30pm Where: CSE 303

Raymond Cheng, Final Examination

Title: Practical Improvements to User Privacy in Cloud Applications

Advisors: Tom Anderson and Arvind Krishnamurthy

Supervisory Committee: Tom Anderson (Co-Chair), Arvind Krishnamurthy (Co-Chair), Radhakrishnan Poovendran (GSR, EE), Yoshi Kohno, and Jon Howell (MSR)

Abstract: As the cloud handles more user data, users need better techniques to protect their privacy from adversaries looking to gain unauthorized access to sensitive data. Today’s cloud services offer weak assurances with respect to user privacy, as most data is processed unencrypted in a centralized location by systems with a large trusted computing base. While current architectures enable rapid application development, this speed comes at the cost of susceptibility to large-scale data breaches.

In this thesis, I argue that we can make significant improvements to user privacy from both external attackers and insider threats. In the first part of the thesis, I develop the Radiatus architecture for securing fully-featured cloud applications from external attacks. Radiatus secures private data stored by web applications by isolating server-side code execution into per-user sandboxes, limiting the scope of successful attacks. In the second part of the thesis, I focus on a simpler messaging application, Talek, securing it from both external and insider threats. Talek is a group private messaging system that hides both message contents as well as communication patterns from an adversary in partial control of the cloud.

Both of these systems are designed to provide better security and privacy guarantees for users under realistic threat models, while offering practical performance and development costs. This thesis presents an implementation and evaluation of both systems, showing that improved user privacy can come at acceptable costs.

When: 10 Aug 2017 - 2:00pm Where: CSE 305

Sam Sudar, Final Examination

Title: Leveraging the Mobile Web in Resource-Constrained Environments

Advisor: Richard Anderson

Supervisory Committee: Richard Anderson (Chair), Ryan Calo (GSR, School of Law), Franzi Roesner, and Matt Welsh (Google)

Abstract: Mobile phones and the internet are powerful technologies affecting many areas of society. Most advances in these areas disproportionately benefit the wealthy and well-connected. My work has explored ways that mobile technology and the web can be used in resource-constrained settings, and specifically in the developing world. This talk will describe a framework for simplifying mobile workflows and for sharing web content on local networks.

When: 10 Aug 2017 - 10:00am Where: CSE 303

Mark Yatskar, Final Examination

Title: Natural Language as a Scaffold for Visual Recognition

Advisor: Luke Zettlemoyer

Supervisory Committee: Luke Zettlemoyer (Chair), Mari Ostendorf (GSR, EE), Yejin Choi, and Ali Farhadi

Abstract: A goal of artificial intelligence is to create a system that can perceive and understand the visual world through images. Central to this goal is defining what exactly should be recognized, both in structure and coverage. Numerous competencies have been proposed, ranging from low-level tasks, such as edge detection, to high-level tasks, such as semantic segmentation. In each case, a specific set of visual targets is considered (e.g., particular objects or activities to be recognized), and it can be difficult to define a comprehensive set of everything that could be present in the images. In contrast to these efforts, we take a broader view of visual recognition. We propose to use natural language as a guide for what people can perceive about the world from images and what ultimately machines should emulate.

We show it is possible to use unrestricted words and large natural language processing ontologies to define relatively complete sets of targets for visual recognition. We explore several core questions centered around this theme: (a) what kind of language can be used, (b) what it means to label everything, and (c) whether structure in language can be used to define a recognition problem. We make progress in several directions, showing, for example, that highly ambiguous sentimental language can be used to formulate concrete targets and that linguistic feature norms can be used to densely annotate many complex aspects of images.

Finally, this thesis introduces situation recognition, a novel representation of events in images that relies on two natural language processing resources to achieve scale and expressivity. The formalism combines WordNet, an ontology of nouns, with FrameNet, an ontology of verbs and implicit argument types, and is supported by a newly collected large scale image resource imSitu. Situation recognition significantly improves over existing formulations for activities in images, allowing for higher coverage, increased richness of the representation, and more accurate models. We also identify new challenges with our proposal, such as rarely observed target outputs, and develop methods for addressing them.

When: 10 Jul 2017 - 11:00am Where: CSE 203

Travis Mandel, Final Examination

Title: Better Education Through Improved Reinforcement Learning

Advisors: Zoran Popović and Emma Brunskill (Stanford)

Supervisory Committee: Zoran Popović (co-Chair), Emma Brunskill (co-Chair, Stanford), Mari Ostendorf (GSR, EE), and Dan Weld

Abstract: When a new student comes to play an educational game, how can we determine what content to give them so that they learn as much as possible? When a frustrated customer calls in to a helpline, how can we determine what to say to best assist them? When an ill patient comes into the clinic, how do we determine what tests to run and treatments to give to maximize their quality of life?

These problems, though diverse, are all a seemingly natural choice for reinforcement learning, where an AI agent learns from experience how to make a sequence of decisions to maximize some reward signal. However, unlike many recent successes of reinforcement learning, in these settings the agent gains experience solely by interacting with humans (e.g., game players or patients). As a result, although the potential to directly impact human lives is much greater, intervening to collect new data is often expensive and potentially risky. Therefore, in this talk I present methods that allow us to evaluate candidate learning approaches offline using previously collected data instead of actually deploying them. Further, I present new learning algorithms which ensure that, when we do choose to deploy, the data we gather is maximally useful. Finally, I explore how reinforcement learning agents should best leverage human expertise to gradually extend the capabilities of the system, a topic which lies in the exciting area of Human-in-the-Loop AI.
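Offline evaluation of this kind is commonly illustrated with importance sampling over logged trajectories. The sketch below is a generic textbook estimator under an assumed data format and policy interface, not the specific methods developed in the thesis:

```python
# Illustrative sketch of off-policy evaluation via importance sampling.
# This is a generic textbook estimator, not the methods from the talk.

def importance_sampling_estimate(trajectories, target_policy, behavior_policy):
    """Estimate the target policy's expected return from logged data.

    trajectories: list of episodes, each a list of (state, action, reward).
    target_policy(state, action)   -> action probability under the policy
                                      we want to evaluate.
    behavior_policy(state, action) -> action probability under the policy
                                      that collected the data.
    """
    estimates = []
    for episode in trajectories:
        weight, ret = 1.0, 0.0
        for state, action, reward in episode:
            # Reweight by how much more (or less) likely the target policy
            # is to take the logged action than the logging policy was.
            weight *= target_policy(state, action) / behavior_policy(state, action)
            ret += reward
        estimates.append(weight * ret)
    return sum(estimates) / len(estimates)
```

The estimator reweights each logged episode by the likelihood ratio between the candidate policy and the data-collection policy, so candidate approaches can be compared without deploying them on real students or patients.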

Throughout the talk I will discuss how I have deployed real-world experiments and used data from thousands of kids to demonstrate the effectiveness of our techniques on several educational games.

When: 5 Jul 2017 - 12:00pm Where: CSE 403

Supasorn Suwajanakorn, Final Examination

Title: Audiovisual Persona Reconstruction

Advisors: Steven Seitz and Ira Kemelmacher

Supervisory Committee: Steven Seitz (co-Chair), Ira Kemelmacher (co-Chair), Duane Storti (GSR, ME), and Richard Szeliski (Facebook)

Abstract: How can Tom Hanks come across so differently in "Forrest Gump" and "Catch Me If You Can?" What makes him unique in each of his roles? Is it his appearance? The way he talks? The way he moves? In my thesis, I introduce the problem of persona reconstruction. I define it as a modeling process that accurately represents the likeness of a person, and propose solutions with the goal of creating a model that looks, talks, and acts like the recorded person. The specific aspects of persona modeled in this thesis include facial shape and appearance, motion and expression dynamics, the aging process, and speaking style, the last addressed by solving the visual speech synthesis problem.

These goals are highly ambitious. The key idea of this thesis is that leveraging a large amount of unconstrained data makes it possible to overcome many of the challenges. Unlike most traditional modeling techniques, which require a sophisticated capture process, my solutions operate only on unconstrained data, such as an uncalibrated personal photo and video collection, and thus can scale to virtually anyone, even historical figures, with minimal effort.

In particular, I first propose new techniques to reconstruct time-varying facial geometry equipped with expression-dependent texture that captures even minute shape variations such as wrinkles and creases, using a combination of uncalibrated photometric stereo, novel 3D optical flow, dense pixel-level face alignment, and frequency-based image blending. Then I demonstrate a way to drive or animate the reconstructed model with a source video of another actor by transferring the expression dynamics while preserving the likeness of the person. Together these techniques represent the first system that allows reconstruction of a controllable 3D model of any person from just a photo collection.

Next, I model facial changes due to aging by learning the aging transformation from unstructured Internet photos using a novel illumination-subspace matching technique. Then I apply such a transformation in an application that takes as input a photograph of a child and produces a series of age-progressed outputs between 1 and 80 years of age. The proposed technique establishes a new state of the art for the most difficult aging case, babies to adults. This is demonstrated by an extensive evaluation of age progression techniques in the literature.

Finally, I model how a person talks via a system that can synthesize a realistic video of a person speaking given just input audio. Unlike prior work, which requires a carefully constructed speech database from many individuals, my solution solves the video speech problem using only existing video footage of a single person. Specifically, it focuses on a single person (Barack Obama) and relies on an LSTM-based recurrent neural network trained on Obama's footage to synthesize a high-quality mouth video of him speaking. My approach generates compelling and believable videos from audio that enable a range of important applications, such as lip reading for hearing-impaired people, video bandwidth reduction, and creating digital humans, which are central to entertainment applications like special effects in films.

When: 20 Jun 2017 - 12:00pm Where: CSE 203

Hanchuan Li, General Examination

Title: Enabling Novel Sensing and Interaction with Everyday Objects using Commercial RFID Systems

Advisor: Shwetak Patel

Supervisory Committee: Shwetak Patel (Chair), Joan Sanders (GSR, Bioengineering), Josh Smith, and Alanson Sample (Disney Research)

Abstract: The Internet of Things (IoT) promises an interconnected network of smart devices that will revolutionize the way people interact with their surrounding environments. This distributed network of physical devices will open up tremendous opportunities for Human-Computer Interaction and Ubiquitous Computing, creating novel user-centered and context-aware sensing applications.

The advancement of IoT has been heavily focused on creating new smart electronic devices, while the vast majority of everyday non-smart objects have been left out. There currently exists a huge gap between the collection of devices integrated into the IoT and the remaining massive number of everyday objects that people interact with in their daily lives.

Radio-frequency Identification (RFID) has been widely adopted in the IoT industry as a standardized infrastructure. In this thesis proposal, I apply signal processing and machine learning techniques to low-level channel parameters of commercially available RFID tags to enable novel sensing and interaction with everyday objects. These new sensing capabilities allow our system to recognize fine-grained daily activities, create tangible passive input devices, and enhance user experiences in human-robot interactions.

When: 13 Jun 2017 - 12:30pm Where: CSE 305

Ada Lerner, Final Examination

Title: Measuring and Improving Security and Privacy on the Web: Case Studies with QR Codes, Third-Party Tracking, and Archives

Advisors: Yoshi Kohno and Franzi Roesner

Supervisory Committee: Yoshi Kohno (co-Chair), Franzi Roesner (co-Chair), David McDonald (GSR, HCDE), and Arvind Krishnamurthy

Abstract: The web is deeply integrated into the economic, governmental, and social fabrics of our society, and it promises to help fulfill people’s diverse and critical needs more easily, cheaply, quickly, and democratically. However, to fulfill these promises, we will need to study the security and privacy implications of the web and its surrounding technologies, and ensure that security and privacy are incorporated into their future evolution. I will present measurement and analysis work from my PhD examining the web and a set of the web’s sister technologies, forming insights and foundations for the ways that we can make the web more secure and privacy preserving. I identify and quantify current, past, and future properties of the web that are critical to its ability to provide security and privacy to its users. From these measurements and analyses, I synthesize lessons that aim to guide future designers and regulators of web technologies in ensuring that the web serves the needs of all who use it. Specifically, I will present longitudinal measurements of third-party web tracking using a novel measurement technique, as well as analyses of the security of web archives against malicious manipulation and of the global use of QR codes.

When: 9 Jun 2017 - 2:00pm Where: CSE 203

Eunsol Choi, General Examination

Title: Extracting, Inferring and Applying Knowledge about Entities

Advisors: Luke Zettlemoyer and Yejin Choi

Supervisory Committee: Luke Zettlemoyer (co-Chair), Yejin Choi (co-Chair), Emily M. Bender (GSR, Linguistics), and Dan Weld

Abstract: Real-world entities such as people, organizations, and countries play a critical role in text. Language contains rich explicit and implicit information about these entities, such as the categories they belong to, their relationships to other entities, and the events they participate in. To extract such knowledge, models must recognize entities in text and build representations of them. However, this is challenging: even within a single document, different expressions (e.g., Google, the search giant, the company, it) point to the same entity, and knowledge often can only be inferred from context without being explicitly stated.

In this work, we focus on entities to interpret text. Specifically, we introduce two approaches for extracting and inferring knowledge about entities. The first focuses on entity-entity sentiment relationships, i.e., who feels positively (or negatively) towards whom, a task that requires reasoning about the social dynamics of entities as well as targeted sentiment expressions. The second infers free-form phrases that describe appropriate types for a target entity. For example, consider the sentences "Bill robbed John. He was arrested." The noun phrases "John" and "he" have specific types, such as "criminal", that can be inferred from context. For this prediction task, we introduce two novel sources of distant supervision: head words of noun phrases found with an automated parser, and types heuristically extracted from the Wikipedia pages of linked named entities.

As a future direction, we propose to apply knowledge about entities to improve core NLP tasks such as coreference resolution and question answering. Lastly, we also propose to create summarizations focused on specific entities and examine how entity types affect the way media describes events they participate in.

When: 9 Jun 2017 - 1:00pm Where: CSE 303

Kenton Lee, General Examination

Title: End-to-end Neural Structured Prediction

Advisor: Luke Zettlemoyer

Supervisory Committee: Luke Zettlemoyer (Chair), Emily Bender (GSR, Linguistics), Yejin Choi, and Noah Smith

Abstract: Many core challenges in natural language understanding are formulated as structured prediction problems, and recently introduced neural modeling techniques have significantly improved performance in nearly every case.  However, these techniques can be challenging to develop, often requiring a trade-off between tractable inference and the use of neural representations for full output structures. We outline two approaches that make different trade-offs, as demonstrated by models for two different tasks. In CCG parsing, an A* parser improves accuracy by recursively modeling entire parse trees while maintaining optimality guarantees. In coreference resolution, where the model must scale to large documents, simpler inference is critical. Instead, an end-to-end span-level model that does not explicitly model clusters can produce state-of-the-art results.

When: 8 Jun 2017 - 3:30pm Where: CSE 503

Catherine Baker, Final Examination

Title: Understanding and Improving the Educational Experiences of Blind Programmers

Advisor: Richard Ladner

Supervisory Committee: Richard Ladner (Chair), Jennifer Turns (GSR, HCDE), Katharina Reinecke, Alan Borning, and Andy Ko (iSchool)

Abstract: Teaching people with disabilities tech skills empowers them to create solutions to problems they encounter. However, computer science is taught in a highly visual manner which can present barriers for people who are blind. The goal of this dissertation is to understand what those barriers are and to present the projects I have done to decrease those barriers.

The first projects I present look at the barriers that blind students face. I first present the results of my survey and interviews with blind graduates with degrees in computer science or related fields. This work highlights the many barriers that these blind students overcame when they were in college. We then follow up on one of the barriers mentioned, access to technology, by conducting a preliminary accessibility evaluation of six popular integrated development environments (IDEs) and code editors. I found that half were unusable and all had some inaccessible portions.

As access to visual information is one of the barriers in computer science education, I present three projects I have done to increase access to visual information in computer science. The first project is Tactile Graphics with a Voice (TGV), which investigated an alternative to Braille labels for those who don’t know Braille and showed that TGV is a viable solution. The next project was StructJumper, which creates a modified abstract syntax tree that blind programmers can use to navigate through code. The results of the evaluation show that users could navigate more quickly and easily determine the relationships between lines of code. The final project I present is a dynamic graph tool with two different modes for handling focus changes when moving between graphs. I found that users can use both modes to answer questions about changes in graphs, and they want access to both modes, plus others, so they can select the appropriate mode for the task.

These projects work towards the goal of making computer science education more accessible to blind students. By identifying the barriers that exist and creating solutions to overcome them, we can increase the number of blind students in computer science.

When: 8 Jun 2017 - 10:00am Where: CSE 305

Shu Liang, General Examination

Title: Virtualize me: Personalized 3D Head Reconstruction

Advisors: Linda Shapiro and Ira Kemelmacher

Supervisory Committee: Linda Shapiro (co-Chair), Ira Kemelmacher (co-Chair), John Kramlich (GSR, ME), and Brian Curless

Abstract: Personalized 3D face reconstruction has produced exciting results over the past few years. However, most methods focus solely on the face area and mask out the rest of the head. In this work, we propose a data-driven approach to obtaining a personalized reconstruction of a subject’s full head in the wild. We start with the visual hull shape obtained from a person’s Internet video and fill in the face details using the shading information extracted from the same person’s photo collections. The hairstyle is retrieved from a synthetic hairstyle dataset and deformed to fit the person’s rough hair shape. We will demonstrate the effectiveness of this algorithm on different celebrities using their photos and videos downloaded from the Internet.

When: 7 Jun 2017 - 2:00pm Where: CSE 203

James Ferguson, Qualifying Project Presentation

Title: Semi-Supervised Event Extraction with Paraphrase Clusters

Advisors: Hannaneh Hajishirzi and Dan Weld

Abstract: Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from various sources. If our system can make a high-confidence extraction of some mentions in a cluster of news articles, it can then leverage those mentions to identify additional occurrences of the event in the cluster. This allows us to collect a large amount of new, diverse training data for a specific event. Using this method, our experiments show an increase of 1.7 F1 points for a near-state-of-the-art baseline extractor on the ACE 2005 dataset.
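As a rough illustration of the cluster-based self-training loop described above, the sketch below harvests new training examples from clusters of related news articles. The `extractor` interface, data format, and confidence threshold are assumptions for illustration, not the actual ACE 2005 pipeline from the project:

```python
# Illustrative sketch of cluster-based self-training for event extraction.
# The extractor interface and threshold are hypothetical stand-ins.

def self_train(extractor, article_clusters, threshold=0.9):
    """Harvest new training examples from clusters of related articles.

    extractor.predict(sentence) is assumed to return (event_label, confidence).
    article_clusters: list of clusters; each cluster is a list of sentences
    drawn from articles covering the same event instance.
    """
    new_examples = []
    for cluster in article_clusters:
        # Find mentions the current extractor is already confident about.
        confident = [
            (sentence, label)
            for sentence in cluster
            for label, score in [extractor.predict(sentence)]
            if score >= threshold
        ]
        if confident:
            # A high-confidence extraction anchors the event; other mentions
            # in the same cluster can then be labeled with the same event type,
            # yielding new, diverse training data for retraining.
            anchor_label = confident[0][1]
            for sentence in cluster:
                new_examples.append((sentence, anchor_label))
    return new_examples
```

Retraining the extractor on `new_examples` plus the original supervised data is the self-training step; the cluster structure is what lets one confident mention license labels for the harder mentions of the same event.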

When: 6 Jun 2017 - 2:00pm Where: CSE 303

Kanit Wongsuphasawat, General Examination

Title: Mixed-Initiative Visualization Tools for Exploratory Data Analysis

Advisor: Jeff Heer

Supervisory Committee: Jeffrey Heer (Chair), Jevin West (GSR, iSchool), Bill Howe, and Jock Mackinlay (Tableau)

Abstract: Visual data analysis is an iterative process that involves both open-ended exploration and focused question answering. However, traditional visual analysis tools provide specification interfaces that are suitable only for question answering and are often tedious for exploration. This doctoral thesis investigates the design of visualization tools that augment traditional specification interfaces with automated chart specification to facilitate exploratory analysis. First, we present the design and controlled studies of Voyager, a system that couples faceted browsing with visualization recommendations to promote open-ended exploration. To facilitate both exploration and question answering, we present Voyager 2, a visual analysis tool that blends manual and automated specification in a unified system. To support specifications and recommendations in Voyager and Voyager 2, we also present the Vega-Lite visualization grammar and the CompassQL visualization query language, which defines an organized collection of charts. In current work, we plan to extend our interfaces and query engine to facilitate exploratory analysis of datasets with a larger number of fields.

When: 2 Jun 2017 - 1:30pm Where: CSE 503

Christopher Lin, Final Examination

Title: The Intelligent Management of Crowd-Powered Machine Learning 

Advisors: Dan Weld and Mausam

Supervisory Committee: Dan Weld (co-Chair), Mausam (co-Chair), Thomas Richardson (GSR, STAT), Eric Horvitz (MSR), and Carlos Guestrin

Abstract: Artificial intelligence and machine learning powers many technologies today, from spam filters to self-driving cars to medical decision assistants. While this revolution has hugely benefited from developments in core areas like deep learning, it also could not have occurred without data, which nowadays is frequently procured at massive scale from crowds. Because data is so crucial, a key next step towards truly autonomous agents is the design of better methods for intelligently managing the now-ubiquitous crowd-powered data-gathering process. 

This dissertation takes this key next step by developing algorithms for the online and dynamic control of data acquisition. We consider how to gather data for its two primary and independent purposes: training and evaluation.

In the first part of the dissertation, we develop algorithms for obtaining data for testing. The most important requirement of testing data is that it must be extremely clean. Thus, to deal with noisy human annotations, machine learning practitioners typically rely on careful workflow design and advanced statistical techniques for label aggregation. A common process involves designing and testing multiple crowdsourcing workflows for their tasks, identifying the single best-performing workflow, and then aggregating worker responses from redundant runs of that single workflow. We improve upon this process in two ways: we build control models that allow for switching between many workflows depending on how well a particular workflow is performing for a given example and worker; and we build a control model that can aggregate labels from tasks that do not have a finite predefined set of multiple-choice answers (e.g., counting tasks).
We then implement agents that use our new models to dynamically choose whether to acquire more labels from the crowd or stop, and show that they can produce higher quality labels at a cheaper cost than the state-of-the-art baselines.

In the second part of the dissertation, we shift to tackle the second purpose of data: training.
Because learning algorithms are often robust to noise, instead of optimizing for accuracy like test sets, training sets can make tradeoffs between quantity, accuracy, and diversity. We first investigate the tradeoff between quantity and accuracy, in which given a fixed budget, one can spend money on acquiring cleaner labels, or one can spend money acquiring more examples. We survey how inductive bias, worker accuracy, and budget affect whether a larger and noisier training set or a smaller and cleaner one will train better classifiers. We then set up a formal framework for dynamically choosing the next example to label or relabel by generalizing active learning to allow for relabeling, which we call re-active learning, and we design new algorithms for re-active learning that outperform active learning baselines. Finally, we consider the effect of domain skew on strategies for increasing label diversity in a training set. We introduce an algorithm that dynamically switches between crowd generation and crowd labeling and show that it achieves better performance than state-of-the-art baselines across several domains with varying skew.
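The quantity-versus-accuracy tradeoff discussed above can be illustrated with a simple budget model: under a fixed number of label purchases, taking more labels per example (aggregated by majority vote) yields cleaner labels but fewer examples. The uniform cost model and fixed worker accuracy below are simplifying assumptions for illustration, not the analysis from the dissertation:

```python
# Illustrative sketch of the quantity-vs-accuracy tradeoff under a fixed
# labeling budget. Unit label cost and a single worker accuracy p are
# simplifying assumptions, not the dissertation's model.
from math import comb

def majority_label_accuracy(p, k):
    """P(majority of k independent labels is correct), worker accuracy p, k odd."""
    return sum(
        comb(k, i) * p**i * (1 - p) ** (k - i)
        for i in range(k // 2 + 1, k + 1)
    )

def training_set_options(budget, worker_accuracy):
    """Enumerate (num_examples, labels_per_example, label_accuracy) choices."""
    return [
        (budget // k, k, majority_label_accuracy(worker_accuracy, k))
        for k in (1, 3, 5, 7)
        if budget // k > 0
    ]
```

With 70%-accurate workers, for instance, tripling the labels per example raises per-label accuracy from 0.70 to about 0.78 while cutting the training set to a third of its size; whether that trade pays off depends on the learner's noise robustness, which is exactly the question the dissertation studies.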

When: 2 Jun 2017 - 10:00am Where: CSE 303

Luheng He, General Examination

Title: Annotating, Learning, and Representing Shallow Semantic Structure

Advisor: Luke Zettlemoyer

Supervisory Committee: Luke Zettlemoyer (Chair), Gina-Anne Levow (GSR, Linguistics), Yejin Choi, and Noah Smith

Abstract: One key challenge to understanding human language is identifying word-to-word semantic relations, such as “who does what to whom”, “when”, and “where”. Semantic role labeling (SRL) is the widely studied challenge of recovering such predicate-argument structure, typically for verbs. SRL is designed to be consistent across syntactic annotations and, to some extent, language independent, which can potentially benefit downstream applications such as natural language inference, machine translation, and summarization. However, the performance of SRL systems is limited by the amount of training data, further impeding their use in downstream applications. My generals work is threefold: 1) collecting more SRL data using natural-language-driven QA annotation, 2) using end-to-end neural models to accurately predict SRL, and 3) proposing a unified framework for predicting and representing SRL structure to improve performance on downstream applications.

When: 2 Jun 2017 - 10:00am Where: CSE 678

Alex Polozov, Final Examination

Title: A Framework for Mass-Market Inductive Program Synthesis

Advisors: Zoran Popovic and Sumit Gulwani

Supervisory Committee: Zoran Popovic (co-Chair), Sumit Gulwani (co-Chair), Gary Hsieh (GSR, HCDE), Emina Torlak, and Ras Bodik

Abstract: Programming by examples (PBE), or inductive program synthesis, is a problem of finding a program in the underlying domain-specific language (DSL) that is consistent with the given input-output examples or constraints. In the last decade, it has gained a lot of prominence thanks to the mass-market deployments of several PBE-based technologies for data wrangling — the widespread problem of transforming raw datasets into a structured form, more amenable to analysis.

However, deployment of a mass-market application of program synthesis is challenging. First, an efficient implementation requires non-trivial engineering insight, often overlooked in a research prototype. This insight takes the form of domain-specific knowledge and heuristics, which complicate the implementation, extensibility, and maintenance of the underlying synthesis algorithm. Second, application development should be fast and agile, tailoring to versatile market requirements. Third, the underlying synthesis algorithm should be accessible to the engineers responsible for product maintenance.
In this work, I show how to generalize the ideas of 10+ previous specialized inductive synthesizers into a single framework, which facilitates automatic generation of a domain-specific synthesizer from the mere definition of the corresponding DSL and its properties. PROSE (PROgram Synthesis using Examples) is the first program synthesis framework that explicitly separates domain-agnostic search algorithms from domain-specific expert insight, making the resulting synthesizer both fast and accessible. The underlying synthesis algorithm pioneers the use of deductive reasoning for designer-defined domain-specific operators and languages, which enables mean synthesis times of 1-3 sec on real-life datasets.
Over the last two years, a dedicated team at Microsoft has built and deployed 10+ technologies on top of the PROSE framework. Using them as case studies, I examine the user interaction challenges that arise after a mass-market deployment of a PBE-powered application. In this talk, I'll show how expressing program synthesis as an interactive problem facilitates user intent disambiguation, incremental learning from additional examples, and increases the users' confidence in the system.

When: 2 Jun 2017 - 9:00am Where: CSE 503

Paul Vines, Final Examination

Title: Surveillance Self-Defense - from Solutions to New Problems

Advisors: Yoshi Kohno and Franzi Roesner

Supervisory Committee: Yoshi Kohno (co-Chair), Franzi Roesner (co-Chair), Emma Spiro (GSR, iSchool), and Zach Tatlock

Abstract: The world is increasingly connected; a side effect of this digital connectivity is that surveillance has become easier and more constant than ever. Individuals can be surveilled communicating with friends or colleagues, tracked as they visit websites, and advertised to based on an amazing variety of fine-grained data.

In this work, I describe a series of forays into modern surveillance research and user self-defense against it:

(1) A covert communication system using online games that allows individuals to hold a remote conversation without revealing its contents or even the existence of the conversation. Our system withstands state-of-the-art attacks on covert channels.

(2) An evaluation of how current privacy defenses on the web actually perform in preventing web tracking, including an analysis of a wide range of privacy defenses using multiple types of data.

(3) An exploration of a new kind of surveillance: intelligence gathering using online advertising - or ADINT. Online advertisers are in a privileged network position, able to collect, store, and analyze data on billions of people: we investigate how this can be used to perform fine-grained location tracking and more.

When: 31 May 2017 - 3:15pm Where: CSE 303

James Bornholt, General Examination

Title: Scaling the Symbolic Reasoning Stack

Advisors: Emina Torlak, Dan Grossman, and Luis Ceze

Supervisory Committee: Emina Torlak (co-Chair), Dan Grossman (co-Chair), Luis Ceze (co-Chair), Andy Ko (GSR, iSchool), Xi Wang, and Ras Bodik

Abstract: Symbolic reasoning tools, including those based on program synthesis, have successfully solved challenging tasks in a variety of application domains, from compilers to consistency models. But developing a symbolic reasoning tool that scales to large problems requires considerable expertise, from implementing search procedures from scratch to thoroughly understanding the underlying engine, which limits the reach of synthesis technology. My thesis is that new abstractions and interfaces can help programmers build scalable symbolic reasoning tools. I propose three contributions to scale the symbolic reasoning stack: metasketches to specify search strategies to a reusable synthesis engine, symbolic profiling to understand and optimize the behavior of a symbolic tool, and profile-guided symbolic execution to automatically optimize compilation to constraints. I have demonstrated the effectiveness of these contributions in several domains, including a new tool for synthesizing memory consistency models.

When: 31 May 2017 - 2:00pm Where: CSE 305

Eric Zeng, Qualifying Project Presentation

Title: End User Security and Privacy Concerns with Smart Homes

Advisors: Franzi Roesner and Yoshi Kohno

Abstract: The Internet of Things (IoT) is becoming increasingly widespread in home environments. Consumers are transforming their homes into smart homes, with internet-connected sensors, lights, appliances, and locks, controlled by voice or other user-defined automations. Security experts have identified numerous concerns with IoT and smart homes, including vulnerable and unreliable devices, and the privacy risks of internet-connected sensors in the home. These concerns are supported by recent high-profile attacks, such as the Mirai DDoS attacks.

However, little work has studied the security and privacy concerns of end users who live with smart home systems. To bridge this gap, we conduct semi-structured interviews with 15 people living in smart homes (12 smart home administrators and 3 other residents) to learn how they use their smart homes and to understand their security- and privacy-related attitudes, expectations, and behaviors. Among other findings, we identify gaps in threat models arising from limited technical understanding of smart homes, awareness of some security issues but limited concern, ad hoc mitigation strategies, and a mismatch between the concerns and power of the smart home administrator and those of other people in the home. From these and other findings, we distill recommendations for the designers of smart home devices and platforms, as well as for end users, and we identify opportunities for future research.

When: 31 May 2017 - 12:00pm Where: CSE 303

Amanda Swearngin, Qualifying Project Presentation

Title: Genie: Input Retargeting on the Web through Command Reverse Engineering

Advisors: Andy Ko and James Fogarty

Abstract: Most web applications are designed as one-size-fits-all, despite considerable variation in people’s physical abilities, expertise, and other factors that impact interaction. For example, some web applications require the use of a mouse, precluding use by many people with severe motor disabilities. Others may require laborious manual input that a skilled developer could automate if the application were scriptable.

In this work, we present Genie, a system that automatically reverse engineers an abstract model of the underlying commands in a web application, then enables interaction with that functionality through alternative interfaces and other input modalities (e.g., speech, keyboard, or command line input). Genie comprises an abstract model of command properties, behaviors, and dependencies as well as algorithms that reverse engineer this model from an existing web application through static and dynamic program analysis. We evaluate Genie by developing several interfaces that automatically add support for speech, keyboard, and command line input to arbitrary web applications.

When: 31 May 2017 - 9:00am Where: CSE 503

Ezgi Mercan, Final Examination

Title: Digital Pathology: Diagnostic Errors, Viewing Behavior and Image Characteristics

Advisor: Linda Shapiro

Supervisory Committee: Linda Shapiro (Chair), Mark Ganter (GSR, ME), Su-In Lee, and Joann Elmore (Public Health)

Abstract: Whole slide imaging technologies provide a unique opportunity to collect and analyze large amounts of data on pathologists' interactions with the digital slide. In our work, we study the underlying causes of diagnostic errors in histopathology. Instead of focusing on the detection of invasive cancer, we consider the full spectrum of diagnoses that a pathologist encounters during clinical practice and aim to study misidentification and misinterpretation errors that may cause overdiagnosis or underdiagnosis. To this end, we use the digiPATH dataset, which consists of 240 breast biopsies with diagnoses ranging from benign to invasive cancer, the actions of pathologists recorded during their interpretations of the slides, and the diagnostic regions associated with the final diagnoses they assigned. Our work consists of three parts: region of interest localization, diagnostic classification, and viewing behavior analysis.

In this talk, I will introduce a novel methodology to extract the diagnostically relevant regions of interest from pathologists' viewing behavior, and a computer vision model to detect these regions automatically in unseen images. Then, I will talk about our work on the diagnostic classification of these regions of interest, and describe our features based on tissue label segmentations of the images. Our features include a new kind of image feature defined for the first time in this work: a sequence of histograms that together comprise a structure feature that can differentiate atypia and DCIS diagnoses more accurately than the pathologists. Finally, I will focus on two pieces of work that analyze the interpretation patterns of the pathologists on the whole slide images: I will first talk about the relationship between the identification of the correct ROI and the diagnosis. Then, I will introduce novel measurements of interpretative patterns, identify two strategies used by the pathologists, scanning and drilling, and discuss how these strategies affect the diagnostic decision-making process.

When: 26 May 2017 - 10:00am Where: CSE 303

Alex Mariakakis, General Examination

Title: Using Mobile Devices to Quantify Traditionally Qualitative Health Measures

Advisors: Shwetak Patel and Jacob Wobbrock

Supervisory Committee: Shwetak Patel (co-Chair), Jacob Wobbrock (co-Chair), Katherine Steele (GSR, ME), James Fogarty, and Kurtis Heimerl

Abstract: The path to getting treatment for a medical condition typically begins with an observation made with one of the five human senses, whether by a physician or the patient themselves. The person may feel warm to the touch, hear wheezing in their cough, or see that their skin is pale. These observations are often qualitative and subjective, which can lead an untrained person to ignore their own symptoms and neglect treatment until their condition worsens.

Although medical devices exist for quantifying such observations or detecting their underlying causes, they are often not deployed to the general population due to cost and burden of use. For my thesis, I propose that the qualitative health measures convenient for some can be made quantitative for all, with little additional burden, by examining the ways that medical specialists use their human senses and replicating them through smartphone sensors. My work focuses on how the smartphone camera can be used in place of visual inspection to quantify diagnostic observations of the eye. These projects cover a variety of medical issues, including glaucoma, traumatic brain injuries, and pancreatic cancer. After I describe my projects in detail, I outline the work I expect to complete for my dissertation. Beyond iterating on my current projects, this plan includes interviews where I hope to gather design recommendations from different stakeholders in mobile health (e.g., doctors, physicians, patients, and non-patients) and understand their perspectives on how and when smartphones should be used for diagnosis.

When: 25 May 2017 - 1:00pm Where: CSE 303

Mandar Joshi, Qualifying Project Presentation

Title: TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

Advisors: Dan Weld and Luke Zettlemoyer

Abstract: We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on the SQuAD reading comprehension task. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that merits significant future study.

When: 23 May 2017 - 12:00pm Where: CSE 303

Philip Cho, Qualifying Project Presentation

Title: Speeding up gradient boosting for training and prediction

Advisors: Carlos Guestrin and Arvind Krishnamurthy

Abstract: Gradient boosting [3] is one of the most popular machine learning algorithms due to its simplicity and predictive performance. The algorithm produces an ensemble of decision trees, which has many desirable properties as a statistical model. XGBoost [1] is a fast, scalable implementation of gradient boosting that has found a wide following among hobbyists and practitioners. This paper presents our recent contributions to fast training and prediction. Using a series of optimizations such as feature quantization and histogram aggregation of gradients, we accelerate training by 2-4 times without loss of model accuracy. We add support for multiple tree-growing strategies, allowing for potentially unbalanced trees in pursuit of faster loss reduction. In addition, we add a dedicated framework for prediction, in which XGBoost emits generated code that can be used in isolation by user applications. Applications would no longer have to link against XGBoost in order to make predictions.
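The two training optimizations named above can be illustrated with a small sketch of histogram-based split finding: quantize a feature into bins once, aggregate gradient sums per bin, and score every candidate split from the histogram alone (a simplified sketch with first-order gradients only and no regularization; XGBoost's actual implementation also uses second-order statistics and many other refinements):

```python
import numpy as np

# Sketch: histogram-based split search for one feature, as used in fast
# gradient boosting. Feature values are quantized into bins at
# approximate quantiles; per-bin gradient sums then suffice to score
# every split point without touching the raw data again.

def best_split(feature, gradients, n_bins=16):
    # 1) Quantize the feature into bins at (approximate) quantiles.
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.searchsorted(edges, feature)
    # 2) Aggregate gradient sums and example counts per bin.
    grad_sum = np.bincount(bins, weights=gradients, minlength=n_bins)
    count = np.bincount(bins, minlength=n_bins)
    # 3) Scan split points; gain is the squared-error reduction proxy
    #    G_L^2/n_L + G_R^2/n_R - G^2/n.
    G, n = grad_sum.sum(), count.sum()
    best_gain, best_bin = 0.0, None
    gl, nl = 0.0, 0
    for b in range(n_bins - 1):
        gl += grad_sum[b]
        nl += count[b]
        gr, nr = G - gl, n - nl
        if nl == 0 or nr == 0:
            continue
        gain = gl * gl / nl + gr * gr / nr - G * G / n
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_bin, best_gain
```

Because the scan runs over bins rather than sorted raw values, its cost is independent of the number of examples once the histogram is built.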

When: 19 May 2017 - 9:00am Where: CSE 303

Zuoming Shi, Qualifying Project Presentation

Title: The Impact of Optional Game Mechanics and Email Notifications on Player Contributions and Return Rate in Mozak

Advisors: Zoran Popović and Katharina Reinecke

Abstract: We describe the effect of optional tutorial components and email notifications on the retention rate and useful contributions of players in the neuroscience crowdsourcing game Mozak. We performed between-subjects experiments with data collected from 7,000 online players and 1,400 registered players, and measured time played, amount of scientific contributions made, and return rate. We found that framing mandatory tutorials as optional game mechanics had positive effects on overall player contribution and return rate, but may decrease time played for new players. Email notifications were found to increase return rate, but not player contributions. Our results suggest that framing tutorials as optional game mechanics can be highly beneficial for online crowdsourcing games. In addition, our work indicates that single-session play time is not a reliable metric for evaluating player engagement, as it did not strongly correlate with other metrics previously attributed to player engagement.

When: 18 May 2017 - 3:00pm Where: CSE 403

Koosha Khalvati, Qualifying Project Presentation

Title: A Probabilistic Model of Social Decision Making based on Reward Maximization

Advisors: Rajesh Rao and Andrea Stocco (I-LABS)

Abstract: A fundamental problem in cognitive neuroscience is how humans make decisions, act, and behave in relation to other humans. Here we adopt the hypothesis that when we are in an interactive social setting, our brains perform Bayesian inference of the intentions and cooperativeness of others using probabilistic representations. We employ the framework of partially observable Markov decision processes (POMDPs) to model human decision making in a social context, focusing specifically on the volunteer’s dilemma in a version of the classic Public Goods Game. We show that the POMDP model explains both the behavior of subjects and the neural activity recorded using fMRI during the game. The decisions of subjects can be modeled across all trials using two interpretable parameters. Furthermore, the expected reward predicted by the model for each subject was correlated with the activation of brain areas related to reward expectation in social interactions. Our results suggest a probabilistic basis for human social decision making within the framework of expected reward maximization.

When: 18 May 2017 - 2:30pm Where: CSE 203

Younghoon Kim, Qualifying Project Presentation

Title: GraphScape: A Model for Automated Reasoning about Visualization Similarity and Sequencing

Advisors: Jeffrey Heer and Jessica Hullman

Abstract: We present GraphScape, a directed graph model of the visualization design space that supports automated reasoning about visualization similarity and sequencing. Graph nodes represent grammar-based chart specifications and edges represent edits that transform one chart to another. We weight edges with an estimated cost of the difficulty of interpreting a target visualization given a source visualization. We contribute (1) a method for deriving transition costs via a partial ordering of edit operations and the solution of a resulting linear program, and (2) a global weighting term that rewards consistency across transition subsequences. In a controlled experiment, subjects rated visualization sequences covering a taxonomy of common transition types. In all but one case, GraphScape’s highest-ranked suggestion aligns with subjects’ top-rated sequences. Finally, we demonstrate applications of GraphScape to automatically sequence visualization presentations, elaborate transition paths between visualizations, and recommend design alternatives (e.g., to improve scalability while minimizing design changes).
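Since GraphScape's nodes are chart specifications and its edge weights are transition costs, finding a good transition path reduces to shortest-path search. A minimal sketch (the chart states and edge costs below are invented for illustration; GraphScape derives its costs from a partial ordering of edit operations and a linear program, not hand-set numbers):

```python
import heapq

# Toy chart-transition graph: nodes are chart specs, edge weights are
# assumed interpretation costs (illustrative values only).
graph = {
    "bar": [("grouped_bar", 1.0), ("line", 2.5)],
    "grouped_bar": [("stacked_bar", 0.5), ("line", 2.0)],
    "stacked_bar": [("line", 1.5)],
    "line": [],
}

def cheapest_sequence(start, goal):
    """Dijkstra: lowest-total-cost sequence of chart transitions."""
    frontier = [(0.0, start, [start])]
    done = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in done:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []
```

With these toy costs, the direct bar-to-line transition (2.5) beats any path through intermediate chart types (3.0).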

When: 17 May 2017 - 3:00pm Where: CSE 305

Spencer Pearson, Qualifying Project Presentation

Title: Evaluating and Improving Fault Localization Techniques

Advisors: Michael Ernst and Zach Tatlock

Abstract: Fault localization (FL) is the process of identifying which snippets of code need to be changed in order to fix a bug. Automated FL techniques typically instrument the faulty program, run the test suite, and search for statements that are disproportionately important to the failing tests' executions. Many such techniques have been proposed, but evaluations almost invariably focus on artificial faults, created by deliberately inserting faults into correct programs; these are not representative of real faults, and nobody has investigated whether they are good proxies for real faults when evaluating FL techniques.

We evaluated several previously-studied FL techniques on both artificial and real faults, and found that while previous results mostly replicated on artificial faults, most techniques performed almost indistinguishably on real faults. To discover what would make a FL technique do better on real faults, we identified a common structure behind all the techniques we considered, exhaustively analyzed all combinations of the different techniques' parameters, and found the causes of the most common failure modes. From this information, we were able to construct several new techniques that localize faults more effectively than any of the previous FL techniques we studied.
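For readers unfamiliar with the common structure mentioned above, most spectrum-based techniques score each statement by its coverage in failing versus passing tests. The sketch below uses the Ochiai formula, one widely used instance of this family (illustrative only; the techniques evaluated in this work vary the formula and several other parameters):

```python
import math

# Sketch: spectrum-based fault localization. Each statement gets a
# suspiciousness score from how often it ran in failing vs. passing
# tests; statements are then ranked by that score.

def ochiai(failed_cover, passed_cover, total_failed):
    """failed_cover/passed_cover: # failing/passing tests covering stmt."""
    denom = math.sqrt(total_failed * (failed_cover + passed_cover))
    return failed_cover / denom if denom else 0.0

def rank_statements(coverage, outcomes):
    """coverage: {stmt: [test ids]}; outcomes: {test id: 'pass'|'fail'}."""
    total_failed = sum(1 for o in outcomes.values() if o == "fail")
    scores = {}
    for stmt, tests in coverage.items():
        f = sum(1 for t in tests if outcomes[t] == "fail")
        p = len(tests) - f
        scores[stmt] = ochiai(f, p, total_failed)
    # Most suspicious statements first.
    return sorted(scores, key=scores.get, reverse=True)
```

A statement covered only by failing tests scores 1.0; one covered only by passing tests scores 0, so the ranking surfaces code disproportionately tied to failures.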

When: 17 May 2017 - 12:30pm Where: CSE 305

Yue Zhang, Qualifying Project Presentation

Title: Analyzing and Predicting Alternative Splicing With Convolutional Neural Networks

Advisors: Georg Seelig and Larry Ruzzo

Abstract: Alternative splicing is a biological process by which a single mRNA transcript can be spliced into any of several forms, coding for proteins that can have different functions. This process is regulated by many factors, and its dysregulation has been implicated in a number of diseases.

We focus on two main problems: discovering sequence motifs that affect alternative splicing, and predicting the effects of mutations on alternative splicing in vivo. Using a synthetic dataset, we trained a variety of neural network models in order to predict the effects of the exonic DNA sequence on splicing. Then, these models were used to derive sequence motifs that contribute to alternative splicing regulation across different cell types. For the prediction step, we tested the predictive power of these models as applied to a dataset of genomic mutations, and compared them with previous models. We will also discuss the construction of a database for alternative splicing events in the genome.

When: 15 May 2017 - 1:00pm Where: CSE 403

Kaiyuan Zhang, Qualifying Project Presentation

Title: Whole-system security with machine-checked verification

Advisors: Xi Wang, Arvind Krishnamurthy, and Emina Torlak

Abstract: Chimera is a new framework for developing applications with formal, whole-system security guarantees. It employs a systematic approach to decomposing a security property and using a portfolio of theorem provers to discharge the resulting proof obligations. It allows programmers to specify and verify cryptographic security properties about their applications at the level of assembly code, without trusting the compiler or runtime; it also allows programmers to invoke automated provers when possible. To further reduce the proof burden, Chimera introduces a security kernel, which allows verified applications to safely reuse unverified components (e.g., off-the-shelf Linux) through sandboxing.

We have implemented Chimera for 64-bit ARM, and used it to build a set of five Chimera applications: a chat client, a notary app, an incrementer app, a password hasher, and a differential privacy service ported from the Ironclad project. Our experience shows that applications built using Chimera can provide both functional correctness and cryptographic security properties through formal verification, with a moderate proof effort.

When: 15 May 2017 - 11:00am Where: CSE 503

Katie Doroschak, Qualifying Project Presentation

Title: Effects of read start bias on discovering allele-specific expression in RNA-seq

Advisors: Larry Ruzzo and Sreeram Kannan (EE)

Abstract: Allele-specific expression (ASE) is a biological phenomenon in which one of the two copies of the genome at a particular location is expressed more than the other. Computational methods enable discovery of ASE in RNA-seq data. If an algorithm can identify regions where ASE occurs differently in two organisms of the same species, ASE could be used as a marker for a cross between them. Technical artefacts from the sequencing process must also be corrected for computationally. Read start bias is one such artefact that has not been evaluated with respect to ASE. In this work, I identify regions exhibiting ASE and evaluate the effects of read start bias on the ability to discover them. This is achieved with the AlleleSeq pipeline (Rozowsky et al., Molecular Systems Biology, 2011) with bias correction via SeqBias (Jones et al., Bioinformatics, 2012), applied to RNA-seq data from the marine organism Thalassiosira pseudonana. It is theorized that some strains of T. pseudonana have undergone a lapse in part of their reproductive cycle, and I evaluate the potential of using ASE as a marker to study this question.
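A standard statistical core for calling ASE at a site, and a reasonable mental model for what pipelines like AlleleSeq test, is a two-sided binomial test of allele read counts against the 50/50 split expected without ASE (a generic sketch; the actual pipeline adds multiple-testing correction, and SeqBias handles the read start bias studied here):

```python
from math import comb

# Sketch: two-sided exact binomial test of whether reads from one allele
# deviate from the 50/50 split expected in the absence of ASE.

def binomial_pvalue(ref_reads, alt_reads, p=0.5):
    n = ref_reads + alt_reads
    probs = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    observed = probs[ref_reads]
    # Two-sided: sum probabilities of outcomes no likelier than observed.
    return sum(q for q in probs if q <= observed + 1e-12)

def shows_ase(ref_reads, alt_reads, alpha=0.05):
    """Flag a site as allele-specifically expressed at level alpha."""
    return binomial_pvalue(ref_reads, alt_reads) < alpha
```

A perfectly balanced site (e.g., 10 reads per allele) yields p = 1.0, while a strongly skewed site such as 18 vs. 2 reads is flagged as ASE.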

When: 15 May 2017 - 10:30am Where: CSE 303

Max Forbes, Qualifying Project Presentation

Title: Verb Physics: Relative Physical Knowledge of Actions and Objects

Advisors: Yejin Choi and Noah Smith

Abstract: Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., ``My house is bigger than me.'' However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, ``Tyler entered his house'' implies that his house is bigger than Tyler.

In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance.

When: 12 May 2017 - 11:00am Where: CSE 503

Patrick Lancaster, Qualifying Project Presentation

Title: Improved Object Pose Estimation via Deep Pre-touch Sensing

Advisors: Joshua Smith and Dieter Fox

Abstract: For certain manipulation tasks, pose estimation from head-mounted cameras may not be sufficiently accurate due to the inability to perfectly calibrate the coordinate frames of today's high-degree-of-freedom robot arms that link the head to the end-effectors. We present a novel framework combining pre-touch sensing and deep learning to more accurately estimate pose in an efficient manner. The use of pre-touch sensing allows our method to localize the object directly with respect to the robot's end effector, thereby avoiding error caused by miscalibration of the arms. Instead of requiring the robot to scan the entire object with its pre-touch sensor, we utilize a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information necessary to adjust its original pose estimate. Our region detection network was trained using a new dataset containing objects of widely varying geometries, labeled in a scalable fashion that is free from human bias. This dataset is applicable to any task that involves a pre-touch sensor gathering geometric information, and has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. We find that after a sequence of scans, the object can typically be localized to within 0.5 cm of its true position. We also observe that the original pose estimate can often be significantly improved after collecting a single quick scan.

When: 12 May 2017 - 11:00am Where: CSE 303

Christopher Xie, Qualifying Project Presentation

Title: A Unified Framework for Cold Start Time Series Forecasting in the Presence of Missing Data

Advisors: Emily Fox and Sham Kakade

Abstract: Providing long-range forecasts is a fundamental challenge in time series modeling, which is only compounded by the challenge of having to form such forecasts when a time series has never previously been observed. The latter challenge is the time series version of the cold start problem seen in recommender systems which, to our knowledge, has not been directly addressed in previous work. Time series models are also often plagued by missing data and high dimensionality (i.e., large collections of observed series), making them ill-suited to the typical structure of big data time series. We provide a unified framework for producing long-range forecasts even when a series has missing values or was previously unobserved; the same framework can be used to impute missing values. Key to the formulation and resulting performance are (1) leveraging repeated patterns over fixed periods of time and across series, and (2) metadata associated with the individual series. We provide an analysis of our framework on a Wikipedia web traffic dataset.

When: 11 May 2017 - 12:30pm Where: CSE 403

Eddie Yan, Qualifying Project Presentation

Title: Customizing Progressive JPEG for Efficient Image Storage

Advisors: Luis Ceze and Xi Wang

Abstract: Modern image storage services, especially those associated with social media services, host massive collections of images. These images are often replicated at many different resolutions to support different devices and contexts, incurring substantial capacity overheads. We describe a prototype system design, ShoeBox, to investigate the effectiveness of replacing redundant multi-resolution image storage with progressively encoded JPEG images. After evaluating ShoeBox’s design and performance, we switch to an approach better suited to the capabilities of progressive JPEG which instead resizes images at request time. We describe methods of handling the inefficiencies associated with resizing images on the fly, and propose repurposing the progressive JPEG standard and customizing the organization of image data to reduce the bandwidth overheads of dynamic resizing. We show that at a PSNR of 32 dB, dynamic resizing with progressive JPEG provides 3× read data savings over baseline JPEG, and that progressive JPEG with customized encode parameters can further improve these savings to almost 6×. Additionally, we characterize the decode overheads of progressive JPEG to assess the feasibility of directly decoding progressive JPEG images on energy-limited devices. Our approach does not require modifications to current JPEG software stacks.

When: 10 May 2017 - 1:00pm Where: CSE 615 (Sampa Lab)

Lucy Simko, Qualifying Project Presentation

Title: Recognizing and Imitating Programmer Style: Adversaries in Program Authorship Attribution

Advisors: Yoshi Kohno and Luke Zettlemoyer

Abstract: Source code attribution classifiers have recently become powerful. We consider the possibility that an adversary could craft code with the intention of causing a misclassification, i.e., creating a forgery of another author's programming style in order to hide the forger's own identity or blame the other author. We find that it is possible for a non-expert adversary to defeat such a system. In order to inform the design of adversarially resistant source code attribution classifiers, we conduct two studies with C/C++ programmers to explore the potential tactics and capabilities both of such adversaries and, conversely, of human analysts doing source code authorship attribution. Through the quantitative and qualitative analysis of these studies, we (1) evaluate a state-of-the-art machine classifier against forgeries, (2) evaluate programmers as human analysts/forgery detectors, and (3) compile a set of modifications made to create forgeries. Based on our analyses, we then suggest features that future source code attribution systems might incorporate in order to be adversarially resistant.

When: 10 May 2017 - 12:00pm Where: CSE 503

Rahul Nadkarni, Qualifying Project Presentation

Title: Sparse plus Low-rank Graphical Models of Time Series for Functional Connectivity in MEG

Advisors: Emily Fox and Rajesh Rao

Abstract: A fundamental problem in neuroscience is inferring the functional connections between brain regions that underlie cognitive behaviors such as vision, speech, and audition. Magnetoencephalography (MEG) has become a popular neuroimaging technique for studying these functional connectivity networks. However, unlike typical approaches for learning these networks from data, we treat the MEG signals as time series rather than independent observations. We represent the functional connectivity network through conditional independence statements between signals on the cortical surface, encoded via a graphical model of time series. Importantly, we extend previous techniques for learning graphs of time series by incorporating a low-rank component, accounting for latent signals that would otherwise lead to inferring spurious connections. We evaluate the model on synthetic data as well as real MEG data collected from an auditory attention task.

When: 8 May 2017 - 11:30am Where: CSE 503

Emily Furst, Qualifying Project Presentation

Title: Profiling General Purpose GPU Applications

Advisors: Mark Oskin and Bill Howe

Abstract: General Purpose computing on Graphics Processing Units (GPGPU) has become an increasingly popular option for accelerating a wide variety of applications. However, GPUs are not well suited to all types of applications, as data transfer costs can often dominate execution time.

We develop a methodology for quantifying how well applications utilize GPU architectures using proprietary profiling tools. By aggregating various profiling metrics, we break down the different components of GPU occupancy across the runtime of each application.
We show results for a GPU database implementation, several DOE miniapps, and TensorFlow.
We show that for most of the applications we studied, especially the database and neural net implementations, only a small minority of execution time is spent on the GPU. We further show that even in cases with seemingly good performance, a large portion of the achieved occupancy can actually be attributed to stalls and scalar instructions.

When: 3 May 2017 - 1:00pm Where: CSE 615 (Sampa Lab)

Jingjing Wang, General Examination

Title: Runtime Optimizations for Large-Scale Data Analytics

Advisor: Magdalena Balazinska

Supervisory Committee: Magdalena Balazinska (Chair), Bingni Brunton (GSR, Biology), Dan Suciu, Alvin Cheung, and Arvind Krishnamurthy

Abstract: Large-scale data analytics benefits from optimizations of many kinds. One category of optimizations is hard to perform statically because it relies on information that is only available at runtime. We show that these runtime optimizations have a large impact on application performance. Specifically, we investigate the problem from the following aspects: execution models for iterative queries in shared-nothing architectures, elastic memory management for cloud data analytics, and real-time visual applications. We summarize our work on the first two subjects and propose our plan for the third project.

When: 1 May 2017 - 1:00pm Where: CSE 405

Danielle Bragg, General Examination

Title: Making Visual Language Accessible: A Crowdsourced Approach

Advisor: Richard Ladner

Supervisory Committee: Richard Ladner (Chair), Jaime Snyder (GSR, iSchool), Meredith June Morris, Alan Borning, and Katharina Reinecke

Abstract: This talk presents research to make visual language more accessible to sign language users and low-vision readers. Worldwide, there are about 70 million deaf people using a sign language as their first language, and almost 300 million people with visual impairments. However, much of society and the technical world communicates through written text. This excludes many people from full participation, as sign languages lack a standard written form, and low-vision readers struggle to access text visually.

In this talk, I will demonstrate how to design, build, and evaluate systems that include deaf and low-vision people in our text-based communication platforms by providing equal access to visual language. These solutions are powered by data-driven approaches, but it can be challenging to collect sufficient data from minority disabled populations. Our methods overcome data scarcity challenges by making use of other populations, such as student language learners and crowdsourcing workers. In this talk, I will present three projects that embody this approach: 1) a crowdsourced American Sign Language (ASL) dictionary, 2) smartfonts and livefonts, alternate scripts that improve legibility for low-vision English readers, and 3) the first animated ASL character system.

When: 1 May 2017 - 10:00am Where: CSE 303

Fereshteh Sadeghi, General Examination

Title: Sim2Real Collision Avoidance for Indoor Navigation of Mobile Robots via Deep Reinforcement Learning

Advisor: Sergey Levine

Supervisory Committee: Sergey Levine (Chair), Behcet Acikmese (GSR, Aeronautics and Astronautics), Steve Seitz, Emo Todorov, and Larry Zitnick (Facebook)

Abstract: Deep reinforcement learning has emerged as a promising technique for automatically acquiring control policies that can process raw sensory inputs and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. Here, we explore the following question: can we train vision-based navigation policies entirely in simulation, and then transfer them into the real world without a single real training instance? We propose a learning method that we call (CAD)2RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models. Our learned collision avoidance policy is represented by a deep convolutional neural network that directly processes raw monocular images and outputs velocity commands. This policy is trained entirely in simulation, with a Monte Carlo policy evaluation algorithm that directly optimizes the network’s ability to produce collision-free paths. By highly randomizing the rendering settings for our simulated training set, we show that we can train a policy that generalizes to the real world.

When: 25 Apr 2017 - 1:30pm Where: Sieg 322

Chenglong Wang, Qualifying Project Presentation

Title: Synthesizing Highly Expressive SQL Queries from Input-Output Examples

Advisors: Ras Bodik and Alvin Cheung

Abstract: SQL is the de facto language for manipulating relational data. Though powerful, SQL queries can be difficult to write due to their highly expressive constructs. Using the programming-by-example paradigm to help users write SQL queries presents an attractive proposition, as evidenced by online help forums such as Stack Overflow. However, developing techniques to synthesize SQL queries from input-output (I/O) examples has been difficult due to SQL's rich set of operators.

We present a new scalable and efficient algorithm to synthesize SQL queries from I/O examples. Our key innovation is the development of a language for abstract queries, i.e., queries with uninstantiated operators, that can express a large space of SQL queries efficiently. Using abstract queries to represent the search space nicely decomposes the synthesis problem into two tasks: (1) searching for abstract queries that can potentially satisfy the given I/O examples, and (2) instantiating the found abstract queries and ranking the results.
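The two-phase decomposition can be illustrated with a toy enumerator (our own simplification; the toy table, expected output, and the single SELECT-WHERE skeleton below are illustrative assumptions, and Scythe's abstract-query language and ranking are far richer):

```python
# Phase 1: an "abstract query" is a skeleton with holes, here
#   SELECT ?proj FROM t WHERE ?col ?op ?const
# Phase 2: instantiate the holes and keep instantiations whose result
# matches the given input-output example.
from itertools import product

table = [  # input example
    {"name": "a", "score": 3},
    {"name": "b", "score": 7},
    {"name": "c", "score": 7},
]
expected = [{"name": "b"}, {"name": "c"}]  # output example

ops = {"=": lambda x, y: x == y, ">": lambda x, y: x > y}

def instantiations(table, expected):
    cols = list(table[0])
    consts = {row[c] for row in table for c in cols}
    for proj, fcol, op, const in product(cols, cols, ops, consts):
        result = [{proj: r[proj]} for r in table
                  # type guard avoids comparing strings against ints
                  if type(r[fcol]) is type(const) and ops[op](r[fcol], const)]
        if result == expected:
            yield f"SELECT {proj} FROM t WHERE {fcol} {op} {const!r}"

print(sorted(instantiations(table, expected)))
```

Even this tiny search space yields several satisfying queries (e.g. filtering on `score > 3` or `score = 7`), which is why the ranking step in phase 2 matters.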
We implemented the algorithm in a new tool, called Scythe, and evaluated it on 193 benchmarks collected from Stack Overflow. Our results showed that Scythe efficiently solved 74% of the benchmarks, most in just a few seconds.

When: 25 Apr 2017 - 12:30pm Where: CSE 403

Kiana Ehsani, Qualifying Project Presentation

Title: SeGAN: Segmenting and Generating the Invisible

Advisors: Ali Farhadi and Roozbeh Mottaghi

Abstract: Objects often occlude each other in scenes; inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, and object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo-realistic images, SeGAN can reliably segment natural images; and (d) by reasoning about occluder-occludee relations, our method can infer depth layering.

When: 18 Apr 2017 - 1:30pm Where: CSE 403

Victoria Lin, Qualifying Project Presentation

Title: Program synthesis from natural language using recurrent neural networks

Advisors: Luke Zettlemoyer and Michael Ernst

Abstract: Even if a competent programmer knows what she wants to do and can describe it in English, it can still be difficult to write code to achieve the goal. Existing resources, such as question-and-answer websites, tabulate specific operations that someone has wanted to perform in the past, but they are not effective in generalizing to new tasks, to compound tasks that require combining previous questions, or sometimes even to variations of listed tasks.

Our goal is to make programming easier and more productive by letting programmers use their own words and concepts to express the intended operation, rather than forcing them to accommodate the machine by memorizing its grammar. We have built a system that lets a programmer describe a desired operation in natural language, then automatically translates it to a programming language for review and approval by the programmer. Our system, Tellina, does the translation using recurrent neural networks (RNNs), a state-of-the-art natural language processing technique that we augmented with slot (argument) filling and other enhancements.

We evaluated Tellina in the context of shell scripting. We trained Tellina's RNNs on textual descriptions of file system operations and bash one-liners scraped from the web. Although recovering completely correct commands is challenging, Tellina achieves top-3 accuracy of 80% for producing the correct command structure. In a controlled study, programmers who had access to Tellina outperformed those who did not to a statistically significant degree, even when Tellina's predictions were not completely correct.

When: 28 Mar 2017 - 12:30pm Where: CSE 691

Scott Lundberg, General Examination

Title: Actionable machine learning in genetics, personalized medicine, and health

Advisor: Su-In Lee

Supervisory Committee: Su-In Lee (Chair), Meliha Yetisgen (GSR, Biomedical Informatics), Ali Shojaie (Biostatistics, Public Health), and Walter Ruzzo

Abstract: Modern machine learning is changing how we approach biology, medicine, and health. However, a key challenge is to build models that produce actionable insights rather than opaque predictions. To support this, we propose four methods that learn interpretable, and hence actionable, machine learning models:

1) ChromNet learns the network structure of interactions among proteins in the human cell nucleus, and has provided effective visualization of these results to over 5,000 users so far. This allows us to understand which proteins collaborate to manage our DNA.

2) SHAP unifies several methods designed to provide additive explanations of any machine learning model. It uses an axiom-based approach to justify why a single explanation can be applied to models of any type.

3) Prescience learns an ensemble model for the prediction of hypoxia in the operating room and then uses a SHAP-based approach to explain the prediction. This demonstrates the use of a complex model while still allowing a doctor to understand what causes a patient's risk.

4) Study Flow will enable us to combine multiple models from different studies while retaining the relevant causal assumptions. By encoding the causal assumptions of each model, we can automate large parts of the systematic review process.

When: 17 Mar 2017 - 3:00pm Where: CSE 128

Lauren Milne, General Examination

Title: Making Education More Accessible for Blind Children Using Touchscreens

Advisor: Richard Ladner

Supervisory Committee: Richard Ladner (Chair), Katherine Steele (GSR, ME), Steve Tanimoto, James Fogarty, and Andy Ko (iSchool)

Abstract: Many digital educational tools for elementary school-aged children are highly visual in nature. While using graphics as opposed to purely text can make the applications more appealing and easier to use for children that are developing literacy skills, it also makes them largely inaccessible for blind children. For my dissertation, I propose using touchscreen devices such as smartphones and tablets to make two kinds of educational applications, early literacy and early programming, accessible to blind children. In my previous work, I explored using the spatial layout of a smartphone to create games to teach Braille characters using haptic and audio feedback. In my current and future work, I am studying how to make block-based programming languages, used to teach programming concepts to children, and their accompanying output accessible to blind children using touchscreens.  

When: 13 Mar 2017 - 2:00pm Where: CSE 303

Trevor Perrier, General Examination

Title: Connecting End Users to Domain Experts With Universal Mobile Phone Services

Advisor: Richard Anderson

Supervisory Committee: Richard Anderson (Chair), Sarah Odell Gimbel-Sherr (GSR, Global Health), Kurtis Heimerl, and Katharina Reinecke

Abstract: The potential for the communication services available on any mobile phone to engage users and facilitate behavior change has been well documented. Designing programs for individual empowerment and behavior change using SMS and voice services allows projects to reach marginalized segments of the global population. The majority of mobile messaging services for development are either one-way push messaging or use fully automated bidirectional end points. While these projects are easy to scale, reaching large numbers of people with limited human resources, they have difficulty addressing personalized needs. In addition, as the size of a project increases, it becomes important to plan and adapt to unexpected health concerns and outcomes. In this work we examine the implications, limitations, and feasibility of semi-automated bidirectional mobile messaging applications for global health. Our research aims include: 1) to better understand how end users perceive and understand specific personalized health information, 2) to design and build tools that allow domain experts to communicate with end users more efficiently and effectively, and 3) to explore the feasibility of integrating semi-automated messaging into the current standard of care. Specifically, we will iteratively design, build, and deploy a platform targeting maternal and neonatal health outcomes in peri-urban and rural Kenya. This work establishes a research agenda answering these questions with data from three trials utilizing the platform and the planned deployment of a fourth.

When: 9 Mar 2017 - 9:00am Where: CSE 503

Fahad Pervaiz, General Examination

Title: Understanding Data Cleaning Pipeline for Development Data

Advisor: Richard Anderson

Supervisory Committee: Richard Anderson (Chair), C Leigh Anderson (GSR, Evan's School), Kurtis Heimerl, and Abraham Flaxman (Global Health)

Abstract: The developing world is relying more and more on data-driven policies. Numerous development agencies have pushed for on-the-ground data collection to support the development work they are pursuing, and many governments have launched their own efforts for more frequent information gathering. Overall, the amount of data collected is tremendous, yet we face significant issues in doing useful analysis. Most of these barriers are around data cleaning and merging, and they require a data engineer to support some parts of the analysis. In this thesis, we aim to understand the pain points of cleaning development data and propose solutions that harness the thinking of a data engineer and reduce the manual workload of this tedious process. To achieve this goal, I am currently focused on two research directions: (1) understanding current data usage patterns and building a taxonomy of data cleaning in the developing world; and (2) building algorithms to support automated data cleaning, targeting selected problems in the space including matching transliterated names. The objective is to empower regular data users to easily do the necessary data cleaning and scrubbing for analysis.

When: 8 Mar 2017 - 12:00pm Where: CSE 305

Konstantin Weitz, Final Examination

Title: Formal Semantics and Scalable Verification for the Border Gateway Protocol using Proof Assistants and SMT Solvers

Advisors: Michael Ernst and Zach Tatlock

Supervisory Committee: Michael Ernst (co-Chair), Zach Tatlock (co-Chair), Silas Beane (GSR, Physics), and Arvind Krishnamurthy

Abstract: To reliably and securely route traffic across the Internet, Internet Service Providers (ISPs) must configure their Border Gateway Protocol (BGP) routers to implement policies restricting how routing information can be exchanged with other ISPs.  Correctly implementing these policies in low-level router configuration languages, with configuration code distributed across all of an ISP's routers, has proven challenging in practice, and misconfiguration has led to extended worldwide outages and traffic hijacks.

We present Bagpipe, the first system that enables ISPs to declaratively specify control-plane policies and that automatically verifies that router configurations implement such policies using an SMT solver.

We evaluated the expressiveness of Bagpipe's policy specification language on 10 configuration scenarios from the Juniper TechLibrary, and evaluated the efficiency of Bagpipe on three ASes with a total of over 240,000 lines of Cisco and Juniper BGP configuration. Bagpipe revealed 19 policy violations without issuing any false positives.

To ensure Bagpipe correctly checks configurations, we verified its implementation in Coq, which required developing both the first formal semantics for BGP, based on RFC 4271, and SpaceSearch, a new framework for verifying solver-aided tools.

We provide evidence for the correctness and usefulness of our BGP semantics by verifying Bagpipe, formalizing Gao and Rexford's pen-and-paper proof of the stability of BGP (which required extending the original proof), and performing random differential testing against C-BGP (a BGP simulator), revealing 2 bugs in C-BGP but none in our BGP semantics.

We provide evidence for the general usefulness of SpaceSearch by building and verifying two solver-aided tools. The first is Bagpipe; the second, SaltShaker, checks that RockSalt's x86 semantics for a given instruction agrees with STOKE's x86 semantics. SaltShaker identified 7 bugs in RockSalt and 1 bug in STOKE. After these systems were patched by their developers, SaltShaker verified the semantics' agreement on 15,255 instructions in under 2 hours.

When: 7 Mar 2017 - 12:00pm Where: CSE 503

John Toman, General Examination

Title: Learning to Adapt: Analyses for Configurable Software

Advisor: Dan Grossman

Supervisory Committee: Dan Grossman (Chair), Cecilia Aragon (GSR, HCDE), Zachary Tatlock, and Ras Bodik

Abstract: Configurations are found at all levels of the software stack, from kernel options to configuration files in MapReduce. However, this ubiquity is not without consequences; its negative effects impact developers, end-users, and tool designers. Users must find and set the correct configuration options to achieve the behavior they want: failing to set options correctly can lead to unexpected behavior or crashes. Developers must also contend with large configuration spaces and non-obvious interactions between features. Tool designers must contend with configurable applications, libraries, and frameworks, where information about runtime behavior is spread across multiple artifacts, including code annotations, configuration files, and more.

In this talk, I propose a dissertation that explores improving software quality by building analyses for configurable software. I motivate and propose two hypotheses that my dissertation will explore. I review related work in the area of my future dissertation work and briefly summarize my prior research in the area of configurable software.  Finally, I sketch my future dissertation work and work I plan to pursue after graduation.

When: 6 Mar 2017 - 2:30pm Where: CSE 403

Maxim Grechkin, General Examination

Title: Deep Curation: An Unsupervised Approach to Biological Data Curation

Advisor: Bill Howe

Supervisory Committee: Bill Howe (Chair), David Ribes (GSR, HCDE), Hoifung Poon (MSR), and Larry Ruzzo

Abstract: Public repositories such as Gene Expression Omnibus (GEO) have seen rapid accumulation of biological data. However, high-quality and consistent annotation is generally unavailable, which severely limits the use of multiple datasets for new discoveries, reproducibility, and other computational tasks. Previous attempts to automate the curation task require hand-labeled training data, which is not generally available and must be reacquired whenever the ontology that provides the class labels changes.

We propose a new method that learns tissue type labels for GEO datasets with no training labels. We learn two classifiers in tandem, one over the free-text description of a dataset and another over the raw microarray signal itself, and have the two classifiers train each other iteratively. We applied this method to GEO to produce an expression-based classifier that outperforms the state-of-the-art supervised-learning method in accuracy without any hand-labeled training data.

When: 3 Mar 2017 - 12:00pm Where: CSE 403

Greg Nelson, Qualifying Project Presentation

Title: Learning To Read Code First: Evaluating a Novel Pedagogy for Learning Programming with an Interactive Visual Tutoring System

Advisors: Andy Ko (iSchool) and Steve Tanimoto

Abstract: What knowledge does learning programming require? Prior work has focused on theorizing and teaching writing and problem-solving skills. We reconceptualize program reading as a separate skill from writing, and propose a novel formal theory of program tracing knowledge based on symbolic execution. Within program reading, we separate program literary comprehension from program behavior comprehension (which includes program tracing), and provide a new explanation for the “computer is like a person” misconception among novices. We use this theory to argue for initially separating the learning of reading and writing, as well as carefully delineating them throughout learning programming. Based on our formal theory of program tracing knowledge, we propose pedagogical principles for teaching program tracing within a program reading curriculum. To assess learning within this new pedagogy, we build a new tutorial system, PLTutor, with novel technical capabilities that enable 1) curricular flexibility, 2) assessments of reading knowledge at multiple levels of granularity, and 3) visualization of program--instruction--machine-state relationships at code-token and sub-expression granularity. We evaluate learning gains among self-selected CS1 students using a block-randomized controlled lab study comparing PLTutor with Codecademy (a writing tutorial). We interpret our results cautiously due to the small sample size and assessment validity concerns. We find some evidence of improved learning gains on the SCS1, with average learning gains 66% higher than with Codecademy (3.94 vs. 2.37 out of 27 questions).

When: 23 Feb 2017 - 11:15am Where: CSE 110

Kyle Thayer, Qualifying Project Presentation

Title: Barriers Faced by Coding Bootcamp Students

Advisors: Andy Ko (iSchool) and Katharina Reinecke

Abstract: Coding bootcamps are a new and understudied way of training new software developers. We interviewed twenty-five current or former coding bootcamp students to learn about the barriers coding bootcamp students face. We used the Communities of Practice framework to analyze barriers in both the students' relation to coding bootcamps and their relation to the software industry. We found that bootcamps offer an alternate path into the software industry, providing a second chance for those who missed programming earlier, particularly for women who had not considered programming a possibility or had been too intimidated to try. As in other computing education environments, bootcamp students faced issues around "fitting in" and divisions between those who "get it," and those who don't. Bootcamps came with a great personal cost to bootcamp students, often including significant time and resources used outside the bootcamp to get into the software industry, though they sometimes expressed uncertainty of what was needed to make this transition successfully.

When: 22 Feb 2017 - 10:30am Where: CSE 303

Ellis Michael, Qualifying Project Presentation

Title: Fault-Tolerant Distributed Protocols in Ordered Networks

Advisors: Dan Ports and Tom Anderson

Abstract: Protocols for fault-tolerant distributed systems typically assume a completely asynchronous network, in which messages can be dropped or reordered arbitrarily. Guaranteeing the safety of client-server based distributed storage systems in this context involves guaranteeing two distinct properties: order of state updates and their durability. This work shows that in single datacenters, separating these two concerns and guaranteeing ordering in the network using the capabilities of upcoming programmable switches can avoid most of the latency and throughput costs of replication and consistency. This work studies two related problems---state machine replication and distributed transaction processing---and presents two protocols---NOPaxos and Eris---which leverage different network ordering properties to deliver drastically improved normal case performance.

When: 21 Feb 2017 - 1:30pm Where: CSE 403

Maaz Ahmad, Qualifying Project Presentation

Title: Optimizing Large Scale Data Intensive Applications Using Verified Lifting

Advisors: Alvin Cheung and Ras Bodik

Abstract: We present a tool called Casper that enables sequential data-intensive programs to automatically leverage the optimizations provided by parallel data processing frameworks. Casper works by lifting sequential code fragments to a high-level domain-specific language that can be easily retargeted to parallel data processing frameworks based on the MapReduce programming model. We use a novel cost-based syntax-guided synthesis algorithm and program verification to find sound functional MapReduce representations of the input code. Our Casper prototype currently targets sequential code fragments written in Java and retargets them to Apache Spark. Our evaluation shows that Casper can translate 46 benchmarks from 7 different suites, with the generated code performing on average 13x faster compared to the original sequential implementation.
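The lifting idea can be sketched in miniature (our own simplification: Casper targets Java and uses program verification to prove candidates sound, whereas this toy validates a hand-enumerated candidate pool by random testing):

```python
# Given a sequential fragment, search a small pool of MapReduce-style
# summaries (a map function plus a reduce operator) for one that agrees
# with the original on many random inputs.
from functools import reduce
import random

def sequential(xs):  # the original sequential fragment: sum of squares
    total = 0
    for x in xs:
        total += x * x
    return total

# Hand-enumerated candidate (map, reduce) pairs; a real synthesizer would
# generate these from a grammar of the DSL.
candidates = [
    (lambda x: x,     lambda a, b: a + b),   # sum of elements: wrong
    (lambda x: x * x, lambda a, b: a + b),   # sum of squares: correct
    (lambda x: x * x, lambda a, b: a * b),   # product of squares: wrong
]

def find_summary(candidates, trials=100):
    for m, r in candidates:
        if all(reduce(r, map(m, xs), 0) == sequential(xs)
               for xs in (random.sample(range(-50, 50), 8)
                          for _ in range(trials))):
            return m, r
    return None

m, r = find_summary(candidates)
print(reduce(r, map(m, [1, 2, 3]), 0))  # sum of squares of [1, 2, 3] -> 14
```

Once the correct `(map, reduce)` pair is found, it retargets directly to a framework like Spark, since the reduce operator is associative and commutative.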

When: 17 Feb 2017 - 3:30pm Where: CSE 403

Aaron Walsman, Qualifying Project Presentation

Title: Tracking Articulated Objects with a High-Resolution Deformable Surface Model

Advisors: Dieter Fox and Brian Curless

Abstract: The last several years have seen significant progress in using depth cameras for tracking articulated objects such as human bodies, hands, or robotic manipulators.  Most approaches focus on tracking skeletal parameters of a fixed shape model, which makes them insufficient for robotics applications that require accurate estimates of object surfaces.  To overcome this limitation, we present a 3D model-based tracking system for articulated deformable objects.  Our system is able to track human body pose and high resolution surface contours in real time using a commodity depth sensor and GPU hardware.  We implement this as a joint optimization over a skeleton to account for changes in pose, and over the vertices of a high resolution mesh to track the subject's shape. With this model we are able to capture dynamic sub-centimeter surface detail such as face deformations and folds and wrinkles in clothing.  The end result is highly accurate spatiotemporal and semantic information which is well suited for physical human robot interaction.

When: 16 Feb 2017 - 10:00am Where: CSE 303

Sam Castle, Qualifying Project Presentation

Title: Addressing the Challenges of Fraud in Financial Services for Emerging Digital Economies

Advisors: Richard Anderson and Franziska Roesner

Abstract: According to the World Bank, more than 2 billion people worldwide have no access to any financial services. In the developing world, where traditional banking infrastructure is limited, 8 out of 10 people now own a mobile phone, which can be used to save, send, and borrow money. Where there is money, there must be security, yet prior work on mobile money identified discouraging vulnerabilities in the current ecosystem. We assess these security concerns from two directions: (1) the risk introduced by existing vulnerabilities and (2) the actual exploits which occur in practice. We begin by defining a systematic threat model for this domain and applying it throughout our work. To understand existing vulnerabilities, we conducted a large-scale analysis of 197 current Android deployments and examined 71 products in more detail. After detecting substantial potential vulnerabilities, we interviewed 7 software developers in Africa and South America to investigate the root causes of these issues. We found that competent developers consistently lack relevant security tools, and we propose a way forward for designing such tools. In parallel research we attempted to measure the prevalence of exploits occurring in practice. Based on anecdotal evidence from users, mobile operators, and news sources, social engineering is the most common attack on mobile money users. We collected a new corpus of data on phishing over SMS, and we present an initial evaluation of this phenomenon with proposals to mitigate the risk of fraud.

When: 14 Feb 2017 - 12:00pm Where: CSE 203

Sarah Elliott, Qualifying Project Presentation

Title: Using Learned Transition Models to Rearrange Dirt on a Surface

Advisors: Maya Cakmak and Dieter Fox

Abstract: We address the problem of enabling a robot manipulator to move arbitrary amounts and configurations of dirt on a surface to a goal region using a cleaning tool. We represent this problem as heuristic search with a set of primitive dirt-oriented tool actions. We present dirt and action representations that allow efficient learning and prediction of future dirt states, given the current dirt state and applied action. We also present a method for sampling promising tool actions based on a clustering of dirt states and heuristics for planning. We demonstrate the effectiveness of our approach on challenging cleaning tasks through an implementation on the PR2 robot.

When: 14 Feb 2017 - 11:00am Where: CSE 503

Liang Luo, Qualifying Project Presentation

Title: INA: Accelerating DNN Training via In-Network Aggregation

Advisors: Luis Ceze and Arvind Krishnamurthy

Abstract: The recent growth of DNN development has outpaced growth in network bandwidth and raw GPU power. As a result, the experiment time needed to find the right neural network model has exploded from a few days on a single machine to several weeks or even months on a cluster with thousands of machines. DNN frameworks such as MXNet enable training DNNs at cluster or datacenter scale to bring down training time. However, even with aggressive built-in optimizations, we found deficiencies when scaling to more machines and deeper neural networks that stop system throughput from scaling linearly, due to frequent parameter updates in the physical network and limited physical network and parameter-server capacity.

This talk presents a detailed characterization of DNN training today, pinpoints the current bottleneck in large-scale DNN training, and proposes principles and designs for In-Network Aggregation (INA), which accelerates datacenter-scale DNN workloads by performing parameter aggregation in parallel, hierarchically, inside the network itself. INA removes the bottleneck in parameter synchronization by reducing both the network bandwidth and the latency required for DNN training, using techniques such as in-place parallel stream aggregation and merging. We also project the effectiveness of INA using an analytical model, which shows a sizable improvement in training time for ResNet-269 compared with the current state of the art. Our results suggest that, with INA, existing datacenter infrastructure can feasibly run much larger DNNs than it can handle today.
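The core bandwidth argument can be illustrated with a toy sketch (my own illustration, not the INA design): if each rack aggregates its workers' gradients before forwarding, the number of updates crossing the core network drops from one per worker to one per rack, while the aggregated result is unchanged.

```python
# Illustrative comparison of flat vs. hierarchical gradient aggregation.
# Gradients are plain lists; message counts model core-network crossings.

def aggregate_flat(grads_by_rack):
    """Every worker sends its gradient across the core to a parameter server."""
    core_msgs = sum(len(rack) for rack in grads_by_rack)
    all_grads = [g for rack in grads_by_rack for g in rack]
    total = [sum(col) for col in zip(*all_grads)]
    return total, core_msgs

def aggregate_hierarchical(grads_by_rack):
    """Each rack switch sums its workers' gradients; only the partial
    sums traverse the core network."""
    partials = [[sum(col) for col in zip(*rack)] for rack in grads_by_rack]
    core_msgs = len(partials)
    total = [sum(col) for col in zip(*partials)]
    return total, core_msgs

# Two racks: the first has two workers, the second has one.
racks = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0]]]
flat_sum, flat_msgs = aggregate_flat(racks)
hier_sum, hier_msgs = aggregate_hierarchical(racks)
```

Both strategies produce the same aggregate, but the hierarchical one sends one message per rack instead of one per worker across the core, which is the bandwidth saving the talk exploits at much larger scale.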

When: 13 Feb 2017 - 11:00am Where: CSE 203

Parmita Mehta, Qualifying Project Presentation

Title: Comparative Evaluation of Big-Data Systems on Scientific Image Analytics Workloads

Advisors: Magdalena Balazinska, Alvin Cheung

Abstract: Scientific discoveries are increasingly driven by analyzing large volumes of image data. Many new libraries and specialized database management systems (DBMSs) have emerged to support such tasks. It is unclear, however, how well these systems support real-world image analysis use cases, and how performant the image analytics tasks implemented on top of them are. In this project, we present the first comprehensive evaluation of large-scale image analysis systems using two real-world scientific image data processing use cases. We evaluate five representative systems (SciDB, Myria, Spark, Dask, and TensorFlow) and find that each of them has shortcomings that complicate implementation or hurt performance. Such shortcomings lead to new research opportunities in making large-scale image analysis both efficient and easy to use.

When: 13 Jan 2017 - 2:30pm Where: CSE 405 (DB Lab)

Eric Butler, General Examination

Title: Automatic Generation of Procedural Knowledge using Program Synthesis

Advisors: Zoran Popović (Co-Chair), Emina Torlak (Co-Chair), Andy Ko (GSR, iSchool), and Steve Tanimoto

Abstract: Educational technology that models the problem-solving process of a learning domain powers many promising and proven-effective applications, such as generating problems, providing student feedback, or estimating student understanding. Creating these formal domain models is an integral part of such applications, and it is a challenging and time-consuming process requiring expertise in both the learning domain and artificial intelligence. To aid in this process, we can turn to computers to help learn these domains automatically. Unlike typical machine learning tasks, the primary goal of this endeavor is not to solve domain problems per se, but to obtain an interpretable description of how to solve them. To support this goal, we turn to program synthesis: we model procedural knowledge as programs in a domain-specific language (DSL) and use program synthesis to generate these programs automatically.

In this talk, I present the thesis that program synthesis can help automate the modeling of procedural knowledge for learning domains. The goals of this work are to (1) understand common elements and approaches for building synthesis-aided systems for modeling various learning domains and (2) create a framework to aid in the implementation of these systems for new learning domains. I will discuss progress on building effective program synthesis systems to learn procedural knowledge in the domains of K-12 algebra and logic puzzles. These systems center around three main components: automatically extracting specifications for potential rules from a model of the problem domain, using inductive program synthesis to automatically generate potentially useful problem-solving rules, and using optimization to select a subset of these rules as a domain model based on provided objective criteria. To aid in the iterative design of DSLs for these systems, I will propose a framework that provides analytic feedback to a DSL designer.

When: 8 Dec 2016 - 3:00pm Where: CSE 503

Daniel Gordon, Qualifying Project Presentation

Title: Re3: Real-Time Recurrent Regression Networks for Object Tracking

Advisors: Dieter Fox and Ali Farhadi

Abstract: Robust object tracking requires knowledge and understanding of the object being tracked: its appearance, its motion, and how it changes over time. A tracker must also be able to modify its underlying model and adapt to new observations. We present Re3, the first real-time deep object tracker capable of incorporating long-term temporal information into its model. In line with other recent deep learning techniques, we do not train an online tracker. Instead, we use a recurrent neural network to represent the appearance and motion of the object. We train the network offline to learn how object appearance and motion change, letting it track with a single forward pass at test time. This lightweight model is capable of tracking objects at 150 FPS, while still attaining competitive results on challenging benchmarks. We also present experiments that directly target how occlusions affect tracker accuracy, and we show that our method handles occlusions better than other comparable trackers.

When: 6 Dec 2016 - 3:30pm Where: CSE 303

Dylan Hutchison, Qualifying Project Presentation

Title: In-Database Analytics for NoSQL Key-Value Stores

Advisors: Bill Howe, Dan Suciu

Abstract: Key-value databases emphasize record-level read-write operations, relying on external infrastructure such as Spark or MapReduce to implement complex analytics (e.g., matrix math, relational joins, machine learning). The "external system" approach is expensive for small yet complex analytics, requiring long code paths to extract data from the database and prepare native data structures. In response, developers implement custom applications that push simple filters and sums into the database's scans in order to maximize in-database processing. We generalize this approach to provide native, in-database support for complex analytics and evaluate its performance relative to external systems.
To begin, we implement the Graphulo API for sparse matrix processing inside the scan-time iterators of the Apache Accumulo NoSQL database. We compare the performance of running queries in Graphulo vs. off-the-shelf MapReduce and two in-memory linear algebra libraries on several matrix algorithms. We find that memory requirements and I/O differences between in-database and external system approaches are key factors that determine when one outperforms the others.
Next, we implement relational algebra in Accumulo with the same approach. More so than sparse matrices, relations have many possible mappings onto key-values; it is important to design a good mapping, since the key-value "physical schema" controls how data are grouped and partitioned, and therefore the performance of ingest and queries. We catalog a number of key-value design patterns used in practice and generalize them into a grammar of access paths that insulates applications from the details of data layout. We prototype a query compiler that generates iterator code based on access paths and show how the compiler was used in a cybersecurity application.
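To make the "physical schema" point concrete, here is a toy illustration (the names and layouts are mine, not Graphulo's): the same relation stored under two key layouts over a sorted key space, where the chosen layout determines which queries become cheap contiguous range scans.

```python
# A toy relation stored two ways in a sorted key-value space.
# The store's only read primitive is a prefix (range) scan.

rows = [{"id": "r1", "user": "ann", "score": 7},
        {"id": "r2", "user": "bob", "score": 3},
        {"id": "r3", "user": "ann", "score": 5}]

# Access path 1: row-major, keyed by primary key then attribute name.
row_major = sorted((f"{r['id']}|{attr}", r[attr])
                   for r in rows for attr in ("user", "score"))

# Access path 2: an index keyed by attribute value, pointing back at row ids.
by_user = sorted((f"user|{r['user']}|{r['id']}", "") for r in rows)

def scan(table, prefix):
    """A contiguous range scan over the sorted key space."""
    return [kv for kv in table if kv[0].startswith(prefix)]

# "All attributes of row r2" is one contiguous scan in the row-major layout;
# "all rows for user ann" is one contiguous scan in the index layout.
ann_rows = [k.split("|")[2] for k, _ in scan(by_user, "user|ann|")]
```

Under the row-major layout, the user query would require a full-table scan; under the index layout, the per-row query would. Choosing (or combining) layouts is exactly the design decision the access-path grammar abstracts.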
When: 5 Dec 2016 - 11:00am Where: CSE 403

Marco Tulio Correia Ribeiro, General Examination

Title: Model-Agnostic Explanations for Machine Learning Predictions

Advisor: Carlos Guestrin

Supervisory Committee: Carlos Guestrin (Chair), Jevin West (GSR, I School), Jeffrey Heer, and Sameer Singh (UC Irvine)

Abstract: Machine learning is at the core of many recent advances in science and technology. Unfortunately, the important role of humans is an oft-overlooked aspect in the field. As machine learning becomes a crucial component of an ever-growing number of user-facing applications, there are multiple reasons to focus on interpretable machine learning. A possible approach to interpretability in machine learning is to be model-agnostic, i.e. to extract post-hoc explanations by treating the original model as a black box. 

In this work, we present the thesis that model-agnostic interpretability, despite its challenges, is useful for human interaction with machine learning across a variety of domains and tasks. The goals of this work are threefold: (1) to develop methods that produce model-agnostic explanations, (2) to investigate and evaluate how humans react to such explanations, and (3) to develop methods to incorporate human feedback into models, thereby “closing the loop”. Model-agnosticism inherently leads to some approximation, as the only way to achieve perfect faithfulness in an explanation is to inspect the black-box model itself. A major contribution of our work is considering and measuring how humans interact with model-agnostic explanations, and designing the explanations to optimize that interaction.
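The flavor of a model-agnostic explanation can be sketched in a few lines: perturb the input near the instance being explained, query the black box, and fit a local linear model whose weights serve as the explanation. The sketch below is a deliberate simplification (a symmetric two-sample perturbation per feature, with an invented black-box function), not the method proposed in the talk.

```python
# Minimal perturb-and-fit sketch of a local, model-agnostic explanation.

def black_box(x0, x1):
    # Stand-in for an opaque model; its internals are assumed unknown.
    return 3 * x0 - 2 * x1 + 0.5 * x0 * x1

def explain(f, x, eps=1e-3):
    """Per-feature local weights via symmetric perturbations around x,
    i.e., a two-sample local linear fit per feature."""
    weights = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        weights.append((f(*hi) - f(*lo)) / (2 * eps))
    return weights

# Explain the prediction at the instance (1.0, 1.0).
w = explain(black_box, [1.0, 1.0])
```

Even though the black box is nonlinear, the local weights (here roughly 3.5 and -1.5) faithfully describe its behavior near the chosen instance, which is the approximation trade-off the paragraph above describes.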
When: 2 Dec 2016 - 3:00pm Where: CSE 128

Shengjie Wang, Qualifying Project Presentation

Title: Faster and More Accurate Graphical Model Identification of Tandem Mass Spectra using Trellises

Advisors: Jeff Bilmes and William Noble

Abstract: Tandem mass spectrometry (MS/MS) is the dominant high-throughput technology for identifying and quantifying proteins in complex biological samples. Analysis of the tens of thousands of fragmentation spectra produced by an MS/MS experiment begins by assigning to each observed spectrum the peptide that is hypothesized to be responsible for generating the spectrum. This assignment is typically done by searching each spectrum against a database of peptides. To our knowledge, all existing MS/MS search engines compute scores individually between a given observed spectrum and each possible candidate peptide from the database. In this work, we use a trellis, a data structure capable of jointly representing a large set of candidate peptides, to avoid redundantly recomputing common sub-computations among different candidates. We show how trellises may be used to significantly speed up existing scoring algorithms, and we theoretically quantify the expected speed-up afforded by trellises. Furthermore, we demonstrate that compact trellis representations of whole sets of peptides enable efficient discriminative learning of a dynamic Bayesian network for spectrum identification, leading to greatly improved peptide identification accuracy.
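The source of the speed-up can be illustrated with a toy prefix-sharing computation (a caricature with an invented scoring function, not the paper's algorithm): candidates that share a prefix share that prefix's partial score, so it is computed once for the whole group rather than once per candidate.

```python
# Toy trellis/trie-style scoring: shared prefixes are scored once.

CALLS = {"n": 0}

def residue_score(prefix_mass, aa):
    CALLS["n"] += 1                      # count the "expensive" scoring calls
    return (prefix_mass + ord(aa)) % 7   # stand-in for a real match score

def score_trie(peptides):
    """Score every peptide, memoizing partial scores on shared prefixes."""
    scores = {}

    def walk(prefix, partial, mass, candidates):
        by_next = {}
        for p in candidates:
            if len(p) == len(prefix):
                scores[p] = partial
            else:
                by_next.setdefault(p[len(prefix)], []).append(p)
        for aa, group in by_next.items():
            s = residue_score(mass, aa)   # computed once per shared prefix
            walk(prefix + aa, partial + s, mass + ord(aa), group)

    walk("", 0, 0, list(peptides))
    return scores

peps = ["PEPTIDE", "PEPTIDES", "PEPTIDAL"]
scores = score_trie(peps)
```

For these three candidates, independent scoring would call the residue scorer 7 + 8 + 8 = 23 times; sharing the common "PEPTID" prefix cuts that to 10, and the saving grows with the size and redundancy of the peptide database.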

When: 2 Dec 2016 - 9:00am Where: Foege Building S448

Makrand Sinha, General Examination

Title: New Lower Bounds in Complexity via Information Theory

Advisor: Anup Rao

Supervisory Committee: Anup Rao (Chair), Sreeram Kannan (GSR, EE), Paul W. Beame, and Dan Suciu

Abstract: In recent years, tools in information theory have been used extensively to solve some long-standing open problems in complexity theory. In this proposal, we apply information-theoretic tools to prove new lower bounds in different computational models:

1. We initiate the study of direct-sum questions in the model of streaming algorithms and prove some initial results in this direction.

2. We introduce a new communication lower bound method which gives a simpler proof of the recent result of Ganor, Kol and Raz (STOC '15) proving an exponential separation between information complexity and communication complexity.

3. We propose to prove lower bounds on the size of approximate linear programming formulations for some well-known optimization problems. In this direction, we discuss our efforts towards proving a tight lower bound for approximate extended formulations for the matching polytope by showing connections to lopsided unique disjointness.

When: 30 Nov 2016 - 8:00am Where: CSE 674

Jiechen Chen, Qualifying Project Presentation

Title: Information Revelation in a Competitive World

Advisors: Anna Karlin and Shayan Oveis Gharan

Abstract: Consider a company working to develop a new technology, or a mathematician seeking to prove a theorem. We consider the question of when such a utility maximizing agent (the company or the mathematician) should reveal partial progress. On the one hand, filing for a patent on a limited technology or publishing an intermediate result yields payoff to the agent. On the other hand, it allows the agent’s competitors to catch up and build on the agent’s progress to date, potentially getting in the way of the agent obtaining a far higher reward later. To study this question, we define a stylized model of competition between two agents, in which the strategic decision for each agent is when to reveal partial progress. For a special case, we characterize the set of Nash equilibrium strategies.

When: 29 Nov 2016 - 10:30am Where: CSE 203

Calvin Loncaric, General Examination

Title: Data Structure Synthesis

Advisor: Michael Ernst

Supervisory Committee: Michael Ernst (Chair), Jose Kutz (GSR, Applied Math), Emina Torlak, and Alvin Cheung

Abstract: Having the right data structures can greatly reduce---or entirely eliminate---the effort required to accomplish many programming tasks.  The simple data structures found in standard libraries can ease some programming tasks, but they fall woefully short when more complicated data structures are needed.  Our existing work has already shown that for some domains, the implementation of complex data structures can be entirely automated given a specification of the structure's desired behavior.  Such automation has enormous potential to improve the correctness and efficiency of software while saving developer effort.  This talk will examine several previous approaches to the problem and related lines of research, summarize our existing techniques and findings, and propose future research directions.  In particular, we seek to generalize and extend our existing techniques to handle a much wider class of data structures, and to thoroughly evaluate the extent to which an automatic data structure synthesizer can affect the development of quality software.

When: 29 Nov 2016 - 9:30am Where: CSE 503

Sivaramakrishnan Natarajan Ramamoorthy, General Examination

Title: Communication Complexity and Data Structure Lower Bounds through an Information Theoretic Lens 

Advisor: Anup Rao

Supervisory Committee: Anup Rao (Chair), Rekha Thomas (GSR, Math), Paul Beame, and Dan Suciu

Abstract: A wide variety of problems in two-party communication complexity and data structures share a common information-theoretic underpinning. In this work, we exploit this connection to study two problems: interactive compression of communication protocols, and data structure lower bounds for finding the median of a set.

The question of interactive compression asks if one can compress the communication of a protocol when the information revealed by the players is small. We prove improved bounds on the compression when the information revealed by the players is asymmetric. Asymmetry in the information often arises while simulating data structure algorithms with communication protocols, thus providing a motivation for this study. 

We also initiate the study of finding the median in the cell probe model. The task is to maintain a set of integers, adding new elements and querying the median. We prove lower bounds on the query time for finding the median when the add operations are non-adaptive. The proof uses a connection to the Sunflower Lemma from combinatorics. Going forward, we propose to study the following problems: (a) lower bounds for median with adaptive add operations, and (b) threshold phenomena in data structures for predecessor search.
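For contrast with these lower bounds, the classic upper-bound construction for the same task is the two-heap method (standard textbook material, not part of the talk): O(log n) per add and O(1) per median query.

```python
import heapq

# Maintain a set under `add` with O(1) median queries: keep the smaller
# half in a max-heap (stored negated) and the larger half in a min-heap,
# with the max-heap allowed to be one element larger.

class MedianSet:
    def __init__(self):
        self.lo = []  # max-heap (negated values): the smaller half
        self.hi = []  # min-heap: the larger half

    def add(self, x):
        heapq.heappush(self.lo, -x)
        # Move the largest of the small half to the large half,
        # then rebalance so lo is never smaller than hi.
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        return -self.lo[0]  # the lower median when the size is even

m = MedianSet()
for v in [5, 1, 9, 3, 7]:
    m.add(v)
```

Each add costs a constant number of heap operations, i.e., O(log n) comparisons; the lower bounds in the talk ask how close to this one can provably get in the cell probe model.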
When: 28 Nov 2016 - 4:00pm Where: CSE 303

Justin Huang, General Examination

Title: Code3: Software Development of Mobile Manipulator Robots for All Programmers

Advisor: Maya Cakmak

Supervisory Committee: Maya Cakmak (Chair), Andy Ko (iSchool), Dieter Fox, and Michael Ernst

Abstract: Programming mobile manipulator robots like the PR2 to perform useful, end-to-end tasks requires specialized knowledge of robot perception, manipulation, and programming frameworks. As a result, general programmers who have not invested the time to learn these skills are unable to program robots. This report explains our work on bridging that gap: enabling programmers to rapidly develop end-to-end robot programs, such as fetching objects, clearing tables, or playing games, without specialized robotics knowledge. Our system, Code3, abstracts the process of programming common perception and manipulation tasks, providing maximal expressivity with a minimal set of user-friendly features. We describe over a dozen examples of different tasks developed using our system. Our user studies of Code3 and its subcomponents show that it enables general programmers to develop useful robot programs in much less time than it would take to learn previous robot programming frameworks such as ROS (Robot Operating System). The studies also revealed areas where the system's usefulness and usability can be improved. This report lays out a proposal for future work to improve the system and enable adoption of our framework in the service robotics sector.

When: 23 Nov 2016 - 11:30am Where: CSE 128

Ming Liu, Qualifying Project Presentation

Title: IncBricks: Enabling In-network Computation with a Programmable Network Middlebox

Advisors: Arvind Krishnamurthy and Luis Ceze

Abstract: The emergence of programmable network devices and the increasing data traffic of datacenters motivate the idea of in-network computation. By offloading computing operations onto intermediate networking devices (e.g., switches, middleboxes), one can (1) serve network requests on the fly with low latency; (2) reduce datacenter traffic and mitigate network congestion; and (3) save energy by running servers in a low-power mode. However, enabling in-network computation inside a datacenter is challenging because (1) existing switch technology does not provide general computing capabilities, and (2) commodity datacenter networks are complex (e.g., hierarchical fat-tree topologies, multipath communication). In this paper, we present IncBricks, a hardware-software co-designed system that performs in-network computation on a programmable network middlebox. As a Memcached accelerator, our prototype lowers GET latency by over 25% and doubles throughput for 64-byte values in a common cluster configuration. When computing on cached values, IncBricks provides 3 times more throughput and a third of the latency of client-side computation.
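The idea can be caricatured in software (this is my schematic sketch, not IncBricks' design; cache coherence and write-back are elided): a middlebox on the path between clients and a key-value backend answers cached GETs, and can even compute on cached values, without a backend round trip.

```python
# Schematic in-network cache middlebox in front of a key-value backend.

BACKEND = {"counter": 10, "greeting": "hi"}
backend_trips = {"n": 0}

def backend_get(key):
    backend_trips["n"] += 1          # model one round trip to the server
    return BACKEND[key]

class Middlebox:
    def __init__(self):
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = backend_get(key)   # miss: one backend trip
        return self.cache[key]

    def incr(self, key, delta):
        # Compute on the cached value "in the network": no backend trip.
        self.cache[key] = self.get(key) + delta
        return self.cache[key]

mb = Middlebox()
a = mb.get("counter")      # miss -> fetched from the backend
b = mb.get("counter")      # hit  -> served in-network
c = mb.incr("counter", 5)  # computed on the cached value in-network
```

Only the first GET crosses to the backend; the repeated GET and the increment are absorbed on-path, which is the latency and traffic saving the prototype measures in hardware.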

When: 22 Nov 2016 - 4:15pm Where: CSE 674

Xiaoyi Zhang, Qualifying Project Presentation

Title: Interaction Proxies for Runtime Repair and Enhancement of Mobile Application Accessibility

Advisors: James Fogarty, Anat Caspi and Jacob O. Wobbrock

Abstract: We introduce interaction proxies as a strategy for runtime repair and enhancement of the accessibility of mobile applications, allowing inaccessible mobile interfaces to be made accessible. Conceptually, interaction proxies are inserted between an application’s original interface and the manifest interface that a person uses to perceive and manipulate the application. This strategy allows third-party developers and researchers to modify an interaction without an application’s source code, without rooting the phone or otherwise modifying an application, while retaining all capabilities of the system (e.g., Android’s full implementation of the TalkBack screen reader). This paper introduces interaction proxies, defines a design space of interaction re-mappings, identifies necessary implementation abstractions, presents details of implementing those abstractions in Android, and demonstrates a set of Android implementations of interaction proxies from throughout our design space. We then present a set of interviews with blind and low-vision people interacting with our prototype interaction proxies, using these interviews to explore the seamlessness of interaction, the perceived usefulness and potential of interaction proxies, and visions of how such enhancements could gain broad usage. By allowing third-party developers and researchers to improve an interaction, interaction proxies offer a new approach to personalizing mobile application accessibility and a new approach to catalyzing development, deployment, and evaluation of mobile accessibility enhancements.

When: 18 Nov 2016 - 4:00pm Where: CSE 128

Oleksandr Polozov, General Examination

Title: PROSE: A Programming-by-Examples Framework for Industrial Data Wrangling and Beyond

Advisors: Sumit Gulwani (MSR) and Zoran Popovic

Supervisory Committee: Sumit Gulwani (Co-Chair), Zoran Popovic (Co-Chair), Gary Hsieh (GSR, HCDE), and Ras Bodik

Abstract: Inductive program synthesis or programming by examples (PBE) is the task of synthesizing a program in an underlying domain-specific language (DSL), given the user's intent as an under-specification (e.g., input-output examples of the program's behavior). Thanks to their ease of use, PBE-based technologies have gained a lot of prominence in the last five years, successfully automating computational tasks for end-users and repetitive routines for professionals. However, designing, developing, and maintaining an effective industrial-quality PBE technology used to be an intellectual and engineering challenge, requiring 1-2 man-years of Ph.D. effort.

In this work, we present PROSE (PROgram Synthesis using Examples), a universal PBE framework that vastly simplifies industrial applications of inductive program synthesis. The key idea behind PROSE is that many PBE technologies can be refactored as instantiations of one generic meta-algorithm for synthesis, parameterized with declarative properties of the DSL operators. This meta-algorithm is based on backpropagation: it pushes the example-based spec downward through the DSL grammar, reducing the synthesis problem to smaller synthesis subproblems on the desired program's subexpressions. In addition to being more maintainable and accessible than the complex, entangled implementations of state-of-the-art PBE tools, this approach is also more efficient, since it does not need to filter out incorrect program candidates during the synthesis process.
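The backpropagation idea can be caricatured in a toy DSL (my own toy, not PROSE's API): to learn `Concat(A, B)` from one input-output example, enumerate the splits of the output; each split backpropagates a smaller spec to each argument, which atomic learners then solve independently.

```python
# Toy spec-backpropagation synthesis for a DSL of
# Concat(atom, atom) over atoms Sub(start, end) and Const(s).

def learn_atom(inp, out):
    """Atomic learners: a substring of the input, or a constant string."""
    progs = []
    if out in inp:
        i = inp.index(out)
        progs.append(("Sub", i, i + len(out)))
    progs.append(("Const", out))
    return progs

def learn_concat(inp, out):
    """Witness function for Concat: every split of `out` backpropagates
    a smaller spec to each argument; combine the sub-solutions."""
    progs = []
    for i in range(1, len(out)):
        for a in learn_atom(inp, out[:i]):
            for b in learn_atom(inp, out[i:]):
                progs.append(("Concat", a, b))
    return progs

def run(prog, inp):
    tag = prog[0]
    if tag == "Sub":
        return inp[prog[1]:prog[2]]
    if tag == "Const":
        return prog[1]
    return run(prog[1], inp) + run(prog[2], inp)

progs = learn_concat("hello world", "hello!")
```

Every returned program satisfies the example by construction, because each candidate is assembled only from sub-solutions of the backpropagated sub-specs; PROSE's ranking would then choose among them.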

We have built 10+ PBE technologies with PROSE in the domains of data wrangling, software refactoring, and programming education. Many of them have been deployed in Microsoft mass-market products, including Excel, PowerShell, Exchange, Azure Operational Management Suite, and Azure Data Factory. In this talk, I will also give an overview of the insights that we have gained from these deployments, including (a) challenges in resolving ambiguity in user intent, (b) HCI research in user interaction models, (c) approaches to making program synthesis accessible to software developers, and (d) importance of effortless parameterization with domain-specific insight. I will also present our future research plans on investigating novel domains, user studies in disambiguation models, and adaptive synthesis technologies.

When: 17 Nov 2016 - 12:00pm Where: CSE 403

Aditya Vashistha, General Examination

Title: Social Computing Platforms for the Next Billion People

Advisors: Richard Anderson (Chair), Julie Kientz (GSR, HCDE), James Fogarty, Kurtis Heimerl, and Katharina Reinecke

Abstract: Social media platforms have revolutionized how people communicate, consume and share information and have deeply impacted the lives of billions of people around the world. In addition to providing entertainment and information, they are playing a pivotal role in managing crisis response, political activism, and civil society movements like the Arab Spring. Recent years have also seen the rise of crowdsourcing marketplaces like Amazon Mechanical Turk and CrowdFlower that provide people with additional earning opportunities while advancing social science experiments and data processing. However, low-income, low-literate people in as-yet unconnected communities are often unable to reap the benefits of both social media and crowdsourcing platforms because they face a complex array of socioeconomic barriers, literacy constraints, and infrastructural challenges. The goal of my research is to design, build and evaluate voice-based social media platforms and crowdsourcing marketplaces for low-income, low-literate people in resource-constrained settings. To achieve this goal, I am currently focused on three main research directions: (1) creating social computing platforms that are accessible for people who use basic mobile phones without Internet connectivity; (2) creating social computing platforms for people using smartphones with intermittent Internet connectivity; and (3) understanding and systematizing the content creation, consumption, and sharing practices of the next billion people.

In this talk, I will describe my initial work in these three research directions. In particular, I will present 1) Sangeet Swara, a voice-based social media platform for low-income, low-literate basic mobile phone users; 2) Respeak, a voice-based, crowd-powered speech transcription platform that provides additional earning opportunities to low-income people who own smartphones; and 3) Projecting Health, a community-led video education intervention to improve maternal health in rural communities in India.

When: 14 Nov 2016 - 12:00pm Where: CSE 305

Niel Lebeck, Qualifying Project Presentation

Title: Programming Interfaces for Distributed Data Management

Advisors: Arvind Krishnamurthy and Hank Levy

Abstract: Modern distributed applications, from Google Docs to Words with Friends, have the fundamental goal of enabling users to interact with shared data. Client apps display shared data to users and accept user input, and a hierarchy of servers and other backend components takes care of executing operations on shared data and notifying clients of updates. Increasingly, the backend functionality of these applications is provided not by a traditional three-tier architecture but by a Backend-as-a-Service (BaaS), a black-box service extending from the clients to cloud storage. When a distributed application uses a BaaS, clients (rather than servers) are responsible for executing the application code that reads and writes shared data, presenting several challenges in designing the programming interfaces and underlying protocols of such a service. Existing services provide only weak guarantees for accessing shared data, failing to meet the needs of modern distributed applications. To address these challenges, we created Diamond, a data management system that provides end-to-end data management for distributed applications with strong guarantees. This talk focuses on programming interfaces, showing why the programming interfaces of existing systems are inadequate and how Diamond’s interface enables it to provide strong guarantees.

When: 10 Nov 2016 - 10:30am Where: CSE 110

Kendall Lowrey, Qualifying Project Presentation

Title: Real-time State Estimation with Whole-Body Multi-Contact Dynamics: A Modified UKF Approach

Advisors: Emo Todorov and Dieter Fox

Abstract: We present a real-time state estimator applicable to whole-body dynamics in contact-rich behaviors. Our estimator is based on the Unscented Kalman Filter (UKF), with modifications that proved essential in the presence of strong contact non-linearities combined with unavoidable model errors. Instead of the standard UKF weighting scheme, we use uniform weighting which is less susceptible to samples that violate unilateral constraints. We also develop a rich model of process noise including noise in system parameters, control noise, as well as a novel form of timing noise which makes the estimator more robust. The method is applied to an enhanced Darwin robot in walking as well as pseudo-walking while lying on the floor. Estimation accuracy is quantified via leave-one-out cross-validation, as well as comparisons to ground-truth motion capture data which is not used by the estimator. A full update takes around 7 msec on a laptop processor. Furthermore we perform the computationally-expensive prediction step (involving 210 forward dynamics evaluations in the MuJoCo simulator) while waiting for sensor data, and then apply the correction step as soon as sensor data arrive. This correction only takes 0.5 msec. Thus the estimator adds minimal latency to closed-loop control, even though it handles the whole-body robot dynamics directly, without resorting to any of the modeling shortcuts used in the past.
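The uniform-weighting modification can be illustrated in one dimension (this is my illustrative sketch with made-up constants, not the paper's estimator): propagate sigma points through a nonlinear, contact-like function and recover the mean and variance with equal weights, rather than the standard UKF weights that single out the center point.

```python
import math

# 1-D unscented propagation with uniform sigma-point weights.

def sigma_points(mean, var, kappa=0.0):
    spread = math.sqrt((1 + kappa) * var)
    return [mean, mean + spread, mean - spread]

def contact_dynamics(x):
    # A non-smooth, contact-like map: a hard unilateral constraint at x = 0.
    return max(x, 0.0) * 0.9

def propagate_uniform(mean, var):
    pts = [contact_dynamics(p) for p in sigma_points(mean, var)]
    w = 1.0 / len(pts)                       # uniform weights
    new_mean = sum(w * p for p in pts)
    new_var = sum(w * (p - new_mean) ** 2 for p in pts)
    return new_mean, new_var

m, v = propagate_uniform(0.5, 1.0)
```

With the state distribution straddling the contact, one sigma point lands on the constraint; uniform weights keep any single such sample from dominating the estimate, which is the robustness property motivating the modification.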

When: 8 Nov 2016 - 2:30pm Where: CSE 674

Harley Montgomery, Qualifying Project Presentation

Title: Guided Policy Search via Approximate Mirror Descent

Advisors: Sergey Levine and Emo Todorov

Abstract: Guided policy search algorithms can be used to optimize complex nonlinear policies, such as deep neural networks, without directly computing policy gradients in the high-dimensional parameter space. Instead, these methods use supervised learning to train the policy to mimic a "teacher" algorithm, such as a trajectory optimizer or a trajectory-centric reinforcement learning method. Guided policy search methods provide asymptotic local convergence guarantees by construction, but it is not clear how much the policy improves within a small, finite number of iterations. We show that guided policy search algorithms can be interpreted as an approximate variant of mirror descent, where the projection onto the constraint manifold is not exact. We derive a new guided policy search algorithm that is simpler and provides appealing improvement and convergence guarantees in simplified convex and linear settings, and show that in the more general nonlinear setting, the error in the projection step can be bounded. We provide empirical results on several simulated robotic navigation and manipulation tasks that show that our method is stable and achieves similar or better performance when compared to prior guided policy search methods, with a simpler formulation and fewer hyperparameters.
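The alternation at the heart of guided policy search can be sketched in a bare-bones scalar caricature (the teacher, cost, and linear policy below are all invented for illustration; this is not the paper's algorithm): the teacher proposes actions trading off task cost against staying close to the current policy, and the policy is then fit to those actions by supervised regression.

```python
# Scalar caricature of the guided policy search loop.

STATES = [-2.0, -1.0, 0.5, 1.5]

def teacher_action(x, theta, nu=0.5):
    """Teacher step: minimize task cost (u + 2x)^2 plus a proximity term
    nu * (u - theta*x)^2 toward the current policy, in closed form."""
    return (-2.0 * x + nu * theta * x) / (1.0 + nu)

def fit_policy(pairs):
    """Supervised step: least-squares fit of the linear policy u = theta*x
    to the teacher's (state, action) pairs."""
    num = sum(x * u for x, u in pairs)
    den = sum(x * x for x, u in pairs)
    return num / den

theta = 0.0
for _ in range(10):
    pairs = [(x, teacher_action(x, theta)) for x in STATES]
    theta = fit_policy(pairs)
```

Each round moves the policy a bounded step toward the teacher's optimum (here u = -2x), which mirrors the projected, proximal structure that the mirror-descent interpretation makes precise.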

When: 7 Nov 2016 - 3:30pm Where: CSE 503

Vikash Kumar, Final Examination

Title: Manipulators and Manipulation in High Dimensional Spaces

Advisor: Emo Todorov

Supervisory Committee: Emo Todorov (Chair), Mehran Mesbahi (GSR, A&A), Rajesh P.N. Rao, Dieter Fox, and Sergey Levine

Live stream: YouTube Live (talk slides for reference)

Abstract: Recent technological advancements are aggressively pushing the state of the art in robotics, especially in the field of legged locomotion. Not only robust, agile quadrupedal locomotion but also dynamic bipedal locomotion in unstructured environments has been demonstrated. Despite the similarity between legged locomotion and hand manipulation -- both involve planning trajectories for kinematic trees interacting with the environment by means of contacts -- these advancements have not yet had a comparable impact on dexterous dynamic manipulation. Robotic manipulation still involves simple kinesthetic movements with simple low degree-of-freedom manipulators. Most real-world solutions involve custom manipulators with task-specific, hand-tuned solutions. Given that the design goal of robotic devices is to interact with and manipulate the world so as to solve real tasks, one custom manipulator with a carefully tuned solution per task does not scale. If robotic devices are to perform the tasks of daily life in home environments or dangerous tasks in hostile environments, robust and universal manipulators capable of handling a variety of tasks are required.

This talk will present a collection of results collectively pushing the state of the art towards dynamic dexterous manipulation. In particular, I'll present the "ADROIT manipulation platform", a fast, strong and compliant tendon-driven system capable of driving a ShadowHand skeleton beyond human capabilities. I will also elaborate on a real-time behavior synthesis framework capable of generating dynamic dexterous manipulation in high-dimensional spaces. The framework generalizes well across different tasks, as it needs only high-level task specifications as inputs in order to synthesize the details of the manipulation. I will conclude with results on transferring these behaviors to the ADROIT manipulation platform using machine learning techniques.


When: 2 Nov 2016 - 12:00pm Where: CSE 305

Ravi Karkar, Qualifying Project Presentation

Title: TummyTrials: Using Self-Experimentation to Detect Individualized Food Triggers

Advisors: James Fogarty, Julie Kientz, and Sean Munson

Abstract: Diagnostic self-tracking, the recording of personal information to diagnose or manage a health condition, is a common practice, especially for people with chronic conditions. Unfortunately, many who attempt diagnostic self-tracking have trouble accomplishing their goals. People often lack the knowledge and skills needed to design and conduct scientifically rigorous experiments, and current tools provide little support. To address these shortcomings and explore opportunities for diagnostic self-tracking, we designed, developed, and evaluated a mobile app that applies a self-experimentation framework to support patients suffering from irritable bowel syndrome (IBS) in identifying their personal food triggers. TummyTrials aids a person in designing, executing, and analyzing self-experiments to evaluate whether a specific food triggers their symptoms. We examined the feasibility of this approach in a field study with 15 IBS patients, finding that participants could use the tool to reliably conduct a self-experiment. However, we also discovered an underlying tension between scientific validity and the lived experience of self-experimentation. We discuss challenges of applying clinical research methods in everyday life, motivating a need for the design of self-experimentation systems to balance rigor with the uncertainties of everyday life.

When: 2 Nov 2016 - 10:30am Where: CSE 305

Bill Zorn, Qualifying Project Presentation

Title: Tool-assisted development and validation of architectural timing models

Advisors: Dan Grossman and Luis Ceze

Abstract: We present a tool-assisted approach for the development and validation of architectural timing models. Our tool assists development by allowing models to be specified as abstract sketches of timing behavior. For example, a timing model sketch might state that a move instruction takes k1 cycles while a jump takes k2 cycles. We use the Z3 SMT solver either to synthesize a concrete model from this sketch by finding k1 and k2, or to provide a counterexample that demonstrates to the user why no such k1 and k2 can exist. Additionally, our tool assists validation of timing models for specific processor architectures by observing real, physical hardware. We use our approach to build a timing model for the TI MSP430FR5969 microcontroller. The resulting model exposes many inaccuracies in the official TI user manual and in existing emulators, and we show that it is cycle-accurate for code from a variety of applications when compared to the behavior of physical hardware devices.
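The sketch-solving idea can be illustrated with a toy, pure-Python stand-in (the hypothetical helper `solve_sketch` below is not the actual Z3-based tool): given observed (mov count, jmp count, cycles) triples, either recover concrete k1 and k2, or return an observation proving no such pair exists.

```python
from fractions import Fraction

def solve_sketch(observations):
    """observations: (mov_count, jmp_count, observed_cycles) triples.
    Returns (model, None) where model = (k1, k2) fits every observation,
    or (None, counterexample) if no such k1, k2 can exist."""
    model = None
    for i in range(len(observations)):
        for j in range(i + 1, len(observations)):
            (a1, b1, c1), (a2, b2, c2) = observations[i], observations[j]
            det = a1 * b2 - a2 * b1
            if det != 0:  # two independent observations pin down k1, k2
                k1 = Fraction(c1 * b2 - c2 * b1, det)
                k2 = Fraction(a1 * c2 - a2 * c1, det)
                model = (k1, k2)
                break
        if model is not None:
            break
    if model is None:
        return None, None  # underdetermined: all observations are dependent
    k1, k2 = model
    for a, b, c in observations:
        if a * k1 + b * k2 != c:
            return None, (a, b, c)  # this observation rules out every model
    return model, None
```

An SMT solver generalizes this far beyond two unknowns and linear constraints; the toy version only shows the synthesize-or-refute shape of the problem.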

When: 26 Oct 2016 - 12:30pm Where: CSE 615 (Sampa lab)

Safiye Celik, General Examination

Title: High-dimensional machine learning techniques for integrative analysis of heterogeneous molecular data

Advisor: Su-In Lee

Supervisory Committee: Su-In Lee (Chair), Meliha Yetisgen (GSR, Biomedical Informatics), Paul Crane (Public Health), Raymond David Hawkins (Genome Sciences), and Ali Shojaie (Biostatistics, Public Health)

Abstract: Mining robust and relevant information from molecular data is essential to identifying the mechanisms of complex biological phenotypes, including disease mechanisms. However, there are three main challenges with applying a standard machine learning approach to an individual molecular dataset. First, most molecular data is within the p >> n regime (i.e., the number of variables is much greater than the number of samples); for instance, there are about 20,000 genes in a human cell, while at most a few hundred patient samples are available in individual gene expression datasets. This implies that models allowing for arbitrarily rich dependencies among variables (such as those used in deep learning methods) are highly likely to overfit the data. Second, any single study will contain technical or experimental confounders that make the features learned from an individual dataset not necessarily generalizable to other datasets. Finally, and most importantly, any biological or disease mechanism originates from the collaboration of several different types of genetic and epigenetic elements, and focusing on any single type of those elements will provide a poor understanding of the underlying complex biology.

In this talk, I will introduce novel machine learning techniques to address the aforementioned challenges. I will start by describing two methods we developed, MGL and MERGE, each of which learns different kinds of biologically interpretable features from a single high-dimensional gene expression dataset. Then I will introduce the INSPIRE probabilistic model, which enables utilizing all samples from multiple datasets with different sets of measured variables, increasing the statistical power to detect generalizable and robust features. I will show that when used to combine datasets from ovarian cancer, INSPIRE reveals important molecular events underlying the disease. Finally, I will propose a network learning framework that builds on our previous work and enables integrating heterogeneous data of different molecular types from Alzheimer's disease and leukemia patients. We anticipate that the features learned by the proposed method will provide a better understanding of complex disease mechanisms, which will lead to new therapeutic targets as well as better diagnosis and personalized treatment of patients.

When: 17 Oct 2016 - 1:00pm Where: CSE 303

Talia Ringer, Qualifying Project Presentation

Title: AUDACIOUS: User-Driven Access Control with Unmodified Operating Systems

Advisors: Dan Grossman and Franzi Roesner

Abstract: User-driven access control improves the coarse-grained access control of current operating systems (particularly in the mobile space) that provide only all-or-nothing access to a resource such as the camera or the current location. By granting appropriate permissions only in response to explicit user actions (for example, pressing a camera button), user-driven access control better aligns application actions with user expectations. Prior work on user-driven access control has relied in essential ways on operating system (OS) modifications to provide applications with uncompromisable access control gadgets, distinguished user interface (UI) elements that can grant access permissions.

This work presents a design, implementation, and evaluation of user-driven access control that works with no OS modifications, thus making deployability and incremental adoption of the model more feasible. We develop (1) a user-level trusted library for access control gadgets, (2) static analyses to prevent malicious creation of UI events, illegal flows of sensitive information, and circumvention of our library, and (3) dynamic analyses to ensure users are not tricked into granting permissions. In addition to providing the original user-driven access control guarantees, we use static information flow to limit where results derived from sensitive sources may flow in an application.

Our implementation targets Android applications. We port open-source applications that need interesting resource permissions to use our system. We determine in what ways user-driven access control in general and our implementation in particular are good matches for real applications. We demonstrate that our system is secure against a variety of attacks that malware on Android could otherwise mount.

When: 30 Sep 2016 - 2:30pm Where: CSE 203

Stuart Pernsteiner, General Examination

Title: Rapid Development of Mechanized Semantics

Advisors: Zach Tatlock and Michael Ernst

Supervisory Committee: Zach Tatlock (co-Chair), Michael Ernst (co-Chair), Jason Detwiler (GSR, Physics), and Sergey Levine

Abstract: Formal verification using modern proof assistants provides high assurance that a program behaves according to its specification. However, applying these techniques to a program requires suitable formal semantics for the programming language in use, and designing semantics for new languages is a lengthy process. We propose a new approach to the design of language semantics, using a simple intermediate language and differential testing to rapidly develop semantics that reflect the behavior of existing language implementations. We plan to evaluate this approach by applying it to the Clinical Neutron Therapy System, a complex medical installation, by designing semantics, verifying program properties, and developing verified language implementations for safety-critical components written in a variety of languages.

When: 21 Sep 2016 - 11:00am Where: CSE 503

Robert Gens, Final Examination

Title: Learning Robust Tractable Models for Vision

Advisor: Pedro Domingos

Supervisory Committee: Pedro Domingos (Chair), Marina Melia-Predoviciu (GSR, Stats), Ali Farhadi, Richard Newcombe (Surreal Vision), and John Platt (Google)

Abstract: Human vision is a demanding computation that acts on and learns from billions of moving measurements every second. Computer vision requires models that are both tractable for real-time learning and inference and robust to the transformations of the visual world.

For a vision system to benefit an embodied agent it must be able to (a) learn tractable models discriminatively so that it does not waste computation on nonessential questions, (b) learn model structure so computation is only added where needed, (c) learn from images subject to transformations, and (d) learn new concepts quickly.  In this dissertation we tackle these four desiderata, weaving together sum-product networks, neural networks, kernel machines, and symmetry group theory.

First, we extend sum-product networks so that they can be trained discriminatively.  This expands the space of SPN architectures and allows feature functions, making them a compelling tractable alternative to conditional random fields.  We show that discriminative SPNs can be competitive with deep models on image classification.

Second, we present an algorithm to learn the structure of sum-product networks.  The top-down recursive algorithm builds a product if it can decompose variables and otherwise a sum to cluster instances.  Surprisingly, this algorithm learns SPNs with superior inference accuracy and speed compared to probabilistic graphical models on a large number of datasets.
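The top-down recursion described above can be sketched at toy scale. This is a simplified stand-in for the actual structure-learning algorithm: it uses a pairwise-correlation test to decompose variables and a median split to cluster instances, both of which are illustrative assumptions rather than the paper's choices.

```python
def pearson(xs, ys):
    """Pearson correlation; returns 0.0 if either column is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy) ** 0.5

def learn_spn(data, variables, threshold=0.3):
    """data: list of equal-length tuples; variables: column indices."""
    if len(variables) == 1:
        v = variables[0]
        return ('leaf', v, [row[v] for row in data])
    # Try to decompose the variables into (nearly) independent groups.
    v0, group, others = variables[0], [variables[0]], []
    for v in variables[1:]:
        r = pearson([row[v0] for row in data], [row[v] for row in data])
        (group if abs(r) > threshold else others).append(v)
    if others:  # decomposition found -> product node over the groups
        return ('product', learn_spn(data, group, threshold),
                learn_spn(data, others, threshold))
    # Otherwise cluster the instances (median split) -> sum node.
    col = sorted(row[v0] for row in data)
    med = col[len(col) // 2]
    lo = [row for row in data if row[v0] < med]
    hi = [row for row in data if row[v0] >= med]
    if not lo or not hi:  # cannot split further
        return ('leaf', tuple(variables), data)
    return ('sum', (len(lo), learn_spn(lo, variables, threshold)),
            (len(hi), learn_spn(hi, variables, threshold)))
```

The sum node's child weights (here raw instance counts) would be normalized into mixture weights in a real SPN.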

Third, we introduce deep symmetry networks that can learn representations over arbitrary Lie groups.  We present techniques to scale these networks to high-dimensional symmetries.  We show that deep symmetry networks can classify 2D and 3D transformed objects with higher accuracy and less training data than convolutional neural networks.

Finally, we propose compositional kernel machines as an instance-based learner that has the symmetry and compositionality of convolutional neural networks but is significantly easier to train.  We combat the curse of dimensionality by effectively summing over an exponential set of constructed virtual training instances using a sum-product function.  As a result, CKMs outperform standard instance-based learners on image classification and generalize to unseen compositions and symmetries.

When: 8 Sep 2016 - 3:00pm Where: CSE 128

Paul Vines, General Examination

Title: Assessing Capabilities of Advertising as an Intelligence Gathering Tool and Users’ Defenses

Advisors: Yoshi Kohno and Franzi Roesner

Supervisory Committee: Yoshi Kohno (co-Chair), Franzi Roesner (co-Chair), Emma Spiro (GSR, iSchool), and Zach Tatlock

Abstract: User tracking is pervasive on the commercial internet of today. There is a massive tracking and targeting infrastructure built into the commercial websites normal users visit every day; the purpose of this infrastructure is to facilitate targeted advertisements being served to these users based on ever-more precise criteria. However, this intended purpose is not the only end this infrastructure can be used for. Recent malvertisement campaigns - criminal groups using advertising services to host malware - are one example of this. In this dissertation I propose to study what appears to be an unexplored use of the tracking and targeting infrastructure: using targeted advertising to gather intelligence (ADINT). Like malvertising, this work seeks to leverage the existing infrastructure of advertising for alternative purposes, while remaining low-cost and convenient for individuals or small groups to utilize. In addition, I investigate the possible defenses users can employ to prevent tracking, and thus prevent these alternative uses of the infrastructure as well.

When: 8 Sep 2016 - 2:00pm Where: CSE 503

Eric Mullen, General Examination

Title: Pushing the Limits of Compiler Verification

Advisors: Dan Grossman and Zach Tatlock

Supervisory Committee: Dan Grossman (co-Chair), Zach Tatlock (co-Chair). Geoffrey Paul Boers (GSR, Music), Xi Wang, and Paul Beame

Abstract: The CompCert formally verified compiler has been repeatedly shown to provide a strong formal guarantee. However, there are still limits to compiler verification: a semantic gap between the lowest level of the compiler and the hardware, and unverified extraction that must be trusted near the top. Here I describe how I am pushing the limits of compiler verification by working both to close the low-level semantic gap and to remove our reliance on unverified extraction.

When: 1 Sep 2016 - 9:00am Where: CSE 403

Brian Dolhansky, Qualifying Project Presentation

Title: Deep Submodular Functions

Advisors: Jeff Bilmes and Pedro Domingos

Abstract: We propose and study a new class of submodular functions called deep submodular functions (DSFs). We define DSFs and situate them within the broader context of classes of submodular functions in relationship both to various matroid ranks and sums of concave composed with modular functions (SCMs). Notably, we find that DSFs constitute a strictly broader class than SCMs, thus motivating their use, but that they do not comprise all submodular functions. Interestingly, some DSFs can be seen as special cases of certain deep neural networks (DNNs), hence the name. Finally, we show how to learn DSFs in a max-margin framework, and successfully apply this to both synthetic and real-world data instances.
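As a rough illustration of the SCM baseline class that DSFs strictly generalize, here is a sketch (the names `scm` and `is_submodular` are mine, and sqrt stands in for an arbitrary concave function) that evaluates a sum of concave-composed-with-modular functions and exhaustively verifies diminishing returns on a small ground set.

```python
import itertools
import math

def scm(weights, A):
    """Sum of concave (here sqrt) composed with nonnegative modular
    functions: f(A) = sum_i sqrt( sum_{a in A} w_i[a] )."""
    return sum(math.sqrt(sum(w[a] for a in A)) for w in weights)

def is_submodular(f, ground):
    """Exhaustively check diminishing returns:
    f(A+v) - f(A) >= f(B+v) - f(B) whenever A is a subset of B."""
    subsets = [frozenset(s) for r in range(len(ground) + 1)
               for s in itertools.combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for v in ground:
                if v in B:
                    continue
                if f(A | {v}) - f(A) < f(B | {v}) - f(B) - 1e-9:
                    return False
    return True
```

A DSF stacks further concave/modular layers on top of such terms; the point of the paper's separation result is that the extra depth yields functions no single-layer SCM can express.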

When: 26 Aug 2016 - 2:00pm Where: CSE 203

Brandon Holt, Final Examination

Title: Type-Aware Programming Models for Distributed Applications

Advisors: Luis Ceze and Mark Oskin

Supervisory Committee: Luis Ceze (co-Chair), Mark Oskin (co-Chair), Eric Klavins (GSR, EE), and Dan Ports

Abstract: Modern applications are distributed: from the simplest interactive web applications to social networks with massive datacenters around the world. In order to scale services and handle turbulent internet traffic, developers of distributed applications constantly balance fundamental tradeoffs between parallelism and locality, replication and synchronization, consistency and availability. Many layers of abstraction allow the complex ecosystem of servers, databases, and caches to operate independently but prevent them from optimizing for the needs of individual applications. Developers need systems that can help make those tradeoffs, but systems need to know more about the application first.

With new programming models, we can use type systems and abstract data types to communicate application semantics to distributed systems. The techniques we explored include: a compiler analysis called Alembic that uses knowledge of remote data accesses to automatically move computation closer to data, distributed data structures that allow threads to cooperate rather than contend, a datastore called Claret that uses knowledge of ADT semantics to expose concurrency between transactions, and a type system and runtime, IPA, that helps trade off accuracy for performance while ensuring consistency safety. By exposing concurrency and efficiency optimizations, these techniques contribute to the toolkit available to distributed application developers, making their lives easier and their software more robust.

When: 12 Aug 2016 - 4:00pm Where: CSE 403

Jonathan Bragg, General Examination

Title: Self-improving Crowdsourcing

Advisors: Dan Weld and Mausam

Supervisory Committee: Dan Weld (co-Chair), Mausam (co-Chair), Andy Ko (GSR, iSchool), Dieter Fox, and Ece Kamar (MSR)

Abstract: Crowdsourcing enables data collection at scale, critical to accelerating scientific discovery and supporting a host of new machine learning applications. However, ensuring high quality data remains a key challenge, and one of central importance to downstream applications. Creating a successful crowdsourcing task requires significant time investment and iteration to ensure that workers understand the task and produce quality data. This cost underlies nearly every reported crowdsourcing success, yet it is seldom acknowledged and therefore often underestimated. The cost of this initial investment makes crowdsourcing impractical for all but the largest tasks; in many cases, it may actually be less costly for the task designer to simply perform the task herself.

In this talk, I will present my vision for a crowdsourcing system that reduces the burden of creating new tasks by using information gathered from the crowd, either explicitly or implicitly, to perform automatic task improvement. This system will seek to minimize interaction with the task designer ("requester"), whose sole interaction with the system is to provide gold answers and explanations. It will use machine learning and decision theory, along with information from the crowd, to automatically create effective instruction for workers, thereby reducing the up-front cost of creating successful crowdsourcing tasks. This work builds on our previous work on optimizing the amount of instruction by also considering the quality of instruction. Unlike unsuccessful attempts by other researchers to reduce requester involvement, this work will succeed by creating workflows that solve useful subgoals (e.g., finding a diverse and instructive set of questions) and focusing initially on improving worker performance on consensus tasks, a large class of crowdsourcing problems.

When: 11 Aug 2016 - 10:00am Where: CSE 303

Vincent Liu, Final Examination

Title: Improving the Reliability and Efficiency of Data Center Networks

Advisors: Tom Anderson and Arvind Krishnamurthy

Supervisory Committee: Tom Anderson (Chair), Raadha Poovendran (GSR, EE), Arvind Krishnamurthy, and Shyam Gollakota

Abstract: Cloud computing owes much of its recent rise to the infrastructure on which it is built. Data center networks in particular have helped to enable the efficient and robust utilization of tens to hundreds of thousands of co-located servers. As applications and data sets continue to grow rapidly, the challenge for data center networks is to keep pace—not just by providing enough bandwidth, but also by lowering costs, increasing flexibility, and maintaining reliability.

This dissertation proposes that a key component in providing all of these properties simultaneously lies in the structure of the network. My thesis is that small, practical changes there can have large, far-reaching consequences: providing more than just bandwidth and efficiency, but also influencing higher-level protocol design and granting properties like near-instantaneous failure recovery and incremental upgrades of server capacity.

I present two complementary and composable systems that explore the efficacy of this approach.

The first system, F10, is a co-design of the network topology and failover protocols that is the first to provide efficient, near-instantaneous, fine-grained, and localized recovery and rebalancing for common-case network failures. Our results show that following network link and switch failures, F10 has less than 1/7th the packet loss of current schemes.

The second system, Subways, explores an alternative method to add network capacity to servers. In particular, we investigate the various ways an operator can augment the network using multiple network links per server. Using a simulation-based methodology, we show that Subways offers substantial performance benefits for popular application workloads: up to a 3.1× speedup in MapReduce and a 2.5× throughput improvement in memcache for a fixed average request latency, relative to an equivalent-bandwidth network that differs only in its wiring.

When: 10 Aug 2016 - 2:30pm Where: CSE 503

Nao Hiranuma, Qualifying Project Presentation

Title: CloudControl: Leveraging many public ChIP-seq control experiments to better remove background noise

Advisors: Su-In Lee and Larry Ruzzo

Abstract: Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) is a widely used method to determine the binding positions of various proteins on the genome in a population of cells. A typical ChIP-seq protocol involves two experiments: one designed to capture target ChIP-seq signals (the 'target' experiment) and the other to capture background noise signals (the 'control' experiment). A peak calling algorithm then examines the difference between the target experiment data and control data to determine where the protein of interest binds along the genome. Our approach, named CloudControl, aims to improve the accuracy of peak calling by combining multiple control experiments from a publicly available source such as ENCODE to better remove background noise signals. To combine existing control experiment data, we perform regression against a target experiment, treating binned genome positions as samples (up to 32 million) and different control experiments as features (up to 450 through the ENCODE project). The regression fit is then used to generate a new control ChIP-seq track, which we refer to as CloudControl data. We use the following three metrics to evaluate the CloudControl data: (i) the presence of the known motifs for the corresponding target protein near called peaks, (ii) reproducibility among pairs of biologically replicated ChIP-seq experiments, and (iii) protein-protein physical interactions inferred from called peaks. In all three metrics, CloudControl data show superior performance over standard control tracks. This suggests that CloudControl can improve upon ordinary control tracks in the standard ChIP-seq protocol.
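The regression step can be sketched at toy scale. The hypothetical helper `fit_cloud_control` below uses plain ordinary least squares; the actual system operates on millions of bins and hundreds of tracks and may use a different regression method. Genome bins act as samples, control tracks as features, and the fitted combination serves as the synthetic control track.

```python
def fit_cloud_control(controls, target):
    """Least-squares fit of a target track on control tracks.
    controls: p tracks, each a list of n binned read counts.
    target: the target track (length n).
    Returns (weights, synthetic control track)."""
    p, n = len(controls), len(target)
    # Normal equations (X^T X) w = X^T y, with X's columns = control tracks.
    XtX = [[sum(controls[i][k] * controls[j][k] for k in range(n))
            for j in range(p)] for i in range(p)]
    Xty = [sum(controls[i][k] * target[k] for k in range(n))
           for i in range(p)]
    # Solve the p x p system by Gaussian elimination with partial pivoting.
    M = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(p):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    w = [M[i][p] / M[i][i] for i in range(p)]
    synthetic = [sum(w[i] * controls[i][k] for i in range(p))
                 for k in range(n)]
    return w, synthetic
```

The synthetic track can then be handed to any standard peak caller in place of a single measured control.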

When: 9 Aug 2016 - 4:00pm Where: CSE 303

Chloé Kiddon, Final Examination

Title: Learning to Interpret and Generate Instructional Recipes

Advisors: Yejin Choi and Luke Zettlemoyer

Supervisory Committee: Yejin Choi (co-Chair), Luke Zettlemoyer (co-Chair), Gina-Anne Levow (GSR, Linguistics), and Hannaneh Hajishirzi (EE)

Abstract: Enabling computers to interpret and generate instructional language has become increasingly important to our everyday lives: we ask our smartphones to set reminders and send messages; we rely on navigation systems to direct us to our destinations. We define instructional recipes as a special case of instructional language, where completion of the instructions results in a goal object. Some examples include cooking recipes, craft projects, and assembly instructions. Developing systems that automatically analyze and generate instructional recipes requires finding solutions to many semantic challenges, such as identifying implicit arguments (e.g., “Bake for 15 min.”) and learning physical attributes of entities (e.g., which ingredients are considered “dry”). Amassing this information has previously relied upon high-cost annotation efforts. We present a pair of models that can interpret and generate instructional recipes, respectively, and are trained on large corpora with minimal supervision: only identification of the goal (e.g., dish to make), list of materials (e.g., ingredients to use), and recipe text. Our interpretation model is a probabilistic model that (1) identifies the sequence of actions described by the text of an instructional recipe and (2) determines in which of those actions the provided materials (e.g., ingredients) and the entities generated by previous actions (e.g., the mixture created by “Combine flour and sugar”) are used. Our generation model generates instructional recipes that create a specified goal (e.g., dish to make) using a set of materials (e.g., ingredients to use). This model uses a novel neural architecture, the neural checklist model, that enables the generation of globally coherent text. Experiments show that our models can successfully be trained to interpret and generate instructional recipes from this unannotated text, while at the same time learning interpretable domain knowledge.

When: 4 Aug 2016 - 10:00am Where: CSE 303

Konstantin Weitz, General Examination

Title: Formally Verified Solver Aided Tools for the Border Gateway Protocol

Advisors: Michael Ernst and Zach Tatlock

Supervisory Committee: Michael Ernst (co-Chair), Zach Tatlock (co-Chair), Silas Beane (GSR, Physics), and Arvind Krishnamurthy

Abstract: To reliably and securely route traffic across the Internet, Internet Service Providers (ISPs) must configure their Border Gateway Protocol (BGP) routers to implement policies restricting how routing information can be exchanged with other ISPs. Correctly implementing these policies in low-level router configuration languages, with configuration code distributed across all of an ISP’s routers, has proven challenging in practice, and misconfiguration has led to extended worldwide outages and traffic hijacks.

We present Bagpipe, the first system that enables ISPs to declaratively express control-plane policies and that automatically verifies that router configurations implement such policies using an SMT solver. To ensure Bagpipe correctly checks configurations, we verified its implementation in Coq, which required developing both the first formal semantics of the BGP specification RFC 4271, and also a new framework for solver-aided tool verification (SearchSpace).

Our semantics models all required features of the BGP specification modulo low-level details such as the bit representation of update messages and TCP. Three case studies provide evidence for the correctness and general usefulness of our BGP semantics. 1) We verified the soundness of Bagpipe. 2) We formalized and extended the seminal pen-and-paper proof by Gao & Rexford on the convergence of BGP, revealing necessary extensions to Gao & Rexford’s original assumptions. 3) We tested the popular BGP simulator C-BGP against our semantics, revealing a bug in C-BGP.

SearchSpace is a technique to verify, by means of a proof assistant, tools that use SMT solvers. Three case studies will provide evidence for the general usefulness of SearchSpace. 1) We constructed and verified Bagpipe. 2) We will verify an SMT x86 semantics against a Coq x86 semantics (remains to be done). 3) We will verify an SMT-based tool that checks the correctness of SQL rewrite rules (remains to be done).

We evaluated the expressiveness of Bagpipe’s specification language on 10 configuration scenarios from the Juniper TechLibrary, and evaluated the efficiency of Bagpipe on three ASes with a total of over 240,000 lines of Cisco and Juniper BGP configuration. Bagpipe revealed 19 policy violations without issuing any false positives.

We compare our work to the related work on: Solver Aided Tools, the BGP specification RFC 4271, Gao & Rexford’s proof of BGP convergence, the Stable Paths Problem (SPP), tools to help avoid BGP misconfiguration, and Software Defined Networking (SDN).

When: 3 Aug 2016 - 2:30pm Where: CSE 203

Mayank Goel, Final Examination

Title: Extending the Capabilities of Smartphone Sensors for Applications in Interaction and Health

Advisors: Shwetak Patel and Gaetano Borriello

Supervisory Committee: Shwetak Patel (Chair), Sean Munson (GSR, HCDE), James Fogarty, and Maya Cakmak

Abstract: Computing devices continually evolve and, in a relatively short time, have morphed from room-sized machines to something much smaller. Today, a mobile computer can be kept in a pocket or worn as a wristwatch or glasses. These devices are sufficiently powerful for the majority of tasks a typical user performs on a computer. While they are slowly catching up with desktops in raw computational power, they already have much richer sensing capabilities. The on-device sensors provide mobile devices with an unprecedented opportunity not only to provide richer interactions, but also to enrich quality of life.

Acknowledging that mobile devices cannot be loaded with every possible sensor, I believe a large share of the sensing responsibility will lie with the generic sensors already on our devices, such as the camera, microphone, motion sensors, and touchscreen. In this dissertation, I provide support for my thesis statement:

The capabilities of generic sensors on mobile devices can be extended to enable applications that traditionally require dedicated sensors. These multiple generic sensors can come together to enable robust and deployable user-facing sensing systems.

In this thesis, I discuss:

1. How to extend the capabilities of the on-device sensors,

2. Some of the challenges the developer and the user might face while using these generic sensors,

3. How some of those challenges can be countered by combining multiple generic sensors, and

4. Some directions for how these sensors should evolve in the future.

In terms of application area, there are many domains where the smartphone sensors can prove useful, but this thesis focuses on applications in two domains: interaction and health sensing.

When: 3 Aug 2016 - 10:00am Where: CSE 305

Cyrus Rashtchian, General Examination

Title: New Algorithmic Tools for Distributed Similarity Search

Advisor: Paul Beame

Supervisory Committee: Paul Beame (Chair), Jevin West (GSR, iSchool), Luis Ceze, Mark Oskin, and Magda Balazinska

Abstract: The task of finding similar vectors in a large dataset arises as a subroutine in many applications, such as image search and collaborative filtering. I will describe some recently completed theoretical work that provides new upper and lower bounds on the communication required to report all close pairs in Hamming distance. I will also propose two future projects centering on the implementation of distributed algorithms for similarity search based on Hamming distance and edit distance. I will mention connections to edge-isoperimetry, distance correlations, DNA clustering, and metric embeddings.
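For reference, the single-machine brute-force version of the close-pairs problem (the baseline that distributed algorithms aim to scale past) can be sketched as follows, with binary vectors represented as Python ints:

```python
def close_pairs(vectors, r):
    """Report index pairs (i, j) of binary vectors within Hamming
    distance r, by exhaustive comparison."""
    pairs = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            # XOR has a 1 bit exactly where the two vectors differ
            if bin(vectors[i] ^ vectors[j]).count('1') <= r:
                pairs.append((i, j))
    return pairs
```

The quadratic comparison count is exactly what the communication bounds and distributed implementations in the talk are about avoiding.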

When: 27 Jul 2016 - 11:00am Where: CSE 503

Adam Lerner, General Examination

Title: Reproducible and Longitudinal Measurements of Web Security and Tracking

Advisors: Yoshi Kohno and Franzi Roesner

Supervisory Committee: Yoshi Kohno (co-Chair), Franzi Roesner (co-Chair), David Mcdonald (GSR, HCDE), and Arvind Krishnamurthy

Abstract: Threats to the security and privacy of the web are threats to people, infrastructure, and commerce. Measuring the web lets us understand what threats exist, how dangerous they are, who they threaten, and what can be done to mitigate them. Unfortunately, the web changes constantly, and with it, so do those threats. Our science needs to reflect this change to keep us one step ahead of it. This means our measurements of web security and tracking are constantly shifting in their techniques and findings.

Rapidly changing measurements of a rapidly changing web have made it difficult to pursue the scientific goal of reproducible measurements. Changes leave us wondering why our reproductions failed: we may not know whether it was because of mistakes, because the measured property is naturally variable, or because the web has changed. Longitudinal studies are also rare, perhaps because we cannot predict the future well enough to know what will be relevant in ten or twenty years, and therefore what to start measuring now.

In this dissertation, I propose to develop and evaluate a set of techniques and tools to more easily reproduce web security and tracking measurements. These techniques will use web archives as a data source. Archives such as the Wayback Machine, a publicly available archive which preserves a large fraction of the web back to its earliest days, can provide us the ability to browse, crawl, and measure the web as it appeared in the past.

When: 26 Jul 2016 - 2:00pm Where: CSE 303