Research Day Agenda
Thursday, November 1, 2018
10:00 - 10:30am: Registration and coffee/breakfast (Gates Commons, 691)
10:30 - 11:10am: Welcome and overview by Ed Lazowska and Hank Levy, with faculty presentations on research areas (Gates Commons, 691)
11:15am - 12:20pm: Research sessions, including Systems, Architectures and Programming for Machine Learning
12:25 - 1:30pm: Lunch and keynote talk: Computer Security and Privacy for Existing and Emerging Technologies, by Franziska Roesner, Co-director, Security and Privacy Research Lab, Paul G. Allen School of Computer Science & Engineering
1:30 - 2:35pm: Research sessions, including CS meets Biotech and the UW Reality Lab
2:40 - 3:45pm: Research sessions, including Programming Languages and Software Engineering
3:50 - 4:55pm: Research sessions, including Deep Learning for Natural Language Processing
5:00 - 7:00pm: Open House: Reception, Poster Session, and Lab Tours
7:15 - 7:45pm: Program: Madrona Prize and People's Choice Awards
- 11:15-11:20: Introduction and Overview, Jacob Schreiber
- 11:20-11:35: Local Feature Attributions as Building Blocks for Explainable Tree-Based Machine Learning, Scott Lundberg
Explaining why a specific prediction was made is a key challenge for modern complex machine learning systems, and has been the subject of much recent research. Yet surprisingly, despite the popularity of tree-based machine learning models, comparatively little attention has been paid to interpreting individual predictions from these models. Here we show that adapting recent model-agnostic local feature attribution methods to trees leads to significant computational and accuracy improvements. These improvements allow high-confidence local feature attributions to be computed across entire datasets, enabling several new higher-level interpretation methods based on aggregations of many local explanations. These improvements also enable the tractable computation of a new extension of local feature attributions that measures local feature interaction effects. We apply these new methods to interpreting non-linear mortality risk effects in the general US population, to understanding the risk of progression in chronic kidney disease, and to understanding how the accuracy of a machine learning model deployed in a hospital can degrade over time. We find that in many situations tree-based models can be more accurate than deep learning models while simultaneously being more interpretable than linear models.
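The core quantity behind this work, a local feature attribution, can be illustrated with exact Shapley values computed by brute-force subset enumeration; tree-specific methods compute the same quantity efficiently. This is a minimal sketch, and the two-split "tree" model, background set, and input are hypothetical toy values, not anything from the talk:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, background, n_features):
    """Exact Shapley values by subset enumeration (exponential in the
    number of features; tree-specific methods avoid this blow-up)."""
    def value(subset):
        # Features outside `subset` are averaged over a background set.
        total = 0.0
        for b in background:
            z = [x[i] if i in subset else b[i] for i in range(n_features)]
            total += model(z)
        return total / len(background)

    phis = []
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        phi = 0.0
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                weight = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                          / factorial(n_features))
                phi += weight * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis

# Hypothetical stand-in for a tree model: two axis-aligned splits.
def tree_model(z):
    return 10.0 if z[0] > 0.5 and z[1] > 0.5 else 0.0

background = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
x = [1.0, 1.0, 0.0]
phi = shapley_values(tree_model, x, background, 3)
base = sum(tree_model(b) for b in background) / len(background)
print(phi)              # feature 2 is irrelevant, so phi[2] == 0.0
print(base + sum(phi))  # attributions plus base sum to the prediction: 10.0
```

The final check is the efficiency axiom: the attributions, added to the average (base) prediction, exactly reconstruct the model output being explained.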
- 11:35-11:50: Can deep learning help us program biology?, Erin Wilson
Biologically-derived materials are prevalent in our everyday lives. However, oftentimes the molecules that make up these materials are difficult, expensive, or environmentally harmful to extract from their original biological sources. An alternative way to source many natural products is to engineer microorganisms, such as baker's yeast, into biological molecule factories. After integrating foreign plant genes into the yeast genome, the designed microbes can convert renewable sugar feedstocks into a wide range of valuable molecules like medicines, flavors, fragrances, silks, and biofuels. However, optimizing microbes to efficiently produce these molecules is challenging: we do not yet fully understand the signals that govern gene expression. To more effectively engineer microorganisms, we need to accurately model how non-coding DNA sequences influence gene expression strength; to that end, we test millions of synthetic, randomized DNA sequences as candidate signaling sequences. Can neural network models help us decode these gene expression signals and better engineer microbes for natural molecule production?
- 11:50-12:05: Multi-scale Deep Tensor Factorization Learns a Latent Representation of the Human Epigenome, Jacob Schreiber
The human epigenome has been experimentally characterized by measurements of protein binding, chromatin accessibility, methylation, and histone modification in hundreds of cell types. Despite collection of these data being a focus of several large consortia, most assays have not been performed in most cell types, resulting in a huge compendium of data that is mostly sparse. To address this, we propose a deep neural network tensor factorization method, Avocado, that learns to impute epigenomic experiments that have not yet been performed. We first demonstrate that, when applied to the Roadmap compendium, which contains 127 cell types and 24 assays of histone modification and chromatin accessibility, the imputations are of high quality. We then extend this method across four individuals, demonstrating its utility in the setting of personalized medicine. Lastly, we extend this approach to a dataset extracted from the ENCODE compendium, which contains 400 cell types and 83 assays including transcription factor binding and gene expression, and find that the model imputes a diverse range of signals well.
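The imputation setup can be sketched with a plain CP tensor factorization fit only on observed entries; Avocado itself combines the learned factors through a deep neural network, and the sizes, rank, and observation fraction below are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
C, A, P, K = 5, 4, 30, 3            # cell types x assays x positions, rank K
U = rng.normal(size=(C, K))
V = rng.normal(size=(A, K))
W = rng.normal(size=(P, K))
truth = np.einsum('ck,ak,pk->cap', U, V, W)
mask = rng.random(truth.shape) < 0.6   # the experiments actually "performed"

# Alternating least squares: refit one embedding matrix at a time using
# only the observed entries, then impute the held-out entries.
Uh, Vh, Wh = (rng.normal(size=(n, K)) for n in (C, A, P))
for _ in range(50):
    for c in range(C):
        idx = np.argwhere(mask[c])
        X = Vh[idx[:, 0]] * Wh[idx[:, 1]]
        Uh[c] = np.linalg.lstsq(X, truth[c][mask[c]], rcond=None)[0]
    for a in range(A):
        idx = np.argwhere(mask[:, a, :])
        X = Uh[idx[:, 0]] * Wh[idx[:, 1]]
        Vh[a] = np.linalg.lstsq(X, truth[:, a, :][mask[:, a, :]], rcond=None)[0]
    for p in range(P):
        idx = np.argwhere(mask[:, :, p])
        X = Uh[idx[:, 0]] * Vh[idx[:, 1]]
        Wh[p] = np.linalg.lstsq(X, truth[:, :, p][mask[:, :, p]], rcond=None)[0]

pred = np.einsum('ck,ak,pk->cap', Uh, Vh, Wh)
held_out_mse = np.mean((pred - truth)[~mask] ** 2)
print(held_out_mse)   # small relative to the signal variance
```

Fitting only on observed entries and evaluating on the masked-out ones mirrors how imputation quality is assessed on experiments that were never performed.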
- 12:05-12:20: DeepProfile: Deep learning approach to extract embeddings from cancer expression data using an ensemble of variational autoencoders, Ayse Berceste Dincer
Expression profile, measurement of the expression of genes in a human cell, have been considered the most informative and popular data type for biological discovery and precision medicine. However, its inherent high-dimensional nature (i.e., number of genes >> number of samples) poses challenges caused by false positive findings and difficulty with biological interpretation. A natural question is whether we can leverage existing expression profiles to learn the robust expression pattern from them and extract biologically meaningful patterns in an unsupervised manner. Deep learning based unsupervised feature learning has showed great success in other fields yet molecular data's high dimensionality hampers the direct use of deep learning. DeepProfile attempts to solve this problem by using an ensemble of variational autoencoders. DeepProfile learns biologically informative cancer-specific latent embedding spaces using publicly available gene expression for 19 different cancer types. We show that DeepProfile embeddings improve the prediction performance for complex traits like cancer drug response compared to other dimensionality reduction methods. DeepProfile also provides interpretability of latent variables in terms of genes, functional gene groups, and survival characteristics.
- 11:15-11:19: Introduction and Overview, Luis Ceze
- 11:19-11:36: TVM: An Automated End-to-End Optimizing Compiler for Deep Learning, Tianqi Chen
There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms -- such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort. In this talk, we introduce TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations.
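Operator fusion, one of the optimizations mentioned above, can be shown in miniature. This is a hand-written sketch of the idea, not TVM's actual API: the compiler rewrites a chain of elementwise operators into a single loop, eliminating intermediate buffers and the memory traffic they cost.

```python
# Unfused pipeline: three "operators", three passes, two temporary buffers.
def scale_bias_relu_unfused(xs, bias):
    t1 = [v * 2.0 for v in xs]        # op 1: scale
    t2 = [v + bias for v in t1]       # op 2: bias add
    return [max(v, 0.0) for v in t2]  # op 3: ReLU

# Fused form a compiler would generate: one pass, no temporaries.
def scale_bias_relu_fused(xs, bias):
    return [max(v * 2.0 + bias, 0.0) for v in xs]

xs = [-1.5, 0.0, 2.0]
assert scale_bias_relu_unfused(xs, 1.0) == scale_bias_relu_fused(xs, 1.0)
print(scale_bias_relu_fused(xs, 1.0))   # [0.0, 1.0, 5.0]
```

The rewrite is semantics-preserving, which is why a compiler can apply it automatically at the graph level before lowering to hardware-specific code.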
- 11:36-11:48: Optimizing Deep Learning Workloads for a Fleet of Hardware Devices, Eddie Yan
Efficiently optimizing deep learning workloads for a large range of hardware devices poses scalability, robustness, and prioritization challenges. This talk will describe how we design a system to scale optimization performance with the number of hardware devices, how we tolerate hardware/software failures during the optimization process, and how we allocate optimization/compute time across hardware devices.
- 11:48-12:04: VTA: Bringing Customizable Hardware Acceleration to the TVM Stack, Thierry Moreau
We present VTA, a generic, open-source, customizable deep learning accelerator that, together with the TVM compiler, provides a complete blueprint for hardware-accelerated deep learning systems. VTA aims to facilitate cross-stack optimizations that involve changes to compilers, architectures, and even models to make deep learning systems more capable and efficient.
- 12:04-12:20: Relay: A differentiable intermediate representation for deep learning compilers, Jared Roesch
We present Relay, a high-level, differentiable intermediate representation for TVM. As part of TVM's goal to provide an end-to-end deep learning compiler stack, we contribute a high-level, whole-model representation of machine learning programs. We aim to bring whole-program optimization to machine learning, allowing generic ML models to be aggressively specialized for a wide range of hardware platforms, including FPGAs via a connection with VTA.
- 11:15-11:20: Introduction and Overview, Yoshi Kohno
- 11:20-11:40: Towards Security and Privacy for Multi-User Augmented Reality: Foundations with End Users, Kiron Lebeck
Immersive augmented reality (AR) technologies are becoming a reality. Prior work has identified security and privacy risks raised by these technologies, primarily considering individual users or AR devices. However, we make two key observations: (1) users will not always use AR in isolation, but also in ecosystems of other users, and (2) since immersive AR devices have only recently become available, the risks of AR have been largely hypothetical to date. To provide a foundation for understanding and addressing the security and privacy challenges of emerging AR technologies, grounded in the experiences of real users, we conduct a qualitative lab study with an immersive AR headset, the Microsoft HoloLens. We conduct our study in pairs -- 22 participants across 11 pairs -- wherein participants engage in paired and individual (but physically co-located) HoloLens activities.
Through semi-structured interviews, we explore participants' security, privacy, and other concerns, surfacing several key findings. For example, we find that despite the HoloLens's limitations, participants were easily immersed, treating virtual objects as real (e.g., stepping around them for fear of tripping). We also uncover numerous security, privacy, and safety concerns unique to AR (e.g., deceptive virtual objects misleading users about the real world), and a need for access control among users to manage shared physical spaces and virtual content embedded in those spaces. Our findings give us the opportunity to identify broader lessons and key challenges to inform the design of emerging single- and multi-user AR technologies.
- 11:40-12:00: Computer Security and Privacy for Refugees in the United States, Lucy Simko
In this work, we consider the computer security and privacy practices and needs of recently resettled refugees in the United States. We ask: How do refugees use and rely on technology as they settle in the US? What computer security and privacy practices do they have, and what barriers do they face that may put them at risk? And how are their computer security mental models and practices shaped by the advice they receive? We study these questions through in-depth qualitative interviews with case managers and teachers who work with refugees at a local NGO, as well as through focus groups with refugees themselves. We find that refugees must rely heavily on technology (e.g., email) as they attempt to establish their lives and find jobs; that they also rely heavily on their case managers and teachers for help with those technologies; and that these pressures can push security practices into the background or make common security "best practices" infeasible. At the same time, we identify fundamental challenges to computer security and privacy for refugees, including barriers due to limited technical expertise, language skills, and cultural knowledge — for example, we find that scams as a threat are a new concept for many of the refugees we studied, and that many common security practices (e.g., password creation techniques and security questions) rely on US cultural knowledge. From these and other findings, we distill recommendations for the computer security community to better serve the computer security and privacy needs and constraints of refugees, a potentially vulnerable population that has not been previously studied in this context.
- 12:00-12:20: Who's In Control?: Interactions In Multi-User Smart Homes, Christine Geeng
Adoption of commercial smart home devices is rapidly increasing, allowing for in-situ research in people's homes. As these technologies are deployed in shared spaces, we seek to understand the interactions among multiple people and devices in a smart home. We conducted a mixed-methods study with 18 participants living in multi-user smart homes, combining semi-structured interviews and experience sampling. Our findings surface tensions and cooperation among users in several phases of smart device use in the home: device selection and installation, ordinary device use, when the smart home does not work as expected, and over longer term use. We observe an outsized role of the person who installs devices in terms of selecting, accessing, controlling, and fixing them; minimally voiced privacy concerns among co-occupants; and negotiations between parents and children. We make design recommendations for supporting long-term smart home use and non-expert household members.
- 1:30-1:35: Introduction and Overview, Jon Froehlich
- 1:35-1:50: Wireless Analytics for 3D Printed Objects, Vikram Iyer
We present the first wireless physical analytics system for 3D printed objects using commonly available conductive plastic filaments. Our design can enable various data capture and wireless physical analytics capabilities for 3D printed objects, without the need for electronics. To achieve this goal, we make three key contributions: (1) demonstrate room-scale backscatter communication and sensing using conductive plastic filaments, (2) introduce the first backscatter designs that detect a variety of bi-directional motions and support linear and rotational movements, and (3) enable data capture and storage for later retrieval when outside the range of the wireless coverage, using a ratchet and gear system. We validate our approach by wirelessly detecting the opening and closing of a pill bottle, capturing the joint angles of a 3D printed e-NABLE prosthetic hand, and building an insulin pen that stores usage information for retrieval when it is outside the range of a wireless receiver.
- 1:50-2:05: Fabricating High-Level Design Specifications with Low-Level Object Properties, Liang He
Personal fabrication is an emerging area, driven by advances in computer-aided design (CAD) and digital manufacturing techniques (e.g., 3D printing). It has led to a revolutionary shift away from traditional manufacturing processes toward modern fabrication applications. However, 3D-printed objects in particular are rigid and static, which limits the range of applications they can serve and the design requirements they can meet. In this talk, we introduce a new research perspective that takes object properties into account and builds supporting design tools and hardware to meet a particular set of user design specifications. In addition, we briefly highlight two research projects, Ondulé and FuzzPrint, to demonstrate our initial efforts along this research direction.
- 2:05-2:20: Interactiles: 3D Printed Tactile Interfaces on Phone to Enhance Mobile Accessibility, Xiaoyi Zhang
The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. Prior investigation of tactile solutions for large touchscreens also may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.
- 2:20-2:35: Computational Design for the Next Manufacturing Revolution, Adriana Schulz
Over the next few decades, we are going to transition to a new economy where highly complex, customizable products are manufactured on demand by flexible robotic systems. In many fields, this shift has already begun. 3D printers are revolutionizing production of metal parts in the aerospace, automotive, and medical industries. Whole-garment knitting machines allow automated production of complex apparel and shoes. Manufacturing electronics on flexible substrates makes it possible to build a whole new range of products for consumer electronics and medical diagnostics. Collaborative robots, such as Baxter from Rethink Robotics, allow flexible and automated assembly of complex objects. Overall, these new machines enable batch-one manufacturing of products that have unprecedented complexity.

In my talk, I argue that the field of computational design is essential for the next revolution in manufacturing. To build increasingly functional, complex and integrated products, we need to create design tools that allow their users to efficiently explore high-dimensional design spaces by optimizing over a set of performance objectives that can be measured only by expensive computations. I will discuss how to overcome these challenges by 1) developing data-driven methods for efficient exploration of these large spaces and 2) performance-driven algorithms for automated design optimization based on high-level functional specifications. I will showcase how these two concepts are applied by developing new systems for designing robots, drones, and furniture. I will conclude my talk by discussing open problems and challenges for this emerging research field.
- 1:30-1:35: Introduction and Overview, Luis Ceze and Georg Seelig
- 1:35-1:50: Programmable DNA-based Pattern Formation, Sifang Chen
In recent years, synthetic DNA has emerged as a material of choice for molecular construction and information processing. Here, we describe the synthesis of a novel DNA-hydrogel hybrid material capable of self-directed and programmable 2D patterning at the centimeter length scale. Specifically, we made tunable stripe patterns by integrating DNA-based molecular circuits with hydrogel. By tuning the kinetics of the embedded DNA circuits, the hydrogels were programmed to display patterns with various prescribed geometries. We also developed corresponding in silico simulations to inform us of the circuit parameters required to achieve target patterns. To our knowledge, this demonstrates for the first time a rationally designed chemical system capable of macroscopic pattern formation with precisely encoded visual features and dynamics. We believe this research has useful implications for synthesizing novel environmentally responsive materials and hope this work shows the potential of chemical computing for building programmable matter.
- 1:50-2:05: Scalable molecular decoding with nanopore sensors, Jeff Nivala
A significant challenge in designing bio-nano hybrid systems that simultaneously harness the advantages of both biological and semiconductor-based compute technologies is in engineering interfaces that allow for real-time communication across disparate components. In-vitro and in-vivo molecular circuits (e.g. DNA computing and synthetic gene circuits) are limited in scalability because the number of unique circuit output reporters is limited to a handful of commonly used fluorescent molecules. This constrains the amount of information that can be output from mixed molecular populations, hindering the potential for scaling, system multiplexing, closed-loop operation, and cross-system debugging. To address this, we are developing a toolbox of new molecular parts that can be used to store and transmit information in the form of novel nanopore-addressable molecular barcodes using engineered synthetic DNA and proteins. This approach to molecular decoding, when combined with machine learning, enables real-time signal readout at the single-molecule level, with a large state space, and generates electrical signals that can be processed by semiconductor-based devices in an automatable fashion. These features have the potential to advance molecular computation and the design-build-test-learn cycle of synthetic gene circuits.
- 2:05-2:20: EMBARKER: A hierarchical Bayesian approach empowering big data with prior knowledge for expression marker discovery and its application to Alzheimer's disease, Safiye Celik
Identifying meaningful phenotypic associations of gene expression is a fundamental research problem; however, false positives are common due to the high-dimensionality of expression data. Combining samples from different datasets can reduce the dimensionality, yet this is challenging because different studies are rarely "synchronized". Moreover, results from most existing tools lack biological interpretability. To resolve these challenges, we propose a general computational framework, EMBARKER, which introduces two computational innovations: (1) incorporating pathway information to alleviate high-dimensionality and improve interpretability, and (2) an intuitive way of combining datasets to increase statistical power while accounting for data heterogeneity. We compare EMBARKER to 15 state-of-the-art approaches using 43 genome-wide gene expression datasets and 55 phenotypes in a variety of problems ranging from cancer to Alzheimer's disease (AD). EMBARKER leads to a dramatic improvement in the statistical robustness of the identified expression and pathway markers. Most notably, we apply EMBARKER to 1,742 human brain tissue samples from 9 brain regions in 3 AD studies, which is, to our knowledge, the largest expression meta-analysis for AD. We perform a validation of the identified markers in vivo in a transgenic Caenorhabditis elegans model, which suggests mild inhibition of Complex I as a promising pharmacological avenue toward treating AD. An implementation of EMBARKER can be found on the website associated with our study: suinlee.cs.washington.edu/projects/embarker.
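The power gain from combining datasets can be seen in the simplest possible setting. This is textbook fixed-effect inverse-variance weighting, not EMBARKER's hierarchical Bayesian model, and the per-study effect sizes are made-up values:

```python
# Each study reports an (effect size, standard error) for the same marker.
studies = [(0.8, 0.2), (0.5, 0.1), (0.9, 0.3)]   # hypothetical values

weights = [1.0 / se ** 2 for _, se in studies]    # inverse-variance weights
combined = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se_combined = (1.0 / sum(weights)) ** 0.5

print(round(combined, 3), round(se_combined, 3))
# The pooled standard error is smaller than any single study's standard
# error, which is the statistical-power argument for pooling datasets.
assert se_combined < min(se for _, se in studies)
```

Handling heterogeneity across studies (the "rarely synchronized" problem above) is what requires the richer hierarchical treatment.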
- 2:20-2:35: Puddle: A High-Level Programming System for Microfluidics, Max Willsey
Domains from molecular systems to medical diagnostics rely on microfluidic devices for automation. This doesn't just make things faster; it's essential to minimizing human error and enabling new, more complex applications. Puddle borrows ideas from programming languages and computer systems to make microfluidic automation cheaper, more reliable, and easier to use.
This talk will introduce the domain of microfluidics and its many potential applications. We will discuss how the Puddle system addresses many challenges in the field, and we'll touch on how other computer science ideas could apply to the world of microfluidics.
- 1:30-1:35: Introduction and Overview, Brian Curless
- 1:35-1:50: Surface Light Field Fusion, Jeong Joon Park
We present an approach for interactively scanning highly reflective objects with a commodity RGBD sensor. In addition to shape, our approach models the surface light field, encoding scene appearance from all directions. By factoring the surface light field into view-independent and wavelength-independent components, we arrive at a representation that can be robustly estimated with IR-equipped commodity depth sensors, and achieves high quality results.
- 1:50-2:05: PhotoShape: Photorealistic Materials for Large-Scale Shape Collections, Keunhong Park
Existing online 3D shape repositories contain thousands of 3D models but lack photorealistic appearance. We present an approach to automatically assign high-quality, realistic appearance models to large-scale 3D shape collections. The key idea is to jointly leverage three types of online data – shape collections, material collections, and photo collections, using the photos as reference to guide assignment of materials to shapes. By generating a large number of synthetic renderings, we train a convolutional neural network to classify materials in real photos, and employ 3D-2D alignment techniques to transfer materials to different parts of each shape model. Our system produces photorealistic, relightable, 3D shapes (PhotoShapes).
- 2:05-2:20: Photo Wake-Up: 3D Character Animation from a Single Photo, Chung-Yi Weng
We present a technique to animate a human subject in a single photo. The idea is to 'wake-up' a still photo and create a short 3D animation of the person (e.g., the character walks out of the photo towards the viewer, runs, sits, or jumps). The user may also interact with the photo to re-pose the person or change viewpoint. We illustrate the applicability of the method on a large variety of photos ranging from posters and sports photos to graffiti and art. Finally, we demonstrate bringing a photo or painting to life in Augmented Reality, enabling a user to hang virtual artwork and have the central figure walk out of the frame into the real 3D world.
- 2:20-2:35: Watching Soccer in AR, Konstantinos Rematas
We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage.
- 2:40-2:45: Introduction and Overview, Zach Tatlock
- 2:45-3:05: Learning from program characteristics to guide testing effort, Rene Just
Is my test suite adequate? Where is it deficient? What should and shouldn't it test? To answer these questions, developers heavily rely on experience or an absolute threshold with respect to a coverage criterion (e.g., 80% statement coverage). Such threshold-based quality gates are program independent, mostly arbitrary, and widely criticized because they do not provide detailed guidance for where to focus testing efforts: threshold-based quality gates presuppose that all testing goals are equally important, but not all components of a software system are equally critical or error-prone.
In this talk, I will present a new mutation-based approach to guiding testing effort and identifying test deficiencies. First, I will show how program context, extracted from a program's abstract syntax tree (AST), can predict the utility of testing goals and how the predicted utility can serve as a stopping criterion. Second, I will show how the same context-based approach can capture developer expertise and identify test deficiencies based on anomalies rather than absolute thresholds. This approach guides developers to satisfy testing goals that are satisfied in a similar context elsewhere in the code base. Conversely, it guides developers to discount testing goals that are not satisfied in a similar context.
- 3:05-3:25: Scout: Mixed-Initiative Exploration of Design Variations through High-Level Design Constraints, Amanda Swearngin
Although the exploration of variations is a key part of interface design, current processes for creating variations are mostly manual. We present Scout, a system that helps designers rapidly explore many variations through mixed-initiative interaction with high-level constraints and design feedback. Past constraint-based layout systems use low-level spatial constraints and mostly produce only a single design. Scout advances beyond these systems by introducing high-level constraints based on design concepts (e.g., emphasis). With Scout, we have formalized several high-level constraints into their corresponding low-level spatial constraints, enabling many designs to be generated rapidly through constraint solving and program synthesis.
- 3:25-3:45: Concerto: A Framework for Combined Concrete and Abstract Interpretation, John Toman
Abstract interpretation promises sound but computable static summarization of program behavior. However, modern software engineering practices pose significant challenges to this vision, specifically the extensive use of frameworks and complex libraries. Frameworks heavily use reflection, metaprogramming, and multiple layers of abstraction, all of which confound even state-of-the-art abstract interpreters. To overcome the above difficulties, we present Concerto, a system for analyzing framework-based applications by soundly combining concrete and abstract interpretation. Concerto analyzes framework implementations using concrete interpretation, and application code using abstract interpretation. During concrete interpretation, Concerto exploits concrete information, such as the application's configuration, to precisely resolve reflection and other metaprogramming idioms. We have formalized combined interpretation and proved it sound for any abstract interpretation that satisfies a small set of preconditions. We have implemented an initial prototype of Concerto for a subset of Java, and found that our combined interpretation significantly improves analysis precision and performance.
- 2:40-2:45: Introduction and Overview, Arvind Krishnamurthy
- 2:45-3:00: Nickel: A Framework for Design and Verification of Information Flow Control Systems, Helgi Sigurbjarnarson
Nickel is a framework that helps developers design and verify information flow control systems by systematically eliminating covert channels inherent in the interface, which can be exploited to circumvent the enforcement of information flow policies. Nickel provides a formulation of noninterference amenable to automated verification, allowing developers to specify an intended policy of permitted information flows. It invokes the Z3 SMT solver to verify that both an interface specification and an implementation satisfy noninterference with respect to the policy; if verification fails, it generates counterexamples to illustrate covert channels that cause the violation.
Using Nickel, we have designed, implemented, and verified NiStar, the first OS kernel for decentralized information flow control that provides (1) a precise specification for its interface, (2) a formal proof that the interface specification is free of covert channels, and (3) a formal proof that the implementation preserves noninterference. We have also applied Nickel to verify isolation in a small OS kernel, NiKOS, and reproduce known covert channels in the ARINC 653 avionics standard. Our experience shows that Nickel is effective in identifying and ruling out covert channels, and that it can verify noninterference for systems with a low proof burden.
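The property Nickel verifies can be illustrated with a brute-force check on a toy interface. Nickel itself verifies noninterference with SMT solving over an interface specification; the two-level state machine below is invented purely for illustration.

```python
from itertools import product

# A toy two-level system: state has a "low" (public) and "high" (secret) part.
def step(state, low_in, high_in):
    low_out = (state["low"] + low_in) % 8
    return {"low": low_out, "high": (state["high"] + high_in) % 8}, low_out

def noninterferent(step_fn, steps=3):
    """Enumerate pairs of runs that agree on low inputs but may differ on
    high inputs; low-observable outputs must be identical in every pair."""
    for lows in product(range(2), repeat=steps):
        for highs1 in product(range(2), repeat=steps):
            for highs2 in product(range(2), repeat=steps):
                s1 = {"low": 0, "high": 0}
                s2 = {"low": 0, "high": 0}
                out1, out2 = [], []
                for l, h1, h2 in zip(lows, highs1, highs2):
                    s1, o1 = step_fn(s1, l, h1)
                    s2, o2 = step_fn(s2, l, h2)
                    out1.append(o1)
                    out2.append(o2)
                if out1 != out2:
                    return False   # covert channel: highs leak into low outputs
    return True

# A leaky variant that mixes a high input bit into the low output.
def leaky_step(state, low_in, high_in):
    low_out = (state["low"] + low_in + high_in) % 8
    return {"low": low_out, "high": state["high"]}, low_out

print(noninterferent(step))        # True
print(noninterferent(leaky_step))  # False
```

The failing pair of runs found for `leaky_step` plays the role of Nickel's counterexamples, which illustrate the covert channel causing a violation.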
- 3:00-3:15: iPipe: A Framework for Building Datacenter Applications Using In-networking Processors, Ming Liu
The increasing disparity between datacenter link bandwidth and CPU computing power motivates the use of in-networking processors to co-execute parts of datacenter applications. By offloading computations onto a NIC-side processor, we can not only save end-host server CPU cores but also achieve lower request latency. However, building applications with an in-networking processor brings three challenges: programmability, offloading constraints, and multi-tenancy support.
This work proposes iPipe, a framework for developing datacenter services that can take advantage of an in-networking processor on a programmable NIC. iPipe provides an actor programming model and exposes various APIs through the iPipe runtime. It enables efficient NIC hardware utilization and fair computational resource sharing via a lightweight actor scheduler, distributed shared objects, a cross-PCIe messaging tier, a shim networking stack, and a dynamic resource manager. We build three datacenter applications (i.e., a real-time data analytics engine, a distributed transaction system, and a replicated key-value store) based on iPipe and prototype them using commodity programmable NICs (i.e., Cavium LiquidIO). Evaluations on real systems show that, when processing 10Gbps of application bandwidth, NIC-side offloading reduces the average number of beefy Intel cores used by the three applications from 2.2 to 0.4, along with up to 15.8 µs of latency savings. We also demonstrate that iPipe is able to provide performance isolation with fair bandwidth allocation, and that it scales to multiple programmable NICs.
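The actor model at the heart of iPipe's programming interface can be sketched in a few lines: each actor owns private state and a mailbox, and processes one message at a time. This is a single-machine toy, not iPipe's NIC-side runtime, and the key-value message format is hypothetical.

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, and a loop that processes
    one message at a time (so no locks are needed around the state)."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.state = {}
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op, key, value, reply = self.mailbox.get()
            if op == "put":
                self.state[key] = value
                reply.put(None)
            elif op == "get":
                reply.put(self.state.get(key))

    def call(self, op, key, value=None):
        # Synchronous request/response over the mailbox.
        reply = queue.Queue()
        self.mailbox.put((op, key, value, reply))
        return reply.get()

store = Actor()
store.call("put", "k", 42)
print(store.call("get", "k"))  # -> 42
```

Because all interaction goes through messages, a scheduler is free to place an actor on the NIC or on the host and to migrate it, which is what makes the model a good fit for in-networking offload.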
- 3:15-3:30: ADARES: Adaptive Resource Management for Virtual Machines, Ignacio Cano
Virtual execution environments allow for consolidation of multiple applications onto the same physical server, thereby enabling more efficient use of server resources. However, users often statically configure the resources of virtual machines through guesswork, resulting in either insufficient resource allocations that hinder VM performance, or excessive allocations that waste precious data center resources. In this talk, we first characterize real-world resource allocation and utilization of VMs through the analysis of an extensive dataset, consisting of more than 250K VMs from over 3.6K private enterprise clusters. Our large-scale analysis confirms that VMs are often misconfigured, either overprovisioned or underprovisioned, and that this problem is pervasive across a wide range of private clusters. We then propose ADARES, an adaptive system that dynamically adjusts VM resources using machine learning techniques. In particular, ADARES leverages the contextual bandits framework to effectively manage the adaptations. Our system exploits easily collectible data, at the cluster, node, and VM levels, to make more sensible allocation decisions, and uses transfer learning to safely explore the configuration space and speed up training. Our empirical evaluation shows that ADARES can significantly improve system utilization without sacrificing performance. For instance, when compared to threshold and prediction-based baselines, it achieves more predictable VM-level performance and also reduces the number of virtual CPUs and the amount of memory provisioned by up to 35% and 60%, respectively, for synthetic workloads on real clusters.
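The contextual-bandits framing can be sketched as follows. The abstract does not specify ADARES's actual algorithm, features, or reward signal, so the epsilon-greedy learner and the coarse utilization context below are purely illustrative assumptions.

```python
import random

# Hypothetical sketch of a contextual bandit for VM resizing, in the spirit
# of ADARES. Arms are discrete resource adjustments; the context is a
# coarse bucket of observed utilization; the reward would come from
# measured VM performance after the adjustment.
ARMS = ("scale_down", "keep", "scale_up")

class ContextualEpsilonGreedy:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {}  # (context, arm) -> (pull count, running mean reward)

    def choose(self, context):
        # Explore with probability epsilon, else exploit the best-known arm
        # for this context.
        if random.random() < self.epsilon:
            return random.choice(ARMS)
        return max(ARMS, key=lambda a: self.stats.get((context, a), (0, 0.0))[1])

    def update(self, context, arm, reward):
        n, mean = self.stats.get((context, arm), (0, 0.0))
        n += 1
        self.stats[(context, arm)] = (n, mean + (reward - mean) / n)

def utilization_bucket(cpu_util):
    # Discretize CPU utilization into a coarse context feature.
    return "low" if cpu_util < 0.3 else "high" if cpu_util > 0.8 else "mid"
```

A production system would use a richer context vector (cluster-, node-, and VM-level signals, as the abstract notes) and a bandit algorithm with formal guarantees, but the choose/observe-reward/update loop is the same.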
- 3:30-3:45: Slim: OS Kernel Support for a Low-Overhead Container Overlay Network, Danyang Zhuo
Containers have become the de facto method for hosting large-scale distributed applications. Container overlay networks are essential to providing portability for containers, yet they impose significant overhead in terms of throughput, latency, and CPU utilization. The key problem is a reliance on packet transformation to implement network virtualization. As a result, each packet has to traverse the network stack twice in both the sender's and the receiver's host OS kernels. We have designed and implemented Slim, a low-overhead container overlay network that implements network virtualization by manipulating connection-level metadata. Our solution maintains compatibility with today's containerized applications. Evaluation results show that Slim improves the throughput of an in-memory key-value store by 66% while reducing the latency by 42%. Slim reduces the CPU utilization of the in-memory key-value store by 54%. Slim also reduces the CPU utilization of a web server by 28%-40%, a database server by 25%, and a stream processing framework by 11%.
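The connection-level idea can be shown with a small sketch. This is not Slim's implementation (which works inside the OS kernel); the mapping table and names below are illustrative assumptions that only convey the contrast with per-packet transformation.

```python
# Illustrative sketch of connection-level network virtualization, in the
# spirit of Slim. A per-packet overlay applies the virtual-to-physical
# address mapping to every packet; a connection-level approach applies it
# once, at connection setup, so data packets traverse the host network
# stack directly. Names here are hypothetical.
class ConnectionTable:
    def __init__(self, virt_to_phys):
        self.virt_to_phys = virt_to_phys  # overlay IP -> host IP

    def translate_connect(self, virt_addr, virt_port):
        # Called once when a container opens a connection; the returned
        # physical endpoint is what the socket actually connects to.
        return (self.virt_to_phys[virt_addr], virt_port)

table = ConnectionTable({"10.0.0.2": "192.168.1.7"})
```

After this one-time translation, no per-packet rewriting is needed, which is where the throughput, latency, and CPU savings come from.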
- 2:40-2:45: Introduction and Overview, Kevin Jamieson
- 2:45-3:00: Deep generative models meet science, Samuel Ainsworth
Deep generative models have recently yielded encouraging results in producing subjectively realistic samples of complex data. Far less attention has been paid to making these generative models interpretable. In many scenarios, ranging from scientific applications to finance, the observed variables have a natural grouping. It is often of interest to understand systems of interaction amongst these groups, and latent factor models (LFMs) are an attractive approach. However, traditional LFMs are limited by assuming a linear correlation structure. We present an output interpretable VAE (oi-VAE) for grouped data that models complex, nonlinear latent-to-observed relationships. We combine a structured VAE composed of group-specific generators with a sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data. We further show that in these situations, the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.
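The sparsity-inducing prior can be sketched as a group-lasso-style penalty: each latent-to-group weight block is penalized by its L2 norm, which can drive entire latent-to-group interactions to exactly zero and is what makes the learned factors interpretable. The function below is a simplified illustration, not the full oi-VAE objective.

```python
import math

# Sketch of a group-sparsity penalty in the spirit of oi-VAE's prior:
# sum of L2 norms over latent-to-group weight blocks (a group lasso).
# lam controls how aggressively whole blocks are pushed to zero.
def group_sparsity_penalty(weight_blocks, lam=0.1):
    return lam * sum(
        math.sqrt(sum(w * w for w in block)) for block in weight_blocks
    )
```

Because the norm is taken per block rather than per weight, the penalty zeroes out a latent dimension's influence on an entire observation group at once, rather than scattering small weights everywhere.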
- 3:00-3:15: The Illusion of Change: Learning from Large-scale Heterogeneous and Sparse Data, Ramya Korlakai Vinayak
In recent years, societal-scale data has begun to play a prominent role in social and computational science research. An active strand in this literature studies the impact of important geopolitical events on the behavior of large populations --- to understand, for instance, how emergency events impact the diffusion of information, or how new policies change patterns of social interaction. Such research often draws critical inferences from observing how an exogenous event changes meaningful metrics like network degree or contact entropy. As we highlight in this talk, the standard methodologies for computing these inferences will suffer from systematic statistical bias when the event also changes the sparsity of the data.
- 3:15-3:30: Accelerating robot learning, Aravind Rajeswaran
How can we create artificial agents that have an embodiment, move, and act in the world like humans? This question is of immense interest to a number of fields including robotics, character animation in graphics, and computational neuroscience. Reinforcement learning and stochastic optimal control provide a generic formulation to study decision making and motor control. Direct use of reinforcement learning with deep neural networks has led to tremendous advances like mastering the ancient game of Go. However, this generality comes at the expense of efficiency. Vast amounts of data are required to successfully deploy these algorithms, and as a result their impact in robotics has been limited. I will summarize recent efforts from our research group to accelerate and stabilize robot learning. These acceleration schemes have enabled agents to quickly acquire a vast repertoire of skills such as humanoid locomotion and dexterous in-hand manipulation. This talk is based on joint work with Emanuel Todorov, Sham Kakade, Kendall Lowrey, Svet Kolev, and collaborators from UC Berkeley, Google Brain, and OpenAI.
- 3:30-3:45: Object-Level Reinforcement Learning, William Agnew
Reinforcement learners have become vastly more powerful by incorporating deep learning techniques, playing Atari, Mario, Go, and other games with superhuman skill. However, these learners require vast amounts of training data to become skilled. For example, to master Pong, state-of-the-art reinforcement learners require tens of millions of game frames, equivalent to months of play time at human speed. We create a learner with a minimal unsupervised perceptual system, capable of detecting and tracking objects, which allows for modeling of, and planning in, the environment. We find this learner is over 1000x more data efficient than DQN while achieving a better score on Atari Breakout. In addition, this learner is interpretable and highly parallelizable, shifting the learning bottleneck from the amount of training data available to computations easily accelerated with GPUs.
- 3:50-3:55: Introduction and Overview, Maya Cakmak
- 3:55-4:07: End-user robot programming, Maya Cakmak
Robots that can assist humans in everyday tasks have the potential to improve people's quality of life and bring independence to persons with disabilities. A key challenge in realizing such robots is programming them to meet the unique and changing needs of users and to robustly function in their unique environments. Most research in robotics targets this challenge by attempting to develop universal or adaptive robotic capabilities. This approach has had limited success because it is extremely difficult to anticipate all possible scenarios and use-cases for general-purpose robots or collect massive amounts of data that represent each scenario and use-case. Instead, my research aims to develop robots that can be programmed in-context and by end-users after they are deployed, tailoring them to the specific environment and user preferences. To that end, my students and I have been developing new techniques and tools that allow intuitive and rapid programming of robots to do useful tasks. In this talk I will introduce some of these techniques and tools, demonstrate their capabilities, and discuss some of the challenges in making them work in the hands of potential users and deploying them in the real world.
- 4:07-4:19: Robot-Assisted Feeding: From Bite Acquisition to Bite Transfer, Tapo Bhattacharjee
Successful robotic assistive feeding depends on reliable bite acquisition and easy bite transfer. However, bite acquisition is challenging because it requires manipulation of deformable, hard-to-model food items with varying compliance, texture, size, and shape, and thus a fixed manipulation strategy may not work. Bite transfer is not trivial because it constitutes a unique type of robot-human handover in which the human must use the mouth, which places a high burden on the robot to make the transfer easy. Also, the dynamics of bite transfer change during group dining, and inferring the correct time to transfer a bite is a challenge. This talk will focus on algorithms and technologies used to address these issues of bite acquisition, bite transfer, and bite timing. We first develop a taxonomy of food manipulation relevant to assistive feeding to organize the complex interplay between fork and food. Using insights from the taxonomy, we then develop an autonomous robotic system that leverages multiple sensing modalities to perceive food item properties. Finally, we use the autonomous robotic system to implement our algorithms that showcase food item dependent manipulation primitives to reliably acquire a variety of solid food items and easily feed people in a timely manner.
- 4:19-4:31: Combining Model-Based and Learning-Based Approaches for Robot Perception, Tanner Schmidt
Deep learning is a powerful technique for approximating functions from sets of input-output pairs, which often works even in cases where little is known about the true function being approximated. This is particularly useful for tasks such as image classification, as the "true" mapping from images to semantic categories is rather opaque. On the other hand, it is often the case that some domain-specific knowledge about the function being learned is available, which can be incorporated into network structures or supervisory signals to guide learning. In this talk, I'll discuss some of the prior and ongoing work in the Robotics and State Estimation lab in which explicit models are used to improve deep learning outcomes, including my own work on using SLAM as a mechanism for self-supervision.
- 4:31-4:43: Effective Model Usage in Robot Control, Kendall Lowrey
Recent work in deep reinforcement learning has enabled many new developments in robotic control. These results, however, are often data inefficient and constrained to simulation. In this talk we discuss recent work that explores how to safely deploy a controller to hardware for dynamic tasks, and a new framework for continuous learning with directed exploration. Key to both is the effective use of a simulated dynamical model.
- 4:43-4:55: Deep Thinking Robots, Sanjiban Choudhury
Real-world robotics requires sequential decision-making in uncertain, unstructured environments without much scope for trial-and-error or restarts. Enabled by recent advances in computation, end-to-end data-driven systems have achieved considerable success in simulated and somewhat controlled real-world settings. Yet such systems, even in simulation, completely break down when faced with difficult, long-horizon tasks that are laced with dead-ends. This is because they produce reactive strategies that do not harness the same computational power of machines to explicitly simulate and reason about potential future outcomes. In this talk, I will first emphasize the importance of model-based search while contending with difficult, long-horizon robotic tasks. Thereafter, I will motivate how we can bring back data-driven techniques to enhance the capabilities of model-based search in varied domains such as robot navigation and manipulation.
- 3:50-3:55: Introduction and Overview, Jan Buys
- 3:55-4:10: Using Characters to Improve Neural Text Generation for Stories, Elizabeth Clark
Standard neural methods for generating text often fail to produce content that is relevant and coherent with the preceding text. This can be especially problematic for text generation in settings with long contexts, such as chatbots and story generation. In this talk, I describe a method for generating text for stories that uses the characters and other entities mentioned in the story so far. I will discuss the ways in which including this entity information as additional context improves the generated text.
- 4:10-4:25: ATOMIC: An Atlas of Machine Commonsense for If-Then reasoning, Maarten Sap
AI systems can achieve superhuman performance on certain tasks, but they typically cannot learn causal knowledge, only complex correlational patterns. We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 300k textual descriptions. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on inferential knowledge organized as typed if-then relations with variables (e.g., "if X pays Y a compliment, then Y will likely return the compliment"). We propose nine if-then relation types to distinguish causes vs. effects, agents vs. themes, voluntary vs. involuntary events, and actions vs. mental states. By generatively training on the rich inferential knowledge described in ATOMIC, we show that neural models can acquire simple commonsense capabilities and reason about previously unseen events. Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
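The typed if-then structure can be pictured as a store of (event, relation, inference) triples. The relation labels below follow ATOMIC's naming style loosely; treat the specific labels and entries as illustrative rather than quoted from the released resource.

```python
# Sketch of ATOMIC-style inferential knowledge as typed if-then triples:
# (event with variables, relation type, inference). Entries and relation
# labels here are illustrative.
atomic = [
    ("X pays Y a compliment", "xIntent", "X wants to be nice"),
    ("X pays Y a compliment", "oReact", "Y feels flattered"),
    ("X pays Y a compliment", "oEffect", "Y returns the compliment"),
]

def infer(event, relation):
    """Return all 'then' inferences for an event under a relation type."""
    return [then for ev, rel, then in atomic if ev == event and rel == relation]
```

The neural models described in the abstract are trained to *generate* the inference given the event and relation type, so they can answer queries about events never seen in the atlas.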
- 4:25-4:40: Pyramidal Recurrent Unit for Language Modeling, Rik Koncel-Kedziorski
LSTMs are powerful tools for modeling contextual information, as evidenced by their success at the task of language modeling. However, modeling contexts in very high dimensional space can lead to poor generalizability. We introduce the Pyramidal Recurrent Unit (PRU), which enables learning representations in high dimensional space with more generalization power and fewer parameters. PRUs replace the linear transformation in LSTMs with more sophisticated interactions including pyramidal and grouped linear transformations. This architecture gives strong results on word-level language modeling while reducing the number of parameters significantly. In particular, PRU improves the perplexity of a recent state-of-the-art language model (Merity et al., 2018) by up to 1.3 points while learning 15-20% fewer parameters. For a similar number of model parameters, PRU outperforms all previous RNN models that exploit different gating mechanisms and transformations. We provide a detailed examination of the PRU and its behavior on language modeling tasks.
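The grouped linear transformation that underlies the parameter savings can be sketched directly: split a d-dimensional input into g groups and give each group its own small weight matrix, shrinking the parameter count from d*d to g*(d/g)^2. The plain-Python version below is only a shape illustration, not the PRU itself (which combines this with pyramidal transformations inside the recurrent cell).

```python
# Sketch of a grouped linear transformation, one of the building blocks the
# PRU uses to cut parameters. With d = 4 and g = 2, a full linear layer has
# 16 weights while the grouped version has 2 * (2*2) = 8.
def grouped_linear(x, weights):
    """x: flat input list of length d; weights: one (out x in) matrix per group."""
    g = len(weights)
    size = len(x) // g
    out = []
    for i, W in enumerate(weights):
        chunk = x[i * size:(i + 1) * size]  # this group's slice of the input
        out.extend(sum(w * v for w, v in zip(row, chunk)) for row in W)
    return out
```

The trade-off is that groups do not mix information with each other; architectures that use grouping typically add a separate mechanism (in the PRU's case, the pyramidal transformation) to restore cross-group interaction.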
- 4:40-4:55: Learning to Write with Cooperative Discriminators, Jan Buys
Text generated by Recurrent Neural Networks (RNNs) is often generic, incoherent, repetitive or even self-contradictory. We propose a framework that addresses these issues by composing a committee of discriminators that can guide a base RNN language generator towards more globally coherent generations. Concretely, each discriminator specializes in a different principle of communication, such as those encoded by Grice’s maxims. The discriminators are combined with the base generator through a composite decoding objective. Human evaluation demonstrates that the text generated by our model is preferred over that of strong baselines by a large margin, significantly enhancing the overall coherence, style and informativeness of the generations.
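The composite decoding objective described above can be sketched as a weighted sum: each candidate continuation is scored by the base language model plus weighted discriminator scores, and decoding prefers the highest composite score. The toy discriminator below (a repetition penalty standing in for a Grice-style "manner" signal) and the weights are illustrative assumptions, not the learned discriminators from the talk.

```python
# Sketch of a composite decoding objective: base LM score plus a weighted
# sum of discriminator scores. Discriminators and weights are toy stand-ins.
def composite_score(lm_score, candidate, discriminators, weights):
    return lm_score + sum(w * d(candidate) for d, w in zip(discriminators, weights))

def repetition_penalty(text):
    # 0 if all words are distinct, increasingly negative with repetition.
    words = text.split()
    return -(len(words) - len(set(words)))

# The base LM slightly prefers the repetitive candidate, but the
# discriminator overrules it.
candidates = {"the dog ran home": -1.0, "the dog dog dog": -0.5}
best = max(
    candidates,
    key=lambda c: composite_score(candidates[c], c, [repetition_penalty], [1.0]),
)
```

In the actual system each discriminator is a learned model targeting a different communication principle, and the mixture weights are also learned, but the re-ranking arithmetic is the same.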
- 3:50-3:55: Introduction and Overview, Alex Mariakakis
- 3:55-4:10: Drunk User Interfaces: Determining Blood Alcohol Level through Everyday Smartphone Tasks, Alex Mariakakis
Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving; not everyone has access to such specialized hardware. This talk will be about drunk user interfaces: smartphone interfaces that measure the extent to which alcohol affects a person's motor coordination and cognition using performance metrics and sensor data. We examined five drunk user interfaces and combined them to form the DUI app. DUI uses machine learning models trained on performance metrics and sensor data to estimate a person's blood alcohol level (BAL).
- 4:10-4:25: Seismo: Blood Pressure Monitoring using Built-in Smartphone Accelerometer and Camera, Parker Ruth
Although cost-effective at-home blood pressure monitors are available, a complementary mobile solution can ease the burden of measuring BP at critical points throughout the day. Seismo is a smartphone-based BP monitoring application. The technique relies on measuring the time between the opening of the aortic valve and the pulse later reaching a peripheral arterial site. It uses the smartphone's accelerometer to measure the vibration caused by the heart valve movements and the smartphone's camera to measure the pulse at the fingertip. The system was evaluated in a nine-participant longitudinal BP perturbation study. Each participant completed four sessions that involved stationary biking at multiple intensities. Recent work has implemented the entire pulse transit time calculation in real time on the smartphone.
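The core measurement is the pulse transit time (PTT): the delay between a valve-opening event in the accelerometer signal and the pulse arriving at the fingertip in the camera signal. The sketch below assumes event timestamps have already been extracted by peak detection, and omits the PTT-to-BP calibration entirely.

```python
# Sketch of the pulse-transit-time computation Seismo relies on. Inputs are
# timestamps (in seconds) of aortic-valve openings (from the accelerometer)
# and fingertip pulse arrivals (from the camera); peak detection and the
# PTT-to-blood-pressure calibration are simplified away.
def pulse_transit_times(valve_events, pulse_events):
    """Pair each valve opening with the next fingertip pulse; return ms delays."""
    ptts = []
    for v in valve_events:
        nxt = next((p for p in pulse_events if p > v), None)
        if nxt is not None:
            ptts.append((nxt - v) * 1000.0)
    return ptts
```

Shorter transit times generally correspond to higher blood pressure, which is why a per-user calibration step maps PTT onto a BP estimate.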
- 4:25-4:40: CASPER: Capacitive Serendipitous Power Transfer for Through-Body Charging of Multiple Wearable Devices, Manuja Sharma
CASPER is a charging solution to enable a future of wearable devices that are much more distributed on the body. Instead of charging every device we adorn our bodies with, be it distributed health sensors or digital jewelry, everyday objects such as beds, seats, and frequently worn clothing can be augmented to serve as convenient charging base stations. Our system works by treating the human body as a conductor and capacitively charging devices worn on the body whenever a well-coupled electrical path is created during natural use of everyday objects. We performed an extensive parameter characterization for through-body power transfer and present a design trade-off visualization to aid designers looking to integrate our system. We used this design process in the development of a smart bandage device and an LED-adorned temporary tattoo that charges at hundreds of microwatts using our system.
- 4:40-4:55: CapHarvester: A Stick-on Capacitive Energy Harvester Using Stray Electric Field from AC Power Lines, Farshid Salemi Parizi
Internet of Things (IoT) applications and platforms are becoming increasingly prevalent. Alongside this growth of smart devices comes added costs for deployment, maintenance, and the need to manage power consumption so as to reduce recurrent costs of replacing batteries. To alleviate recurrent battery replacement and maintenance, we propose a novel battery-free, stick-on capacitive energy harvester that harvests the stray electric field generated around AC power lines (110 V/230 V) without an ohmic connection to an earth ground reference, thereby obviating the need for cumbersome scraping of paint on concrete walls or installing an earth ground plate. Furthermore, our harvester does not require any appliance or load to be operating on the power line and can continuously harvest power after deployment. In effect, end-users are expected to simply stick the proposed harvester onto any existing power-line cord in order to power a sensing platform. Our controlled lab measurements and real-world deployments demonstrate that our device can harvest 270.6 µJ of energy from a 14 cm long interface in 12 min. We also demonstrate several applications, such as distributed temperature monitoring, appliance state monitoring, and environmental parameter logging for indoor farming.
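For scale, the reported harvest works out as follows (arithmetic on the figures quoted in the abstract; the interpretation as a duty-cycled sensing budget is our own framing).

```python
# Back-of-the-envelope check of the reported harvest: 270.6 uJ over 12 min
# corresponds to an average harvested power of roughly 0.38 uW, which is
# why applications built on such harvesters duty-cycle aggressively,
# accumulating charge between brief sense-and-transmit bursts.
energy_uj = 270.6          # microjoules, from the 14 cm interface
duration_s = 12 * 60       # 12 minutes in seconds
avg_power_uw = energy_uj / duration_s  # uJ/s == uW
```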