Break-out Topics and Talks

Wednesday, October 22, 2014


Please check back for updates.
Session I
11:15am - 12:20pm
  • SANE 1 (CSE 305)
  • Big Data Management (CSE 403)
  • Graphics/Vision (CSE 691)

1:00 - 1:30pm: Keynote Talk (Atrium)

Session II
1:30 - 2:35pm
  • Sensing in Health (CSE 305)
  • SANE 2 (CSE 403)
  • Robotics 1 (CSE 691)

Session III
2:40 - 3:45pm
  • HCI (CSE 305)
  • Mobile Sensing and Interaction (CSE 403)
  • Robotics 2 (CSE 691)

Session IV
3:50 - 4:55pm
  • Feedback Session (CSE 305)
  • Molecular Programming and Synthetic Biology (CSE 403)
  • Wireless and Sensing Innovations (CSE 691)

5:00 - 7:00pm: Reception and Lab Tours (with posters and demos), various labs and locations around the building
7:15 - 7:45pm: Program: Madrona Prize and People's Choice Awards (Microsoft Atrium)


Last updated 3 November 2014.

Session I

  • Systems, Architecture, and Networking (SANE) 1 (CSE 305)

    • 11:15-11:20: Introduction and Overview, Dan Ports
    • 11:20-11:35: Practical Approximate Computing, Adrian Sampson (PDF slides)

      Computers expend time and energy to ensure that errors effectively never happen. But this perfect precision is wasted on many kinds of programs. For many graphics, vision, machine learning, and gaming applications, perfect precision is unnecessary or even unattainable. Approximate computing proposes to design more efficient systems that can make mistakes. We'll discuss some recent developments in approximate computing that are ready to use today: an FPGA-based neural-network acceleration engine, an approximate compiler based on LLVM, and a WiFi networking stack that trades off errors for bandwidth.
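      To give a flavor of the accuracy-for-efficiency trade, here is an illustrative sketch of loop perforation, one classic approximation technique (this is my own toy example, not the FPGA engine or LLVM compiler named above):

```python
import math

def mean_exact(xs):
    # Precise baseline: touch every sample.
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=4):
    # Approximate variant: visit every skip-th sample, doing ~1/skip of the work.
    subset = xs[::skip]
    return sum(subset) / len(subset)

# A smooth input tolerates the approximation well.
signal = [math.sin(i / 10.0) + 1.0 for i in range(1000)]
exact = mean_exact(signal)
approx = mean_perforated(signal)
error = abs(exact - approx)
```

      The approximate version does a quarter of the work; for smooth inputs like this one the error stays small, which is exactly the bargain approximate systems offer.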

    • 11:35-11:50: Speeding up page load with SplitBrowser, Sophia Wang

      Web page loads are slow due to inefficiencies intrinsic to the page load process. Our study shows that three-fourths of the CSS is not used during a page load and that 15% of page load times are spent waiting for parsing-blocking objects to be loaded.

      To address these inefficiencies, SplitBrowser speeds up page load times by splitting up the page load process. By preloading pages on a cloud server with more compute power, SplitBrowser largely reduces the inefficiencies of page loads on the client. SplitBrowser is fast for displaying page contents, ensures that users are able to continue to interact with the page, and is compatible with caching, CDNs, and security features that enforce same-origin policies. Our evaluations show that SplitBrowser reduces page load times by more than half for both mobile phones and desktop machines while incurring modest overheads to data usage.

    • 11:50-12:05: Customizable and Extensible Deployment for Mobile/Cloud Applications, Irene Zhang (PDF slides)

      Modern applications face new challenges in managing today’s highly distributed and heterogeneous environment. For example, they must stitch together code that crosses smartphones, tablets, personal devices, and cloud services, connected by variable wide-area networks, such as WiFi and 4G. This paper describes Sapphire, a distributed programming platform that simplifies the programming of today’s mobile/cloud applications. Sapphire’s key design feature is its distributed runtime system, which supports a flexible and extensible deployment layer for solving complex distributed systems tasks, such as fault-tolerance, code-offloading, and caching. Rather than writing distributed systems code, programmers choose deployment managers that extend Sapphire’s kernel to meet their applications’ deployment requirements. In this way, each application runs on an underlying platform that is customized for its own distribution needs.
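      To make the "choose a deployment manager" idea concrete, here is a minimal sketch (hypothetical names and interface; Sapphire's actual API differs): a caching manager interposes on calls to an application object, so the application code itself contains no distribution logic.

```python
class CacheManager:
    """Illustrative deployment manager: memoizes calls to an object, standing in
    for client-side caching of a remote object. Hypothetical, not Sapphire's API."""
    def __init__(self, obj):
        self.obj = obj
        self.cache = {}

    def call(self, method, *args):
        key = (method, args)
        if key not in self.cache:                 # cache miss: hit the "server"
            self.cache[key] = getattr(self.obj, method)(*args)
        return self.cache[key]

class Profile:
    """Application object; the programmer writes no distribution code here."""
    def __init__(self, name):
        self.name = name
        self.fetches = 0

    def get_name(self):
        self.fetches += 1        # counts how often the backing object is hit
        return self.name

p = CacheManager(Profile("alice"))
first = p.call("get_name")
second = p.call("get_name")      # served from the cache; no second fetch
```

      Swapping `CacheManager` for, say, a replication or offloading manager would change the deployment behavior without touching `Profile`, which is the separation the talk describes.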

    • 12:05-12:20: User-Controlled Privacy: Enforcing Privacy Policies on Mobile/Cloud Applications, Adriana Szekeres

      Today's mobile devices sense, collect, and store enormous amounts of personal information, while our favorite applications let us share that information with family and friends. We trust these systems and applications with our sensitive data and expect them to maintain its privacy. As we have repeatedly witnessed, this trust is often violated due to bugs, confusing privacy controls, or an application’s desire to monetize personal data.

      This talk presents Agate, a trusted distributed runtime system that: (1) gives users the power to define privacy policies for their data, and (2) enforces those policies without needing to trust applications or their programmers. Agate combines aspects of access control and information flow control to ensure that applications executing across mobile platforms and cloud servers meet our privacy expectations. We designed and implemented an Agate prototype to run on Android systems for smartphones, tablets, and servers. Both empirical and measurement data demonstrate that Agate effectively supports distributed, social data-sharing applications while preventing the leakage of sensitive data at only a moderate performance cost compared to Android.

  • Big Data Management (CSE 403)

    • 11:15-11:20: Overview of the database group and eScience Institute, Magda Balazinska
    • 11:20-11:40: The Myria Big Data Management and Analytics Service, Magda Balazinska

      Myria is a cloud service developed and operated by the University of Washington database group and eScience Institute. The Myria design is driven by requirements from real users and complex workflows. In this talk, we will present the key features of the Myria service, its architecture, and some example applications that use it.

    • 11:40-12:00: How to make queries go fast and play nice with parallel languages, Brandon Myers (PDF slides)

      I'll present Radish, a query compiler that generates distributed programs. Recent efforts have shown that compiling queries to machine code for a single core can remove iterator and control overhead for significant performance gains. So far, systems that generate distributed programs compile plans only for single processors and stitch them together with messaging.

      In this talk, I'll describe an approach for translating query plans into distributed programs by targeting the partitioned global address space (PGAS) parallel programming model as an intermediate representation. This approach affords a natural adaptation of pipelining techniques used in single-core query compilers and an overall simpler design. We adapt pipelined algorithms to PGAS languages, describe efficient data structures for PGAS query execution, and implement techniques for mitigating the overhead resulting from handling a multitude of fine-grained tasks.

      We evaluated Radish on graph benchmark and application workloads and found that it is 4× to 100× faster than Shark, a recent distributed query engine optimized for in-memory execution. Our work makes important first steps towards ensuring that query processing systems can benefit from future advances in parallel programming and co-mingle with state-of-the-art parallel programs.

    • 12:00-12:20: Managing Data License Agreements with DataLawyer, Prasang Upadhyaya

      Data has value and is increasingly being exchanged for commercial and research purposes. Data, however, is typically accompanied by terms of use, which limit how it can be used. To date, there are only a few, ad-hoc methods to enforce these terms. We propose DataLawyer, a new system to formally specify usage policies and check them automatically at query runtime in a relational database management system (DBMS). We develop an extensible model to specify policies compactly and precisely. We also implement a prototype that uses novel algorithms to efficiently evaluate policies that can cut policy checking overheads to only a few percent of the total query runtime.
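      The runtime check has roughly this flavor (an illustrative sketch with made-up policies; DataLawyer's actual policy model is far richer and operates inside the DBMS):

```python
# Each hypothetical policy is a predicate over a query's context; all must pass
# before the query is allowed to execute.
def row_limit(ctx):
    return ctx.get("rows_requested", 0) <= 10_000

def attribution(ctx):
    return ctx.get("cites_source", False)

POLICIES = [row_limit, attribution]

def run_query(ctx, execute):
    violated = [p.__name__ for p in POLICIES if not p(ctx)]
    if violated:
        raise PermissionError("terms of use violated: " + ", ".join(violated))
    return execute()

result = run_query({"rows_requested": 500, "cites_source": True},
                   lambda: ["row1", "row2"])
```

      The engineering challenge the talk addresses is making checks like these cheap enough to run on every query, which naive per-row evaluation is not.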

  • Graphics/Vision (CSE 691)

    • 11:15-11:20: Introduction and Overview, Ali Farhadi
    • 11:20-11:40: Photo Uncrop from 3D Reconstruction, Qi Shan

      We address the problem of extending the field of view of a photo, an operation we call uncrop. Given a reference photograph to be uncropped, our approach selects, reprojects, and composites a subset of Internet imagery into a larger image around the reference using the underlying scene geometry. The proposed Markov Random Field based approach is capable of handling large Internet photo collections with arbitrary viewpoints, dramatic appearance variation, and complicated scene layout. We show visually compelling results on a wide range of real-world landmarks.

    • 11:40-12:00: Incorporating Scene Context and Object Layout into Appearance Modeling, Hamid Izadinia (PDF slides)

      A scene category imposes tight distributions over the kind of objects that might appear in the scene, the appearance of those objects, and their layout. In this paper, we propose a method to learn scene structures that can encode three main interlacing components of a scene: the scene category, the context-specific appearance of objects, and their layout. Our experimental evaluations show that our learned scene structures outperform the state-of-the-art Deformable Part Models method in detecting objects in a scene. Our scene structure provides a level of scene understanding that is amenable to deep visual inferences. The scene structures can also generate features that can later be used for scene categorization. Using these features, we also show promising results on scene categorization.

    • 12:00-12:20: Total Moving Face Reconstruction, Supasorn Suwajanakorn

      We present an approach that takes a single video of a person's face and reconstructs a high-detail 3D shape for each video frame. We target videos taken under uncontrolled and uncalibrated imaging conditions, such as YouTube videos of celebrities. At the heart of this work is a new dense 3D flow estimation method coupled with shape from shading. Unlike related work, we do not assume the availability of a blend shape model, nor require the person to participate in a training/capturing process. Instead, we leverage the large number of photos that are available per individual in personal or Internet photo collections. We show results for a variety of video sequences that include various lighting conditions, head poses, and facial expressions.

Session II

  • Sensing in Health (CSE 305)

    • 1:30-1:35: Introduction and Overview, Shwetak Patel
    • 1:35-1:55: BiliCam: Using Mobile Phones for Assessing Newborn Jaundice, Lilian de Greef

      Health sensing through smartphones has received considerable attention in recent years because of the devices' ubiquity and promise to lower the barrier for tracking medical conditions. We focus on using smartphones to monitor newborn jaundice, which manifests as a yellow discoloration of the skin. Although a degree of jaundice is common in healthy newborns, early detection of extreme jaundice is essential to prevent permanent brain damage or death. Current detection techniques, however, require clinical tests with blood samples or other specialized equipment. Consequently, screening at home often relies on visual assessment of the newborn's skin color, which is known to be unreliable. To this end, we present BiliCam, a low-cost system that uses smartphone cameras to assess newborn jaundice. We evaluated BiliCam on 100 newborns, yielding a 0.85 rank order correlation with the gold standard blood test. We also discuss usability challenges and design solutions to make the system practical.
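      The 0.85 figure is a Spearman rank-order correlation. As a reminder of what that measures, here is the computation on made-up paired readings (these numbers are illustrative, not BiliCam data):

```python
def ranks(values):
    # Rank 1 = smallest value; assumes no ties, for brevity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman's rho via the classic formula 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical paired readings: smartphone estimate vs. blood test (mg/dL).
app = [5.1, 7.9, 3.2, 12.4, 9.0, 6.5]
lab = [5.5, 8.7, 2.9, 13.0, 8.1, 6.9]
rho = spearman(app, lab)
```

      Because the statistic depends only on ranks, it rewards the app for ordering babies correctly by severity even if its absolute readings are biased.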

    • 1:55-2:15: WiBreathe: Using Wireless Signals for Health Monitoring, Ruth Ravichandran (PDF slides)

      Sensing respiration rate has many applications in monitoring various health conditions, such as sleep apnea and chronic obstructive pulmonary disease. In this paper, we present WiBreathe, a wireless, high-fidelity, and non-invasive breathing monitor that leverages wireless signals at 2.4 GHz to estimate an individual's respiration rate. Our work extends past approaches of using wireless signals for respiratory monitoring by using only a single transmitter-receiver pair in the same frequency range as commodity Wi-Fi signals to estimate the respiratory rate of an individual. This is done irrespective of whether they are in line of sight or not (e.g., through walls). Furthermore, we demonstrate the capability of WiBreathe to detect multiple people and, by extension, their respiration rates. We evaluate our approach in various natural environments and show that we can track breathing with an accuracy of 2.16 breaths per minute when compared to a clinical respiratory chest band.
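      The core signal-processing step in systems like this is recovering a slow periodicity from an amplitude trace. A minimal sketch of one standard way to do that (autocorrelation peak-picking on synthetic data; WiBreathe's actual pipeline is more sophisticated):

```python
import math

FS = 10.0  # samples per second
# Synthetic amplitude trace: a 0.25 Hz breathing oscillation (15 breaths/minute).
trace = [math.sin(2 * math.pi * 0.25 * (i / FS)) for i in range(600)]

def autocorr(xs, lag):
    # Average product of the signal with a lagged copy of itself.
    n = len(xs) - lag
    return sum(xs[i] * xs[i + lag] for i in range(n)) / n

# The breathing period shows up as the lag with the strongest self-similarity;
# search lags of 1-6 s, i.e. roughly 10-60 breaths per minute.
best_lag = max(range(int(1 * FS), int(6 * FS)), key=lambda L: autocorr(trace, L))
breaths_per_min = 60.0 * FS / best_lag
```

      On this clean synthetic trace the peak lands at a 4-second lag, i.e. 15 breaths per minute; real RF traces add noise, motion artifacts, and multiple people, which is where the hard work lies.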

    • 2:15-2:35: DOSE: Detecting User-driven Operating States of Electronic Devices Using a Single Sensing Point, Ke-Yu Chen (PDF slides)

      Electricity and appliance usage information can often reveal the nature of human activities in a home. For instance, sensing the use of a vacuum cleaner, a microwave oven, and kitchen appliances can give insights into a person's current activities. We introduce DOSE, a significant advancement for inferring user-driven operating states of electronic devices from a single sensing point in a home. When an electronic device is in operation, it generates time-varying Electromagnetic Interference (EMI) based upon its operating states (e.g., vacuuming on a rug vs. hardwood floor). This EMI noise is coupled to the powerline and can be picked up by a single piece of sensing hardware attached to a wall outlet in the house. Unlike prior data-driven approaches, we employ domain knowledge of the device's circuitry for semi-supervised model training to avoid a tedious labeling process. We evaluated DOSE in a residential house for 2 months and found that operating states for 16 appliances could be estimated with an average accuracy of 93.8%. These fine-grained electrical characteristics afford rich feature sets of electrical events and have the potential to support various applications such as in-home activity inference, energy disaggregation, and device failure detection.

  • Systems, Architecture, and Networking (SANE) 2 (CSE 403)

    • 1:30-1:35: Introduction and Overview, Luis Ceze
    • 1:35-1:55: Grappa: Latency-Tolerant Shared Memory for Modern Data-Center Applications, Jacob Nelson (PDF slides)

      Grappa is a modern take on software distributed shared memory (DSM) for in-memory data-intensive applications. Grappa enables users to program a cluster as if it were a single, large, non-uniform memory access (NUMA) machine. Performance scales up even for applications that have poor locality and input-dependent load distribution. Grappa addresses deficiencies of previous DSM systems by exploiting application parallelism, trading off latency for throughput.

      We evaluate Grappa by using it to build an in-memory map/reduce framework (10x faster than Spark); a vertex-centric framework inspired by GraphLab (1.33x faster than native GraphLab); and a relational query execution engine (12.5x faster than Shark). All these frameworks required only 60-690 lines of Grappa code.

    • 1:55-2:15: Tales of the Tail: Hardware, OS, and Application-level Sources of Tail Latency, Naveen Kumar Sharma (PDF slides)

      Interactive services often have large-scale parallel implementations. To deliver fast responses, the median and tail latencies of a service’s components must be low. In this paper, we explore the hardware, OS, and application-level sources of poor tail latency in high throughput servers executing on multi-core machines.

      We model these network services as a queuing system in order to establish the best-achievable latency distribution. Using fine-grained measurements, we then explore why these servers exhibit significantly worse tail latencies than queuing models alone predict.
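      As a concrete instance of such a queueing baseline, the textbook M/M/1 model already predicts a heavy tail from queueing alone (my own illustrative numbers below): the response time is exponentially distributed with rate mu - lambda, so any quantile follows in closed form.

```python
import math

def mm1_latency_quantile(lam, mu, p):
    # M/M/1 response time T ~ Exponential(mu - lam); returns the p-th quantile
    # in seconds. lam = arrival rate, mu = service rate (requests/sec).
    assert lam < mu, "queue must be stable"
    return -math.log(1.0 - p) / (mu - lam)

mu = 1000.0    # server handles 1000 requests/sec
lam = 800.0    # offered load: 80% utilization
median = mm1_latency_quantile(lam, mu, 0.50)
p99 = mm1_latency_quantile(lam, mu, 0.99)
tail_blowup = p99 / median   # ~6.6x, before any hardware/OS effects at all
```

      Measured servers show tails far worse than this ideal baseline, and attributing that gap to specific hardware, OS, and application causes is the subject of the talk.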

    • 2:15-2:35: Arrakis: The Operating System is the Control Plane, Simon Peter (PDF slides)

      Recent device hardware trends enable a new approach to the design of network server operating systems. In a traditional operating system, the kernel mediates access to device hardware by server applications, to enforce process isolation as well as network and disk security. We have designed and implemented a new operating system, Arrakis, that splits the traditional role of the kernel in two. Applications have direct access to virtualized I/O devices, allowing most I/O operations to skip the kernel entirely, while the kernel is re-engineered to provide network and disk protection without kernel mediation of every operation. We describe the hardware and software changes needed to take advantage of this new abstraction, and we illustrate its power by showing improvements of 2-5x in latency and 9x in throughput for a popular persistent NoSQL store relative to a well-tuned Linux implementation.

  • Robotics Session 1 (CSE 691)

    • 1:30-1:35: Introduction and Overview, Maya Cakmak (PDF slides)
    • 1:35-1:55: End-User Programming of General-Purpose Robots, Maya Cakmak (PDF slides)

      Robots that can assist humans in everyday tasks have the potential to improve our quality of life and bring independence to persons with disabilities. A key challenge in realizing such robots is to program them to meet the unique and changing needs of users and to robustly function in their unique environments. Previous research has had limited success by attempting to preprogram universal or adaptive capabilities, because it is extremely difficult to anticipate all possible scenarios. Instead, our goal is to develop robots that can be programmed by the end-users after they are deployed in their context of use. To that end, our research seeks to apply techniques from the broad area of End-User Programming (EUP) to robotics. In this talk, I present recent work from the Human-Centered Robotics Lab that employs techniques such as Programming by Demonstration, Program Visualization, and Visual Programming to intuitively program robots to perform useful tasks involving object manipulation and tool-use.

    • 1:55-2:15: Robot Programming by Demonstration with Crowdsourcing, Maxwell Forbes

      Programming by Demonstration (PbD) can allow end-users to teach robots new actions simply by demonstrating them. However, learning generalizable actions requires a large number of demonstrations that is unreasonable to expect from end-users. We explore the idea of using crowdsourcing to collect action demonstrations from the crowd. We propose a PbD framework in which the end-user provides an initial seed demonstration, and then the robot searches for scenarios in which the action will not work and requests the crowd to fix the action for these scenarios. We use instance-based learning with a simple yet powerful action representation that allows an intuitive visualization of the action. Crowd workers directly interact with these visualizations to fix them. We demonstrate the utility of our approach with a user study involving local crowd workers (N=31) and analyze the collected data and the impact of alternative design parameters.

    • 2:15-2:35: DART: Dense Articulated Real-Time Tracking, Tanner Schmidt (PDF slides)

      We have developed DART, a general framework for tracking articulated objects, such as human bodies, human hands, and robots, with RGB-D sensors. We took a generative model approach, where the model is an extension of the recently popular signed distance function representation to articulated objects. Articulated poses are estimated via gradient descent on an error function which combines a standard articulated ICP formulation with additional terms which penalize violation of apparent free space and model self-intersection. Importantly, all error terms are trivially parallelizable and are optimized on a GPU, allowing for real-time performance while tracking many degrees of freedom. The practical applicability of the fast and accurate tracking provided by DART has been demonstrated in a robotics application in which live estimates of robot hands and of a target object are used to plan and execute grasps.
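      A toy version of the "gradient descent on a signed distance function" idea (rigid, 2-D, and entirely illustrative; DART fits full articulated models on a GPU): fit a circle's center to observed surface points by descending the summed squared signed distance.

```python
import math

def fit_center(points, radius, steps=300, lr=0.1):
    cx, cy = 1.5, 2.5                      # initial guess, e.g. last frame's pose
    for _ in range(steps):
        gx = gy = 0.0
        for px, py in points:
            dx, dy = px - cx, py - cy
            dist = math.hypot(dx, dy)
            sdf = dist - radius            # signed distance of the point to the model
            # d(sdf^2)/d(center) = 2 * sdf * (center - p) / |p - center|
            gx += 2.0 * sdf * (-dx / dist)
            gy += 2.0 * sdf * (-dy / dist)
        cx -= lr * gx / len(points)        # averaged gradient step
        cy -= lr * gy / len(points)
    return cx, cy

# Noise-free "depth" observations on a unit circle centered at (2, 3).
obs = [(2 + math.cos(2 * math.pi * k / 12), 3 + math.sin(2 * math.pi * k / 12))
       for k in range(12)]
cx, cy = fit_center(obs, radius=1.0)
```

      The per-point error terms are independent, which is why the full system can evaluate them in parallel on a GPU; the articulated case adds joint parameters and the free-space and self-intersection penalties described above.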

Session III

  • Human Computer Interaction (CSE 305)

    • 2:40-2:45: Introduction and Overview, Jeffrey Heer
    • 2:45-3:05: Declarative Interaction Design for Data Visualization, Arvind Satyanarayan

      Declarative visualization grammars can accelerate development, facilitate retargeting across platforms, and allow language-level optimizations. However, existing declarative visualization languages are primarily concerned with visual encoding, and rely on imperative event handlers for interactive behaviors. In response, we introduce a model of declarative interaction design for data visualizations. Adopting methods from reactive programming, we model low-level events as composable data streams from which we form higher-level semantic signals. Signals feed predicates and scale inversions, which allow us to generalize interactive selections at the level of item geometry (pixels) into interactive queries over the data domain. Production rules then use these queries to manipulate the visualization’s appearance. To facilitate reuse and sharing, these constructs can be encapsulated as named interactors: standalone, purely declarative specifications of interaction techniques. We assess our model's feasibility and expressivity by instantiating it with extensions to the Vega visualization grammar. Through a diverse range of examples, we demonstrate coverage over an established taxonomy of visualization interaction techniques.
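      The event-stream-to-signal pipeline can be sketched in a few lines (a hypothetical miniature, not Vega's actual API): a low-level event updates a source signal, a scale inversion derives a semantic signal in data units, and a predicate turns it into a selection.

```python
class Signal:
    """Minimal reactive signal: a named value whose derived signals recompute
    whenever an upstream value changes. Illustrative only."""
    def __init__(self, name, value=None):
        self.name, self.value, self._children = name, value, []

    def derive(self, name, fn):
        child = Signal(name, None if self.value is None else fn(self.value))
        self._children.append((child, fn))
        return child

    def set(self, value):
        self.value = value
        for child, fn in self._children:
            child.set(fn(value))               # propagate downstream

# Pixel-level event stream -> semantic signal via a scale inversion
# (assume 5 px per data unit).
mousex = Signal("mousex", 0)
brush_end = mousex.derive("brush_end", lambda px: px / 5.0)

mousex.set(150)                  # a mousemove event updates the source signal
threshold = brush_end.value      # now in data units, not pixels
selected = [v for v in [12.0, 28.5, 31.0, 44.2] if v <= threshold]
```

      Because the selection is expressed over the data domain rather than pixels, the same interactor specification can be reused across differently sized or scaled views.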

    • 3:05-3:25: Unlocking Interaction by Reverse Engineering Graphical Interfaces, Morgan Dixon

      Interface tools have enabled the desktop computer, mobile interfaces, and nearly every application used today. Unfortunately, these tools have also created a monopoly in human-computer interaction. This monopoly stifles the impact of computing because there is a long tail of needs that no single institution can address with general-purpose software. Computers should be able to help every type of person in every possible scenario.

      I envision a transformative democratization of every aspect of human-computer interaction, enabled by new tools that unlock interaction and allow anybody to modify any interface of any application. For example, a human-computer interaction researcher developing a new interaction technique might evaluate it in several real-world applications (e.g., Adobe Photoshop, Apple iTunes, Microsoft Office). A practitioner or hobbyist who sees the researcher’s prototype might then add the technique to several of their favorite applications. Web communities might organize around causes, such as translating interfaces into new languages, improving the accessibility of applications, or updating interfaces to better support ink, gestures, speech, and other advanced interactions.

      Unlocking interaction is difficult because of the rigidity and fragmentation of existing tools. Developers who create new ideas generally find it difficult or impossible to add their ideas to existing applications. In addition, people generally use a wide variety of applications implemented with multiple underlying toolkits. Adding flexibility to any one application is therefore insufficient for techniques that need to work across an entire desktop. My talk explores new advances in Prefab, a system I built to overcome this rigidity and fragmentation.

    • 3:25-3:45: Gesture Script: Recognizing Gestures and their Structure using Rendering Scripts and Interactively Trained Parts, James Fogarty (PDF slides)

      Gesture-based interactions have become an essential part of the modern user interface. However, it remains challenging for developers to create gestures for their applications. This talk is based on a paper that studies unistroke gestures, an important category of gestures defined by their single-stroke trajectories. We present Gesture Script, a tool for creating unistroke gesture recognizers. Gesture Script enhances example-based learning with interactive declarative guidance through rendering scripts and interactively trained parts. The structural information from the rendering scripts allows Gesture Script to synthesize gesture variations and generate a more accurate recognizer that also automatically extracts gesture attributes needed by applications. The results of our study with developers show that Gesture Script preserves the threshold of familiar example-based gesture tools, while raising the ceiling of the recognizers created in such tools.

  • Mobile Sensing and Interaction (CSE 403)

    • 2:40-2:45: Introduction and Overview, Shwetak Patel
    • 2:45-3:05: SideSwipe: Using GSM Signals for Mobile Gesture Interaction, Chen Zhao

      Current smartphone inputs are limited to physical buttons, touchscreens, cameras or built-in sensors. These approaches either require a dedicated surface or line-of-sight for interaction. We introduce SideSwipe, a novel system that enables in-air gestures both above and around a mobile device. Our system leverages the actual GSM signal to detect hand gestures around the device. We developed an algorithm to convert the discrete and bursty GSM pulses to a continuous wave that can be used for gesture recognition. Specifically, when a user waves their hand near the phone, the hand movement disturbs the signal propagation between the phone’s transmitter and added receiving antennas. Our system captures this variation and uses it for gesture recognition. To evaluate our system, we conduct a study with 10 participants and present robust gesture recognition with an average accuracy of 87.2% across 14 hand gestures.

    • 3:05-3:25: SwitchBack: Improving Interaction with Mobile Devices, Alex Mariakakis (PDF slides)

      Smartphones and tablets are often used in dynamic environments that force users to break focus and attend to their surroundings, creating a form of “situational impairment.” Current mobile devices have no ability to sense when users divert or restore their attention, let alone provide support for resuming tasks. We therefore introduce SwitchBack, a system that allows mobile device users to resume tasks more efficiently. SwitchBack is built upon Focus and Saccade Tracking (FAST), which uses the front-facing camera to determine when the user is looking and how their eyes are moving across the screen. In a controlled study, we found that FAST can identify how many lines the user has read in a body of text within a mean absolute percent error of just 3.9%. We then tested SwitchBack in a dual focus-of-attention task, finding that SwitchBack improved average reading speed by 7.7% in the presence of distractions.

    • 3:25-3:45: Powering Wireless Sensors using Ambient Temperature Changes, Chen Zhao

      Power remains a challenge in the widespread deployment of long-lived wireless sensing systems, which has led researchers to consider power harvesting as a potential solution. In this paper, we present a thermal power harvester that utilizes naturally changing ambient temperature in the environment as the power source. In contrast to traditional thermoelectric power harvesters, our approach does not require a spatial temperature gradient; instead it relies on temperature fluctuations over time, enabling it to be used freestanding in any environment in which temperature changes throughout the day. By mechanically coupling linear motion harvesters with a temperature-sensitive bellows, we show the capability of harvesting up to 21 mJ of energy per cycle of temperature variation within the range 5°C to 25°C. We also demonstrate the ability to power a sensor node, transmit sensor data wirelessly, and update a bistable E-ink display after as little as a 0.25°C ambient temperature change.
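      For a sense of scale, 21 mJ per temperature swing is a useful budget for a duty-cycled radio. A back-of-the-envelope check with hypothetical radio numbers (only the 21 mJ figure comes from the talk; the power and timing values are assumptions):

```python
harvest_per_cycle_j = 0.021   # 21 mJ per full temperature swing (from the talk)
tx_power_w = 0.030            # hypothetical low-power radio draw while transmitting
tx_time_s = 0.005             # hypothetical 5 ms airtime per packet

energy_per_packet_j = tx_power_w * tx_time_s          # 150 microjoules/packet
packets_per_cycle = round(harvest_per_cycle_j / energy_per_packet_j)
```

      Under these assumptions one temperature cycle funds on the order of a hundred short transmissions, which is why a single daily swing can sustain a sparse sensor report schedule.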

  • Robotics Session 2 (CSE 691)

    • 2:40-2:45: Introduction and Overview, Maya Cakmak
    • 2:45-3:05: The Satellite Sensor Platform, a Robot's Good Friend, James Youngquist (PDF slides)

      Flying quadrotors are much more agile than ground-based mobile robots. We are exploring the use of agile camera-carrying aerial robots as “companions” for slower ground-based Personal Robots. Flying robots could be used to quickly scout around corners or look up stairs, saving time for the slower moving ground-based robot.

      We are also exploring the use of these aerial robotic cameras in the context of robotic manipulation. Flying cameras are less geometrically constrained than the sensors that are physically built in to the main robot. Thus the flying camera can collect views of the back side of the object, which the robot would typically be unable to see because of occlusion. We present a small and agile quadrotor, and describe our strategy for using it in the context of manipulation: it augments the main robot's point cloud by providing point cloud data from arbitrary viewpoints, to cast light on the shadow. These satellite sensors are controlled by the high-capability robot along trajectories to maximize information gain. The initial implementation relies on a structure from motion (SfM) reconstruction of scene geometry from an onboard video feed.

    • 3:05-3:25: Optical Pre-touch Sensing for Robotic Grasping, Di Guo & Patrick Lancaster

      Robotic grasping has been hindered by the inability of robots to perceive unstructured environments. Because these environments can be complex or dynamic, it is important to obtain additional and precise sensing information just before grasping. We expand upon the pre-touch modality by introducing a transmissive optical sensor. It can unambiguously indicate the presence or absence of objects between the robot's fingers. A wide variety of items that other sensors fail to sense, such as extremely soft or shiny objects, can be detected by this sensor. The sensor is fully integrated into the fingertips of the PR2 robotic platform. Because the sensor is part of the robot's arm, it is always at a known location, and so the robot can use proprioception to determine the location of the material detected by the sensor. Several experiments are conducted to verify the sensor's utility in both environment perception and robotic grasping. It is shown that the perception information supplied by the sensor facilitates effective robotic grasping.

    • 3:25-3:45: Synthesis of contact-rich behaviors with optimal control, Emo Todorov

      Animals and machines interact with their environment mainly through physical contact. Yet the discontinuous nature of contact dynamics complicates planning and control, especially when combined with uncertainty. We have recently made progress in terms of optimizing complex movements that involve many contact events. These events do not need to be specified in advance, but instead are discovered by the optimizer fully automatically. Key to our success is the development of new models of contact dynamics, which help the optimizer avoid a combinatorial search over contact configurations. We can presently synthesize movements such as getting up from the floor, walking, running, and manipulating objects. While most of this work is done in physically realistic simulation, some of our results are already being applied to physical robots.

Session IV

  • Feedback session (CSE 305)

    • 4:05-5:00: Feedback/discussion: John Zahorjan

      We try to prepare our students for their lives after university; you see how they do. We'd like your feedback, as one source of input that helps us improve our program. For instance:

      • When you think of UW CSE students in general, what do they seem strong at and in what areas do you wish they had more experience?
      • When one of our students applies to you and you decide not to interview, or you do interview but decide not to make an offer, are there deficiencies that might be addressed by our program?
      • We recently redesigned the undergraduate curriculum to reduce the core requirements, giving students greater flexibility in choosing areas of emphasis for their studies and more time to go deeper on selected topics. Has this change been evident in the students you're seeing? If so, how is it working out in practice?
      • When you think about all recent graduates you see, are there ways that computer science and engineering education generally fails to meet current needs? Do you see ways in which your needs are evolving to which universities might be slow to adapt?
      • How are UW CSE graduates you know doing 5 or 10 years post-graduation?
  • Molecular programming and synthetic biology (CSE 403)

    • 3:50-3:55: Introduction and Overview, Georg Seelig
    • 3:55-4:15: Aquarium: Programmable Wetlab, Tileli Amimeur

      Researchers are making advances in the areas of molecular and synthetic biology towards applications in personalized medicine, cancer research, and renewable energy/industrial resources. In the coming years, we will begin to see new technologies and markets for this research steadily emerge. However, despite this progress, biological research is severely hindered by a fundamental inability to communicate and reproduce laboratory research methods. For example, knowledge is often transferred via a master-apprentice style of learning and communication, significantly limiting the reach and impeding the vast potential of biological research.

      Aquarium is a web-based environment for specifying workflows and managing laboratory operations in an easy-to-share, computer-encoded manner that both humans and machines can interpret. It allows users to design their own libraries of coded methods in online repositories. The result is a suite of tools designed for high-throughput experimentation and reproducibility in the wetlab through formal representation of laboratory knowledge.

    • 4:15-4:35: Combining Synthetic Biology with Lessons from Big Data, Alex Rosenberg

      Understanding how gene expression is programmed into the DNA sequences in our genomes is a central objective in human genetics. While challenging, the task of unraveling a 3-billion-base code is not completely unprecedented. Over the past decade, computer scientists working in natural language processing have made immense progress using algorithms that learn from enormous data sets. Inspired by this success of "big data" in traditional machine learning areas, we have applied synthetic biology to generate massive datasets profiling the biological function of many different DNA sequences. As a proof of principle, we have measured the alternative RNA splicing patterns of over 250,000 fully synthetic sequences — an order of magnitude more than exist in the natural genome. From these data, we have built a predictive sequence model of alternative splicing that outperforms state-of-the-art algorithms.
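      As a toy analogue of learning sequence-to-function models from a large synthetic library, one can one-hot encode short sequences and fit a simple logistic model. Everything below, the "regulatory base" label, the sequence length, and the model, is invented for illustration; the actual splicing model is far more sophisticated:

      ```python
      import math
      import random

      BASES = "ACGT"
      random.seed(0)

      def one_hot(seq):
          # Flat binary feature vector: 4 indicators per sequence position
          v = [0.0] * (4 * len(seq))
          for i, b in enumerate(seq):
              v[4 * i + BASES.index(b)] = 1.0
          return v

      # Toy synthetic library: label 1 if a (hypothetical) regulatory
      # base G sits at position 3, mimicking a sequence-function dataset
      seqs = ["".join(random.choice(BASES) for _ in range(8)) for _ in range(400)]
      y = [1.0 if s[3] == "G" else 0.0 for s in seqs]
      X = [one_hot(s) for s in seqs]

      # Logistic regression trained by plain batch gradient descent
      w = [0.0] * 32
      for _ in range(300):
          grad = [0.0] * 32
          for xi, yi in zip(X, y):
              p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
              for j, xj in enumerate(xi):
                  grad[j] += (p - yi) * xj
          w = [wj - 0.1 * gj / len(y) for wj, gj in zip(w, grad)]
      ```

      The learned weights concentrate on the informative position, which is the basic mechanism by which a model trained on a quarter-million synthetic sequences can recover splicing-relevant sequence features.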

    • 4:35-4:55: How a Single Bacterial Protein Could Transform Medicine and Biotechnology: Genome Engineering with CRISPR/Cas, Nicholas Bogard

      CRISPR/Cas is a bacterial immune system capable of recognizing and destroying invading viruses. The system relies on "programmable" molecular guide sequences that can recognize DNA molecules unique to the invading virus and activate a mechanism that results in the destruction of the viral DNA. Recently, this system has been adapted to work in mammalian cells, where it is used as a highly specific mechanism for modifying genomic DNA. CRISPR/Cas provides the most promising example yet of a reliable and programmable method for modifying DNA in human and animal cells and will likely revolutionize biotechnology and genomic medicine.

  • Wireless and Sensing Innovations (CSE 691)

    • 3:50-3:55: Introduction and Overview, Shyam Gollakota
    • 3:55-4:10: Zero-Power Wi-Fi connectivity, Bryce Kellogg

      In this talk, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity at no power! We build a hardware prototype and demonstrate the first communication link between battery-free devices and commodity Wi-Fi devices. We believe that this new capability can pave the way for the rapid deployment and adoption of battery-free devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.

    • 4:10-4:25: Transforming Wi-Fi into a Camera, Rajalakshmi Nandakumar

      Is it possible to leverage Wi-Fi signals to create images of objects and humans? Given the ubiquity of Wi-Fi signals, a positive answer would allow us to localize static humans even when they do not carry any wireless devices, thus enabling pervasive home sensing. It would also enable new applications such as inventory localization — objects such as carts can be tracked without either the need for tagging them with RF sources or the burden of installation and cost. In this talk, we will demonstrate the feasibility of transforming Wi-Fi into a camera.

    • 4:25-4:40: A wireless, battery-free camera, Saman Naderiparizi

      Power is a key problem for the Internet of Things. Without a solution to the power problem, the Internet of Things could turn out to be the Internet of Dead Batteries. It will not be possible to change the batteries on a trillion internet-connected devices. Fortunately, the energy efficiency of electronics has improved by a factor of one trillion since the first electronic computers were built in the 1940s. In recent years, this has enabled us to build accelerometers and other simple sensors that are powered at long range by radio waves. In this talk, we present what we believe is the world's first RF-powered camera, which can take one picture every few minutes. This update rate is compatible with many applications such as metering, inspection, structural health monitoring, and surveillance. We will describe the camera and potential applications.
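      The "one picture every few minutes" update rate follows from simple energy-budget arithmetic: the device trickle-charges harvested RF energy until it has enough for one frame. The numbers below are illustrative guesses, not figures from the talk:

      ```python
      # Illustrative energy budget for an RF-powered camera (made-up numbers)
      harvested_power_w = 50e-6   # assume ~50 microwatts harvested at range
      frame_energy_j = 10e-3      # assume ~10 millijoules to capture and send one frame

      # Time to accumulate enough energy for one frame
      seconds_per_frame = frame_energy_j / harvested_power_w  # 200 s, i.e. a few minutes
      ```

      Any real figure depends on range, antenna design, and image size, but the shape of the trade-off (harvested power versus per-frame energy) is what sets the update rate.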

    • 4:40-4:55: Wirelessly charged robots, Ben Waters

      For robots to be truly autonomous, they must be able to "feed" themselves. Mechanical docking stations for recharging are unreliable and require maintenance: the contacts get dirty and fail. In response to requests from several different manufacturers of mobile robots, we have developed wireless charging systems that can reliably recharge mobile robots ranging from tiny Roomba vacuum cleaners to hotel service robots to industrial warehouse automation robots. In this talk, we present our adaptive wireless power system designed to recharge robots in the 10-1000 W power range. We will also describe a startup company, WiBotic, that aims to commercialize this technology.