Break-out Topics and Talks

Wednesday, October 23, 2013

Session I
11:15am - 12:20pm
Big Data Analytics as a Service
CSE 305
Programming Languages & Software Engineering Session
CSE 403
Reconstructing and Exploring the World, Inside and Out
CSE 691
1:00 - 1:30pm Keynote Talk: Interactive Data Analysis
Atrium  
Session II
1:30 - 2:35pm
Systems and Networking 1
CSE 305
Wireless and Ubicomp Session #1
CSE 403
Center for Game Science Session
CSE 691
Session III
2:40 - 3:45pm
Systems and Networking 2
CSE 305
Security Session
CSE 403
Interactions with/on Mobile Devices
CSE 691
Session IV
3:50 - 4:55pm
Feedback Session
CSE 305
HCI @ UW
CSE 403
Wireless and Ubicomp Session #2
CSE 691
5:00 - 7:00pm RECEPTION AND LAB TOURS (WITH POSTERS AND DEMOS)
various labs and locations around the building  
7:15 - 7:45pm Program: Madrona Prize, People's Choice Awards
Microsoft Atrium  


Please check back for updates. Last updated 29 October 2013.

 


 

Session I

  • Big Data Analytics as a Service (CSE 305)

    • 11:15-11:20: Introduction and Overview, Dan Halperin
    • 11:20-11:40: Communication Steps for Parallel Query Processing, Paris Koutris (PDF slides)

      We consider the problem of computing a relational query on a large database using a large number of servers. The computation is performed in rounds, and each server can receive only a limited amount of data, controlled by a replication parameter. In this setting, we examine how many global communication steps are needed to compute the query. For the case of one communication round, we establish lower bounds, and then present a 1-round algorithm that performs optimally according to our bound. For multiple rounds, we show a tradeoff between the number of rounds and the amount of replication needed. An important implication of our results is that transitive closure cannot be computed in a constant number of rounds.
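
      As a toy illustration of the one-round setting (our own sketch, not the authors' algorithm; the relations and server count are made up), a two-way join can be computed in a single communication step by hashing each tuple's join key to a server:

```python
# Toy sketch of a 1-round distributed join R(a,b) JOIN S(b,c):
# each tuple is routed to the server owning hash(join key), so one
# communication step suffices and every match meets on one server.

P = 4  # number of servers (hypothetical)

def route(tuples, key_index):
    """Partition tuples across P servers by hashing the join key."""
    servers = [[] for _ in range(P)]
    for t in tuples:
        servers[hash(t[key_index]) % P].append(t)
    return servers

def local_join(r_part, s_part):
    """Each server joins only the tuples it received."""
    out = []
    for (a, b) in r_part:
        for (b2, c) in s_part:
            if b == b2:
                out.append((a, b, c))
    return out

R = [(1, 'x'), (2, 'y'), (3, 'x')]
S = [('x', 10), ('y', 20)]

# One communication round: route both relations on attribute b.
r_parts, s_parts = route(R, 1), route(S, 0)
result = [t for i in range(P) for t in local_join(r_parts[i], s_parts[i])]
print(sorted(result))
```

      Routing both relations on the shared attribute guarantees that every matching pair meets on exactly one server; the multi-round tradeoff discussed in the talk arises for queries that cannot be partitioned this cleanly.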

    • 11:40-12:00: Personalized Service Level Agreements in the Cloud, Jennifer Ortiz (PDF slides)

      Today's pricing models and SLAs in the Cloud are described at the level of compute resources (instance-hours or gigabytes processed), and they may vary from one service to the next. Both conditions make it difficult for users to select a service, pick a configuration, and predict the actual analysis cost. To address this challenge, we propose a new abstraction, called a Personalized Service Level Agreement, in which users are presented with what they can do with their data in terms of query capabilities, guaranteed query performance, and fixed hourly prices.

    • 12:00-12:20: MyriaX: the Myria backend execution engine, Shengliang Xu (PDF slides)

      MyriaX is the Myria backend execution system, built from scratch. It has a relational data model with operators as its basic processing units, and it streams data between operators rather than blocking, for high processing performance. Iterative data processing, known as one of the core requirements for scientific computing, is supported natively. The system is still in development, but extensive benchmarking and experimentation on MyriaX have shown that the system is highly efficient.

  • Programming Languages & Software Engineering Session (CSE 403)

    • 11:15-11:20: Introduction and Overview, Dan Grossman
    • 11:20-11:40: Interactive Record/Replay for Debugging Web Applications, Brian Burg (PDF slides)

      During debugging, a developer must repeatedly and manually reproduce errant behavior in order to inspect different facets of the program’s execution. Existing behavior reproduction tools support replay via linear pause/play controls, but none is designed for interactive, random-access exploration during debugging tasks.

      We present Timelapse, a developer tool and record/replay infrastructure for disseminating and debugging behaviors in web applications. Timelapse is integrated with the browser and introduces negligible runtime overhead when capturing behaviors. Developers can use Timelapse’s user interface to interactively browse, visualize, and seek through recordings while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use it to demonstrate failures in situ and share captured recordings with developers, improving bug report quality by obviating the need for detailed reproduction steps. Together, our tool and infrastructure support systematic bug reporting and debugging practices.

      Bio: Brian Burg is a Ph.D. student in the Computer Science & Engineering Department at the University of Washington. He is advised by Michael Ernst and Andrew Ko. He is interested in tools, techniques, and designs that help programmers understand and debug software more effectively.

    • 11:40-12:00: Input-Covering Schedules for Multithreaded Programs, Tom Bergan

      We propose constraining multithreaded execution to small sets of input-covering schedules, which we define as follows: given a program P, we say that a set of schedules S covers all inputs of program P if, when given any input, P's execution can be constrained to some schedule in S and still produce a semantically valid result.

      Our approach is to first compute a small S for a given program P, using symbolic execution, and then, at runtime, constrain P's execution to always follow some schedule in S, and never deviate. Our approach has the following advantage: because all possible runtime schedules are known a priori, we can seek to validate the program by thoroughly verifying each schedule in S, in isolation, without needing to reason about the huge space of thread interleavings that arises due to conventional nondeterministic execution.
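
      As a loose sketch of what "constraining execution to a schedule" can mean (our own simplified illustration, not the paper's runtime system), each thread can be made to wait until a predetermined schedule says it is its turn:

```python
import threading

class ScheduleEnforcer:
    """Force threads to execute their steps in one predetermined order."""
    def __init__(self, schedule):
        self.schedule = schedule  # list of thread names, e.g. ['A', 'B', 'A']
        self.pos = 0
        self.cv = threading.Condition()

    def step(self, name, action):
        with self.cv:
            # Block until the schedule says it is this thread's turn.
            self.cv.wait_for(lambda: self.schedule[self.pos] == name)
            action()
            self.pos += 1
            self.cv.notify_all()

log = []
enforcer = ScheduleEnforcer(['A', 'B', 'A'])

def worker(name, steps):
    for s in steps:
        enforcer.step(name, lambda s=s: log.append(s))

ta = threading.Thread(target=worker, args=('A', ['a1', 'a2']))
tb = threading.Thread(target=worker, args=('B', ['b1']))
ta.start(); tb.start(); ta.join(); tb.join()
print(log)  # always ['a1', 'b1', 'a2'], regardless of OS scheduling
```

      Because the interleaving is fixed in advance, a verifier only needs to reason about the schedules in the set, not the full space of possible interleavings.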

    • 12:00-12:20: Securing Software via Design and Proof, Zach Tatlock

      Web browsers mediate access to valuable private data in domains ranging from health care to banking. Despite this critical role, attackers routinely exploit vulnerabilities in browser implementations to exfiltrate private data and take over the underlying system. I'll present Quark, a browser whose kernel has been implemented and verified using Coq, a proof assistant that allows us to construct machine-checkable proofs. In particular, I'll discuss how we specify the correctness of the Quark kernel, prove that the implementation satisfies the specification, and finally show that the specification implies several security properties, including tab non-interference, cookie integrity and confidentiality, and address bar integrity.

  • Reconstructing and Exploring the World, Inside and Out (CSE 691)

    • 11:15-11:20: Introduction and Overview, Brian Curless
    • 11:20-11:40: Real-time 3D Dense Reconstruction with Commodity Hardware, Richard Newcombe

      We appear to be on the verge of a transformation in how we capture the world around us! Affordable and easy-to-use real-time 3D reconstruction in unstructured environments has the potential to give our smart phones, games consoles and future household robots a sense of their spatial environment and goes some way in bridging the gap between the physical and virtual worlds we live in.

      A core problem that must be solved to obtain such live 3D reconstructions is the real-time 'Simultaneous Localisation and Mapping' (SLAM) problem: the need to jointly estimate where the camera or cameras are in the environment together with the environment's 3D shape.

      While a number of solutions to the SLAM problem have existed for decades, in recent years a rapid rise in commodity computing power, together with the availability of cheap depth cameras, has made the technology accessible to non-experts by enabling more robust SLAM algorithms that go beyond sparse point-cloud representations of the world to build dense surface models of the environment in real-time.

      I will talk about how these key commodity hardware developments, together with algorithmic advances in dense SLAM, have given us the ability to build 3D models of the environment in real-time, enabling applications ranging from affordable 3D object capture in unstructured environments to augmented reality with rich interactions between the real and virtual worlds and improved robot navigation.

    • 11:40-12:00: The Visual Turing Test for Scene Reconstruction, Qi Shan

      We present the first large-scale system for capturing and rendering relightable scene reconstructions from massive unstructured photo collections taken under different illumination conditions and viewpoints. We combine photos taken from many sources, including Flickr-based ground-level imagery, oblique aerial views, and street view, to recover models that are significantly more complete and detailed than previously demonstrated. We demonstrate the ability to match both the viewpoint and illumination of arbitrary input photos, enabling a Visual Turing Test in which photo and rendering are viewed side-by-side and the observer has to guess which is which. While we cannot yet fool human perception, the gap is closing.

    • 12:00-12:20: 3D Wikipedia: Using Online Text to Automatically Label and Navigate Reconstructed Geometry, Bryan Russell (PDF slides)

      We introduce an approach for analyzing Wikipedia and other text, together with online photos, to produce annotated 3D models of famous tourist sites. The approach is completely automated, and leverages online text and photo co-occurrences via Google Image Search. It enables a number of new interactions, which we demonstrate in a new 3D visualization tool. Text can be selected to move the camera to the corresponding objects, 3D bounding boxes provide anchors back to the text describing them, and the overall narrative of the text provides a temporal guide for automatically flying through the scene to visualize the world as you read about it. We show compelling results on several major tourist sites.

Session II

  • Systems and Networking 1 (CSE 305)

    • 1:30-1:35: Introduction and Overview, Arvind Krishnamurthy
    • 1:35-1:55: Simplifying Mobile/Cloud Applications with Sapphire, Adriana Szekeres

      This paper describes the motivation, architecture, and experience with Sapphire, an extensible programming system for simplifying the creation of mobile/cloud applications. Sapphire provides an integrated, distributed programming environment that bridges mobile client devices and cloud servers. A key feature of Sapphire is its separation of application logic from distributed system management and deployment logic; the programmer focuses on application-specific tasks, while a middleware management layer provides flexible support for distribution-specific tasks such as replication, caching, and failure recovery. Developers can therefore change deployment decisions without changes to their application code. The paper presents our experience building Sapphire applications and evaluates the benefits of the Sapphire approach.

    • 1:55-2:15: Improving Distributed Systems using Approximate Synchrony in Datacenter Networks, Dan Ports

      Distributed systems are traditionally designed independently from the underlying network, making worst-case assumptions about its behavior. Such an approach is well-suited for the Internet, where one cannot predict what paths messages might take or what might happen to them along the way. However, many distributed applications are today deployed in datacenters, where the network is more reliable, predictable, and extensible. We argue that in these environments, it is possible to co-design distributed systems with their network layer, and doing so can offer substantial benefits.

      I'll describe our recent work that uses this approach to improve state machine replication protocols, which are important both as the standard mechanism for ensuring availability of critical datacenter services and as the basis for distributed storage systems. Our approach is to use network-level techniques to provide a Mostly-Ordered Multicast primitive (MOM) with a best-effort ordering property for concurrent multicast operations. We use this primitive to build Speculative Paxos, a new replication protocol that relies on the network to order requests in the normal case, while still remaining correct if messages are delivered out of order. The results are effective: Speculative Paxos provides substantially higher throughput and lower latency than the standard Paxos protocol.

    • 2:15-2:35: Improving Power Efficiency Using Sensor Hubs Without Re-Coding Mobile Apps, Haichen Shen (PDF slides)

      In this talk, I describe MobileHub, a system that shows how unmodified mobile applications can be translated to seamlessly use sensor hubs. Sensor hubs are low-power hardware that perform sensing tasks autonomously, letting the CPU stay idle for longer. However, it is difficult for third-party applications to take advantage of them. MobileHub automatically translates applications to use the sensor hub, significantly improving power consumption. The key to our approach is to use data and control information flow tracking to learn how applications use sensor data. Based on the sensor usage pattern, we rewrite the application binary to perform efficient sensor processing on the sensor hub. We built a prototype and experimented with three applications downloaded from the Android marketplace. The results show that MobileHub achieves power gains of up to 80%.

  • Wireless and Ubicomp Session #1 (CSE 403)

    • 1:30-1:35: Introduction and Overview, Shyamnath Gollakota
    • 1:35-1:55: WiSee: Whole-Home Gesture Recognition Using Wireless Signals, Sidhant Gupta

      The last two decades have seen an exponential proliferation of Wi-Fi devices. Wi-Fi capability today is incorporated into a diverse set of devices such as smart phones, gaming consoles, and video players. In this talk, we will show how to leverage the ubiquity of Wi-Fi to enable rich sensing capabilities such as gesture recognition. Specifically, we will present WiSee, a novel gesture recognition system that leverages wireless signals to enable whole-home sensing and recognition of human gestures. Since wireless signals do not require line-of-sight and can traverse walls, WiSee can enable whole-home gesture recognition using only a few wireless sources. Further, it achieves this goal without requiring instrumentation of the human body with sensing devices.

    • 1:55-2:15: Ambient Backscatter: Battery-Free Communication, Aaron Parks (PDF slides)

      We will present the design of a communication system that enables two devices to communicate using ambient RF as the only source of power. Our approach leverages existing TV and cellular transmissions to eliminate the need for wires and batteries, thus enabling ubiquitous communication where devices can communicate among themselves at unprecedented scales and in locations that were previously inaccessible. To achieve this, we introduce ambient backscatter, a new communication primitive where devices communicate by backscattering ambient RF signals. Our design avoids the expensive process of generating radio waves; backscatter communication is orders of magnitude more power-efficient than traditional radio communication. Further, since it leverages the ambient RF signals that are already around us, it does not require a dedicated power infrastructure, as in traditional backscatter communication.

    • 2:15-2:35: AllSee: Gesture Recognition For All Devices, Bryce Kellogg

      In this talk, we introduce AllSee, a novel gesture recognition system that consumes three to four orders of magnitude lower power than the state-of-the-art systems today. AllSee enables always-on gesture recognition on mobile devices such as smartphones and tablets. We build prototypes and demonstrate that our system can detect and classify a set of eight gestures with classification accuracies as high as 97%. We believe that these results take us closer to the vision of ubiquitous gesture interaction which can be used for computing devices, no matter how low-end and power-constrained they are.

  • Center for Game Science Session (CSE 691)

    • 1:30-1:43: Introduction and Overview, Zoran Popović
    • 1:43-1:56: Brain Points: A Growth Mindset Incentive Structure for Educational Games, Eleanor O'Rourke (PDF slides)

      There is a growing interest in leveraging video games to inspire children to achieve educational goals. Games have many features that make them well-suited for learning: they can adapt to meet individual needs, provide continual feedback as students learn, and offer rich and engaging reward structures. However, educational games are not uniformly effective, and little is known about how in-game rewards affect children's ideas about intelligence and success. In this talk, we will show that the effectiveness of educational games can be increased by fundamentally changing their incentive structures to promote the growth mindset, or the belief that intelligence is malleable. We present "brain points," a system that directly rewards the growth mindset by incentivizing effort, use of strategy, and incremental progress. Through a study of 15,000 children, we show that children's motivation and persistence in the educational game Refraction are improved through the introduction of this unorthodox incentive structure. The effectiveness of this intervention stems from showing children how to practice and develop growth-mindset behaviors, such as effort and use of strategy, through incentives.

    • 1:56-2:09: A Framework for Automation of Game Progression Design, Erik Andersen

      Creating levels and level progressions for computer games is challenging and time-intensive, and we still lack end-to-end automation of the entire content design process. We take a step towards full automation with a general framework that can rapidly generate and classify large numbers of levels with a wide range of difficulty, systematically introduce new game mechanics as the player masters them, and decrease complexity as necessary in order to provide remediation. We present results from 2,377 Refraction players showing that our automatically assembled progression of automatically generated levels can engage players for as long as the original, expert-crafted progression. Our system provides a natural way to improve the game with data and adapt to each player.

    • 2:09-2:23: Automatic Educational Experimentation with Games, Yun-En Liu (PDF slides)

      Running behavioral studies is often a slow and expensive process. This problem is especially acute in educational research, where finding willing participants is hard and clean experimental designs are difficult to achieve. Games and other online systems offer a new paradigm for educational research that can help alleviate these problems. By taking advantage of a much larger pool of relatively inexpensive users, and the system's ability to randomize players into experimental conditions, we can run complex, multi-level experiments automatically to determine the most effective educational interventions. I show how such a system, using data gathered from 45,000 players, is able to automatically experiment with different number line representations. In the process, we discover an uncommon method of presenting number lines that outperforms more traditional ones, showing the exploratory power of our automated method.
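
      The basic mechanic of randomizing players into conditions and aggregating their outcomes might be sketched as follows (hypothetical code; the condition names and outcome measure are illustrative, not the actual numberline variants from the study):

```python
import random
from collections import defaultdict

CONDITIONS = ['ticks_only', 'labeled_ticks', 'adaptive_zoom']  # illustrative

def assign_condition(player_id, conditions=CONDITIONS):
    """Deterministically randomize a player into one condition."""
    rng = random.Random(player_id)  # seeded so returning players keep their arm
    return rng.choice(conditions)

# Simulate logging one outcome (e.g., a post-test score) per player.
scores = defaultdict(list)
for player_id in range(1000):
    cond = assign_condition(player_id)
    outcome = random.random()  # stand-in for a measured learning gain
    scores[cond].append(outcome)

# Compare the mean outcome per condition to pick the best intervention.
means = {c: sum(v) / len(v) for c, v in scores.items()}
best = max(means, key=means.get)
```

      Seeding the assignment on the player ID keeps the randomization stable across sessions, which matters when the same player returns to the game repeatedly.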

    • 2:23-2:35: Data-driven Adaptivity for Educational Games, Travis Mandel (Slides)

      In educational video games it is often possible to provide a personalized experience for every player. This ability, if properly leveraged, could allow us to meet the needs of every student in a way that traditional education cannot. However, this adaptivity is traditionally designed by experts, and due to the extreme challenge of creating a good adaptive experience, may not be very effective. In this talk, I will discuss a data-driven machine learning approach which allows us to discover strong adaptive progressions instead of relying solely on human expertise. I will present results from a deployment of our adaptation strategies to 2,000 real students, showing how even in this challenging setting, we can use data to find an adaptive policy that outperforms expert and random baselines by over 30% on a metric of student achievement.

Session III

  • Systems and Networking 2 (CSE 305)

    • 2:40-2:45: Introduction and Overview, Arvind Krishnamurthy
    • 2:45-3:05: Arrakis: The Operating System is the Control Plane, Simon Peter (PDF slides)

      Recent device hardware trends enable a new approach to the design of network servers. In a traditional operating system, the kernel mediates access to device hardware by server applications, to enforce process isolation as well as network and disk security. We have designed and implemented a new operating system, Arrakis, that splits the traditional role of the kernel in two. Applications have direct access to virtualized I/O devices, allowing most I/O operations to skip the kernel entirely. The Arrakis kernel operates only in the control plane. In this talk, I describe the hardware and software changes needed to take full advantage of this new abstraction, and I illustrate its power by comparing latency and throughput for a few popular network server applications on Arrakis vs. a well-tuned Linux implementation.

    • 3:05-3:25: Demystifying Page Load Performance with WProf, Aruna Balasubramanian (Slides)

      In this talk, I will describe WProf, a system that identifies bottlenecks in the Web page load process. Many techniques and "best practices" have been proposed to make Web pages load faster, but they do not always work because they do not target the page load bottleneck. Identifying the bottleneck is tricky because the page load process involves several inter-dependent activities, and Web browsers complicate the situation with their own dependency policies. In this work, we extract the dependency policies imposed by four major browsers by systematically instrumenting test pages and by analyzing browser code. Using the extracted dependency policies, we build WProf, a system that can identify the bottleneck of any Web page at scale. We use WProf to show why caching may not help as much as it should and why optimizing object loads alone is not enough to speed up the Web.
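
      The bottleneck idea can be illustrated with a small sketch (our own simplification, not WProf itself): model page-load activities as a dependency graph with durations, and report the critical path, since only activities on that path limit the total load time:

```python
from functools import lru_cache

# Activity -> (duration in ms, activities it depends on). Illustrative page.
activities = {
    'html':    (50, []),
    'css':     (30, ['html']),
    'js':      (80, ['html']),
    'eval_js': (40, ['js']),
    'render':  (20, ['css', 'eval_js']),
}

@lru_cache(maxsize=None)
def finish_time(a):
    """Earliest time activity a can finish, given its dependencies."""
    dur, deps = activities[a]
    return dur + max((finish_time(d) for d in deps), default=0)

def critical_path(a):
    """Walk back through the dependency that finishes last."""
    _, deps = activities[a]
    if not deps:
        return [a]
    bottleneck = max(deps, key=finish_time)
    return critical_path(bottleneck) + [a]

page_load_time = finish_time('render')   # 50 + 80 + 40 + 20 = 190 ms
path = critical_path('render')           # ['html', 'js', 'eval_js', 'render']
```

      In this toy page, shrinking the CSS fetch would not change the load time at all, because CSS is not on the critical path; optimizations that miss the bottleneck do not help.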

    • 3:25-3:45: How speedy is SPDY?, Sophia Wang (PDF slides)

      SPDY is increasingly being used as an enhancement to HTTP/1.1. To understand its impact on performance, we conduct a systematic study of Web page load time (PLT) under SPDY compared to HTTP. We experiment in a controlled network setting and using a page load emulator tool we developed to remove the variability of browser computation while preserving dependencies in the page load process. To identify the factors that affect PLT, we experiment with simple, synthetic pages and complete page loads based on the top 200 Alexa sites. We find that SPDY provides a modest improvement over HTTP for most pages, improving median PLT by 7% for our lower bandwidth and higher RTT scenarios, and increases PLT by about 6% for roughly 20% of pages in the worst scenario. Most SPDY benefits stem from the use of a single TCP connection, but the same feature is also detrimental under high packet loss. The benefits vary significantly from page to page because of dependencies in the page load process and the effects of browser computation. We also find that request prioritization is of little help, while server push has good potential; we present a push policy based on dependencies that gives comparable performance to mod_spdy while sending much less data.

  • Security Session (CSE 403)

    • 2:40-2:45: Introduction and Overview
    • 2:45-3:10: Third-Party Web Tracking: Detection, Measurement, and Prevention, Franziska Roesner (PDF slides)

      Web tracking, the practice by which websites identify and collect information about users, has received increased attention in recent years. In third-party web tracking, this information is collected by websites ("trackers") other than those visited directly by the user; these trackers are embedded by host websites in the form of advertisements, social media widgets, or web analytics platforms. In this talk, I will describe a client-side method for detecting and classifying different types of third-party trackers based on how they manipulate browser state, and I will overview an extensive measurement study of tracking in the wild that we carried out using this method. I will then describe TrackingObserver, an extensible browser-based platform for tracking detection and prevention. TrackingObserver is based on our client-side detection method and provides a number of advantages over conventional blacklist-based anti-tracking tools. Finally, I will demonstrate how to use the TrackingObserver platform to create a variety of applications, such as a graph-based tracking visualization and a tracker blocking dashboard.

    • 3:10-3:35: The Security and Privacy of Home Automation Systems: A Case-study of Compact Fluorescent Lamps, Temitope Oluwafemi

      With a projected rise in the adoption of home automation systems, we experimentally investigate security risks that homeowners might be exposed to by compact fluorescent lamps (CFLs), where the lamps themselves have no network capabilities but are controlled by compromised Internet-enabled home automation systems. In this talk, I will discuss the vulnerabilities we discovered in two distinct Z-Wave home automation controllers. I will also discuss our experimental process for evaluating the risks homeowners may face from these compromised systems and a seemingly innocuous target: CFLs. I will analyze the results of our experiments, highlighting our findings about the possibility of physically harming homeowners through compromised home automation systems and CFLs.

    • 3:35-3:45: Helping Non-Experts Discover Security and Privacy Threats, Tamara Denning (PDF slides)

      The Security and Privacy Threat Discovery Cards are a tangible brainstorming toolkit that facilitates the exploration of potential security threats to a technological or information system. More broadly, the cards are intended to help develop a security mindset by exposing people to the broad spectrum of potential attacker motivations, the resources attackers might have at their disposal, the potential creativity of their attack methods, and the variety of negative impacts that system use or abuse can have on users and bystanders.

  • Interactions with/on Mobile Devices (CSE 691)

    • 2:40-2:45: Introduction and Overview, Gaetano Borriello
    • 2:45-3:05: SurfaceLink: Using Inertial and Acoustic Sensing to Enable Multi-Device Interaction on a Surface, Mayank Goel

      Using SurfaceLink, users can make natural surface gestures to control association and information transfer among a set of devices placed on a mutually shared surface (e.g., a table). SurfaceLink uses a combination of on-device accelerometers, vibration motors, speakers, and microphones (and, optionally, an off-device contact microphone for greater sensitivity) to sense gestures performed on the shared surface. In a controlled evaluation with 10 participants, SurfaceLink detected the presence of devices on the same surface with 97.7% accuracy, their relative arrangement with 89.4% accuracy, and various single- and multi-touch surface gestures with an average accuracy of 90.3%. A usability analysis showed that SurfaceLink has advantages over current multi-device interaction techniques in a number of situations.

    • 3:05-3:25: HandWave: Enabling Touch-Free Interaction on Mobile Devices, Krittika D'Silva (PDF slides)

      HandWave is a software library that enables touch-free interaction on a range of mobile devices. HandWave uses the built-in, forward-facing camera on a device and computer vision to recognize users' in-air gestures. Detected gestures can be used to replicate basic touchscreen functionality or alternatively mapped to higher-level action sequences. We evaluated HandWave through a controlled study that provides insight into the performance of touch-free interaction, finding that HandWave's touch-free gestures were intuitive and easy to learn, and participants were able to make gestures quickly and accurately enough to be useful for a variety of identified target applications. We also describe the programming effort required to integrate touch-free functionality into several popular mobile applications, and conclude that HandWave is a practical tool that can easily enable touch-free interaction on a variety of mobile devices.

    • 3:25-3:45: The HOPE Study: ODK Tables for Managing a Home-Based HIV Testing Program, Saloni Parikh (PDF slides)

      The HOPE Study, conducted by the Kenya Research Program at the University of Washington Department of Global Health, is a randomized control trial of home-based HIV testing and education for partners of pregnant mothers in Kisumu, Kenya. Couples are randomized to either home-based partner education and HIV testing (HPET) as part of routine pregnancy services or to standard ante-natal care (ANC). The intervention provides education and HIV testing to men in stable relationships with pregnant HIV-infected and HIV-uninfected women in order to improve overall health of women and infants, reduce risk for vertical transmission and horizontal transmission, and increase identification of men living with HIV and link them to care. Couples are followed up for uptake of HIV testing, facility delivery, exclusive breastfeeding and post-partum contraceptive use as well as linkage to HIV care. Cost-effectiveness of HPET is evaluated in order to inform future scale up of the intervention in Western Kenya, a region with high HIV-1 incidence during the pregnant/post-partum period and high sero-prevalence among men (~10%). Using a custom ODK Tables application (an ODK tool for visualizing and updating databases on mobile devices) as the entry-point to data collection, users collect client health information, send it to a server, and view aggregate data on their Android device. The HOPE Study Tables app allows the nurses and community health workers to screen patients for eligibility, follow up with the study participants and collect geo-point data for home visits.

Session IV

  • Feedback session (CSE 305)

    • HCI @ UW (CSE 403)

      • 3:50-3:55: Introduction and Overview, Alan Borning
      • 3:55-4:15: Eyes-Free Yoga, Kyle Rector (PDF slides)

        People who are blind or low vision may have a harder time participating in exercise classes due to inaccessibility, travel difficulties, or lack of experience. Exergames can encourage exercise at home and help lower the barrier to trying new activities, but there are often accessibility issues since they rely on visual feedback to help align body positions. To address this, we developed Eyes-Free Yoga, an exergame using the Microsoft Kinect that acts as a yoga instructor, teaches six yoga poses, and has customized auditory-only feedback based on skeletal tracking. We ran a controlled feasibility and feedback of Eyes-Free Yoga. We found participants enjoyed the game, and the extra auditory feedback helped their understanding of each pose. The findings of this work have implications for improving auditory-only feedback and on the design of exergames using depth cameras.

      • 4:15-4:35: Visualization Techniques for Assessing Textual Topic Models, Jason Chuang (PDF slides)

        Topic models aid analysis of text corpora by identifying latent topics based on co-occurring words. Real-world deployments of topic models, however, often require intensive expert verification and model refinement. I will present Termite, a visual analysis tool for assessing topic model quality. Termite uses a tabular layout to promote comparison of terms both within and across latent topics. In a series of examples, we demonstrate how Termite allows analysts to identify coherent and significant themes in a document collection.

      • 4:35-4:55: Fine-Grained Sharing of Sensed Physical Activity: A Value Sensitive Approach, Daniel Epstein (PDF slides)

        Personal informatics applications in a variety of domains are increasingly enabled by low-cost personal sensing. Although applications capture fine-grained activity for self-reflection, sharing is generally limited to high-level summaries. To help investigate this complex design space, we employ Value Sensitive Design to consider whether and how to share fine-grained step activity. We then design a set of data transformations that seek to maximize the benefits while minimizing the harms of detailed sharing. Finally, we conduct semi-structured interviews with 12 participants examining these scenarios and transformations. We distill results into a set of design considerations for fine-grained physical activity sharing.
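        One family of transformations in this space coarsens fine-grained data before sharing. The sketch below is a generic example of that idea, not one of the paper's specific transformations: it aggregates minute-level step samples into coarser buckets, trading detail for privacy.

```python
from itertools import groupby

def coarsen(step_samples, bucket_minutes=60):
    """Aggregate (minute_of_day, steps) samples into coarser totals.
    With bucket_minutes=60, minute-level detail becomes hourly totals;
    larger buckets share less about when the wearer was active."""
    samples = sorted(step_samples)  # groupby requires key-sorted input
    bucket_of = lambda s: s[0] // bucket_minutes
    return {bucket: sum(steps for _, steps in group)
            for bucket, group in groupby(samples, key=bucket_of)}
```

        The bucket size is the knob a designer would tune: a day-long bucket reveals only a total, while small buckets approach the raw trace.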

  • Wireless and Ubicomp Session #2 (CSE 691)

      • 3:50-3:55: Introduction and Overview, Joshua R. Smith
      • 3:55-4:15: uTrack: 3D Input Using Two Magnetic Sensors, Ke-Yu Chen (Slides)

        While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this work, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet’s 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space. We also demonstrate example applications allowing users to interact with the computer using 3D finger input.
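        The abstract says uTrack computes the magnet's 3D position and tilt directly from the magnetometer readings; that inversion is uTrack's contribution and is not reproduced here. What can be sketched is the standard point-dipole forward model any such inversion builds on: the field a sensor would measure for a given magnet pose.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic flux density B (tesla) at displacement r from a point
    dipole with moment vector m (A*m^2): B = mu0/(4 pi |r|^3) *
    (3 (m . rhat) rhat - m). m and r are 3-tuples in SI units."""
    rn = math.sqrt(sum(x * x for x in r))
    rhat = [x / rn for x in r]
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    scale = MU0 / (4 * math.pi * rn ** 3)
    return tuple(scale * (3 * mdotr * ri - mi) for ri, mi in zip(rhat, m))
```

        The 1/r^3 falloff this model exhibits is what makes thumb-scale tracking feasible: small thumb motions produce large, position-dependent changes at finger-mounted sensors.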

      • 4:15-4:35: Airwave: Non-Contact Haptic Feedback Using Air Vortex Rings, Sidhant Gupta

        Haptic feedback, or more generally the sense of touch, is a critical component of our interactions with the physical world. However, existing systems, such as ubiquitous vibrotactile feedback (e.g., the vibration mode on a cell phone), assume that the device is in physical contact with the user. That assumption is no longer universal, as non-contact, at-a-distance sensing (e.g., computer vision and speech recognition) is becoming more prevalent in our computing environments. The Microsoft Xbox Kinect, for example, allows immersive gaming and media control through computer vision and speech recognition, requiring no physical contact between the user and the computer. This presents a new challenge for haptic feedback systems. In this talk I will present AirWave, a system that uses air vortices to address the core question: how do we restore haptic realism to virtual environments when the user is meters away from the computer and is neither carrying nor wearing an interface device?

      • 4:35-4:55: Wirelessly Powered Displays, Aaron Parks (PDF slides)

        Though fundamental to user interfaces, displays have historically been difficult to achieve in ultra-low-power and transiently powered devices, limiting the use cases for such devices. This work explores the use of modern e-paper display technology to produce a display that meets the strict power budget of a wirelessly powered computing and communication device. Two display module prototypes will be described: one that is powered by and communicates with the NFC transceiver of a smartphone, and another that harvests ambient RF energy as a power source. Applications include pervasive smart signage and wirelessly updatable displays.
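        The power-budget argument can be made concrete with simple duty-cycle arithmetic. All numbers below are assumptions for the sketch, not figures from the talk.

```python
# Illustrative duty-cycle arithmetic for a harvesting-powered e-paper display.
# Both constants are assumed values, not measurements from the talk.
HARVESTED_POWER_W = 5e-3   # assumed average harvested power: 5 mW
UPDATE_ENERGY_J = 50e-3    # assumed energy per full e-paper refresh: 50 mJ

def seconds_between_updates(harvested_w=HARVESTED_POWER_W,
                            update_j=UPDATE_ENERGY_J):
    """Time to accumulate one refresh worth of energy, assuming lossless
    storage and negligible idle draw (e-paper holds its image at zero power)."""
    return update_j / harvested_w
```

        Under these assumed numbers the device banks enough energy for a refresh every ten seconds, and because e-paper is bistable the image persists between refreshes for free; that persistence is what makes the strict harvesting budget workable.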