Crowdsourcing Taxonomy Creation

Our goal is to break large projects down into microtasks that can be done in parallel. Parallel crowd algorithms enable hundreds of people to contribute at once, and keeping each microtask short (typically less than one minute) makes it more attractive for people to contribute in their free time. A major application for crowd algorithms is using people to organize data. For example, given 100 travel tips, we can create a taxonomy that shows which topics are popular, which topics are missing, and lets users navigate the data more easily.
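As a minimal sketch of the decomposition step, the batching of a large labeling job into short parallel microtasks might look like the following (the function name and batch size are illustrative, not part of any released system):

```python
def to_microtasks(items, batch_size=8):
    """Split a large labeling job into small batches, each sized so a
    worker can finish it in under a minute; batches can then be posted
    to many workers in parallel."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]


# Example: 100 travel tips become 13 parallel microtasks
# (twelve batches of 8 tips and one final batch of 4).
tips = [f"tip {i}" for i in range(100)]
tasks = to_microtasks(tips, batch_size=8)
```

Each batch would be shown to one or more workers, whose category labels are then merged into a taxonomy in a later aggregation phase.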


Activity-Based Prototyping of Ubicomp Applications

ActivityStudio is a suite of tools for prototyping and testing ubiquitous computing applications. It allows designers to incorporate large-scale, long-term human activities as a basis for design and speeds up ubicomp design by providing integrated support for modeling, prototyping, deployment and in situ testing. ActivityStudio prototypes can run on various target platforms, including mobile phones.


How much of your personal life is on Facebook, MySpace, blogs, Flickr, or YouTube? Are there things you would like to share with some people, but not everyone?

We propose that users protect semi-private personal content on the Internet behind questions of shared knowledge. For instance, "What is cousin Rodney's catchphrase?" can grant access to a hundred extended family members without giving them accounts and passwords or tediously adding them to access control lists.


GSketch is a system that helps novice programmers create games and simulations.


iLearn is a set of tools for Apple's iPhone that collects annotated sensor traces from the accelerometer, computes accelerometer features, and performs real-time activity classification. The software is built on the standard Apple SDK and includes models for inferring some common activity sets (e.g., exercise). The resulting classifications can then be used by additional applications on the iPhone or sent to web servers via the iPhone's highly available internet connection.
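To make the feature-computation step concrete, here is a minimal sketch of the kind of windowed accelerometer features and threshold-based classification such a pipeline might use. The feature set, threshold, and labels are illustrative assumptions, not iLearn's actual models:

```python
import math

def accel_features(window):
    """Summary features over a window of (x, y, z) accelerometer samples:
    mean, variance, and peak of the acceleration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return {"mean": mean, "variance": var, "peak": max(mags)}

def classify(features, threshold=1.5):
    """Toy rule: large variance in acceleration magnitude suggests
    vigorous movement such as exercise; low variance suggests rest."""
    return "exercise" if features["variance"] > threshold else "idle"
```

A real classifier would be trained on the annotated traces the tools collect; the windowing-then-featurize structure is the part that carries over.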


K-Sketch allows ordinary computer users to create informal animations from sketches. Current tools for creating animation are extremely complex, which makes it difficult for designers to prototype animations and nearly impossible for novices to create them. Simple animation systems exist but severely restrict the types of motion that can be represented. To guide the design of K-Sketch, we conducted field studies into the needs of professional and novice animators.

Muscle-Computer Interfaces

We explore the feasibility of muscle-computer interfaces: an interaction methodology that directly senses and decodes human muscular activity rather than relying on physical device actuation or user actions that are externally visible or audible. As a first step towards realizing the muscle-computer interface concept, we conducted an experiment to explore the potential of exploiting muscular sensing and processing technologies for muscle-computer interfaces. We present results demonstrating accurate gesture classification with an off-the-shelf electromyography (EMG) device.
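As a sketch of how gesture classification from muscular sensing could work, the following computes a standard per-channel amplitude feature (root-mean-square) and classifies by distance to per-gesture mean feature vectors. The gesture names, channel counts, and nearest-centroid rule are assumptions for illustration, not the experiment's actual method:

```python
import math

def rms(channel):
    """Root-mean-square amplitude of one EMG channel, a common
    measure of overall muscular activation."""
    return math.sqrt(sum(s * s for s in channel) / len(channel))

def featurize(frame):
    """One RMS feature per EMG channel in a short time frame."""
    return [rms(ch) for ch in frame]

def nearest_centroid(feats, centroids):
    """Label the frame with the gesture whose mean (training-time)
    feature vector is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda g: dist(feats, centroids[g]))
```

The centroids would come from labeled calibration trials with the EMG device; classification then runs frame by frame in real time.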


There is a lot of useful information on the Internet, but webmasters do not always present it in the best way. Reform lets end users put a new face on webpages, without subjecting them to the whims of a webmaster, and without learning to program themselves.

The Designers' Outpost

The Designers’ Outpost is a tangible user interface that combines the affordances of paper and a large physical workspace with the advantages of electronic media to support collaborative information design for the web. Based on an earlier ethnographic study, we have analyzed web site design practice and developed a system to support the practices used by designers during the early phases of information design.


Location-enhanced applications make use of the location of people, places, and things to provide useful services. We see an increasing number of location-enhanced applications, particularly on mobile devices. Topiary allows designers to quickly prototype location-enhanced applications using high-level abstractions, such as maps, scenarios and storyboards, and test these application prototypes with real users in the field without having to deploy a location infrastructure.


UbiFit is a mobile, persuasive technology that we developed in collaboration with Intel Labs Seattle to encourage individuals to self-monitor their physical activity and incorporate regular and varied activity into everyday life. It consists of three main components: (1) a glanceable display, (2) an interactive application, and (3) a fitness device.


The UbiGreen project aims to encourage environmental stewardship by combining low-cost sensors, inference, and user feedback to track resource usage and "reward" green behaviors. We are carrying out this work in the context of transportation, energy, and water usage.

Utility: Measure What Matters

A system is useless if it can’t motivate people to use it. Your interfaces must be clear and appealing, your tasks must be satisfying and rewarding, and your games must be fun.

Here is a way to evaluate a system's ability to recruit use, or human attention: take any web interface and task, post it as a job on Mechanical Turk, and see how much we must pay people to complete the task using the interface. The less we have to pay workers, the better the interface and task.
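The metric itself reduces to a simple computation over trial postings. A minimal sketch, assuming we record each posted pay rate and whether workers completed the task at that rate (the function and data shape are illustrative, not a Mechanical Turk API):

```python
def recruitment_cost(postings):
    """postings: list of (pay_cents, completed) pairs from trial job
    postings at different pay rates. Returns the lowest pay at which
    workers actually completed the task -- a rough proxy for how much
    attention the interface and task can recruit on their own.
    Returns None if no posting was completed."""
    accepted = [pay for pay, completed in postings if completed]
    return min(accepted) if accepted else None
```

Comparing this number across interface variants gives a behavioral, money-denominated measure of utility: the more intrinsically appealing design is the one workers accept for less.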

Vocal Joystick

The goal of this project is to develop a novel system that we call the Vocal Joystick (VJ). This device will enable individuals with motor impairments to use vocal parameters to control objects on a computer screen (buttons, sliders, etc.) and ultimately electro-mechanical instruments (e.g., robotic arms, wireless home automation devices).


VoiceDraw is a drawing program designed to be controlled using only one's voice. Since no mouse, keyboard or stylus is required, it can be used by people with various forms of motor impairments to express themselves creatively.


In the VoicePen project, we explore ways in which digital stylus input can be augmented with voice input to provide added expressivity and control in pen-based tasks such as drawing and manipulating animation objects.