Tuesday, February 25, 2003

9:35 - 10:50: HUB 200 A, B, C (parallel sessions, including Programming Languages and Compilers)
11:00 - 12:15: HUB 200 A, B, C (parallel sessions, including Graphics and Vision)
3:00 - 4:15: HUB 200 A, B, C (parallel sessions)
4:30 - 5:45: HUB 200 A, B, C (parallel sessions, including HCI and Educational Technology)
- Simultaneous Object Recognition and Transformation Discovery Using Bilinear Models
"Many powerful feature-based vision techniques have been developed in recent years, yet often these approaches do not take image transformations into account. As a result, they produce image codes that are redundant because the same feature must be learned at multiple locations. We describe an algorithm for sparse coding based on a bilinear generative model of images. By explicitly modeling the interaction between image features and their transformations, the bilinear approach helps reduce redundancy in the image code and provides a basis for transformation-invariant vision. We present results demonstrating bilinear sparse coding of natural images. We also explore an extension of the model that can capture spatial relationships between the independent features of an object, thereby providing a new framework for parts-based object recognition."
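The generative side of the model described above can be sketched in a few lines. This is a toy illustration of a bilinear image model, not the authors' learned system: a patch is synthesized as a sum over feature coefficients x and transformation coefficients y combined through a basis tensor B, so the same feature weights apply at every transformation.

```python
# Minimal sketch of a bilinear generative model: a patch is synthesized
# as I = sum_i sum_j x_i * y_j * B[i][j], where x codes which features
# are present and y codes the transformation applied. B, x, and y here
# are toy values, not bases learned from natural images.

def bilinear_synthesize(B, x, y):
    """Combine feature coefficients x and transformation coefficients y
    through the basis tensor B (features x transforms x pixels)."""
    n_pix = len(B[0][0])
    image = [0.0] * n_pix
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            for p in range(n_pix):
                image[p] += xi * yj * B[i][j][p]
    return image

# One feature (a 1-D "edge"), rendered at two shifted positions.
B = [[[1.0, -1.0, 0.0, 0.0],     # feature 0 under transform 0 (left)
      [0.0, 1.0, -1.0, 0.0]]]    # feature 0 under transform 1 (shifted)
print(bilinear_synthesize(B, x=[2.0], y=[1.0, 0.0]))  # feature at left
print(bilinear_synthesize(B, x=[2.0], y=[0.0, 1.0]))  # same feature, shifted
```

The point of the factorization is visible even in this toy: one feature coefficient serves both positions, so the feature need not be learned once per location.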
- The Activity Compass: Learning and Predicting Daily Activities
"The Activity Compass is one example of an application being developed as part of the Assisted Cognition initiative at the University of Washington Computer Science and Engineering Department. Its aim is to assist people with reduced memory and problem-solving abilities due to Alzheimer's Disease or other disorders in community transportation tasks. This involves using ubiquitous computing techniques to sense aspects of an individual's location and environment, learning to interpret patterns of everyday behavior, recognizing signs of distress, disorientation, or confusion, and finally intervening in a natural and expected way to avoid problems."
- Distributed Multi-Robot Exploration
"A set of robots working together as a team can map an environment more efficiently than multiple robots working independently. In this talk, we will describe coordinated multi-robot mapping from a single start position. We then describe the problem of multi-robot mapping with robots starting from multiple arbitrary locations. The key to this problem is estimating the overlap between maps from different robots. We propose a technique that relies on information learned from previously explored environments using a hierarchical Bayesian approach."
- Automatically Proving Compiler Optimizations Correct
"This talk describes a technique for automatically proving compiler optimizations sound, meaning that their transformations are always semantics-preserving. I first present a domain-specific language for implementing optimizations as guarded rewrite rules. Optimizations operate over a C-like intermediate representation including unstructured control flow, pointers to local variables and dynamically allocated memory, and recursive procedures. Then I describe a technique for automatically proving the soundness of optimizations implemented in this language. The technique requires only a small set of optimization-specific proof obligations to be discharged for each optimization by an automatic theorem prover.
We have written a variety of forward and backward intraprocedural dataflow optimizations in our language, including constant propagation and folding, branch folding, (partial) redundancy elimination, (partial) dead assignment elimination, and simple forms of points-to analysis. We have implemented our proof strategy with the Simplify automatic theorem prover, and we have used this implementation to automatically prove our optimizations correct. Our system also found many subtle bugs during the course of developing our optimizations."
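The flavor of a guarded rewrite rule can be shown in miniature. The following is an illustrative sketch in a hypothetical mini-IR, not the talk's actual domain-specific language or its proof machinery: constant folding fires only when its guard (both operands are known constants) holds.

```python
# Illustrative guarded rewrite rule (hypothetical three-address mini-IR,
# not the actual system): constant folding rewrites an add to a constant
# only when the guard -- both operands known constant -- is satisfied.

def const_fold(instr, env):
    """Rewrite ('add', dst, a, b) to ('const', dst, a+b) when the guard
    'a and b are compile-time constants in env' holds."""
    op, dst, a, b = instr
    if op == 'add' and a in env and b in env:        # guard
        return ('const', dst, env[a] + env[b])       # rewrite
    return instr                                     # guard failed: no change

env = {'x': 3, 'y': 4}                               # known constants
print(const_fold(('add', 'z', 'x', 'y'), env))       # folds to a constant
print(const_fold(('add', 'z', 'x', 'w'), env))       # 'w' unknown: unchanged
```

In the system described in the talk, it is exactly rules of this guarded shape whose soundness obligations are handed to the theorem prover.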
- Hydro: Language Support for Loosely Coupled Components (PDF Slides)
"A ubiquitous computing system consists of networked computing devices that are distributed throughout a physical environment. Ubiquitous computing applications inherently require hardware and software components that are numerous, diverse, and long-running. Therefore, the system must support heterogeneity and incremental evolution. To achieve these properties, components' software interfaces must be loosely coupled -- they must communicate with diverse neighbors, and they must not break needlessly when their neighbors change. In this work, we propose that components communicate using self-describing, semi-structured messages, and that the programmer use pattern matching to define the handling of these messages. Using an existing ubiquitous computing infrastructure as a context, we have designed the Hydro language, which supports semi-structured messaging for the construction of systems of loosely coupled components. Joint work with the Intel Research Lab @ Seattle."
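The loose-coupling idea can be sketched concretely. This is illustrative Python, not Hydro syntax, and the message fields are invented: a handler's pattern matches any message containing the required sub-structure, so extra fields added by an evolved neighbor do not break it.

```python
# Hypothetical sketch of pattern matching over a self-describing,
# semi-structured message (illustrative, not Hydro): a handler fires
# when its pattern is a sub-structure of the message, and fields it
# does not mention are simply ignored.

def matches(pattern, msg):
    """True if every key in pattern appears in msg with the same value
    (recursing into nested dicts); unknown extra fields are tolerated."""
    for key, want in pattern.items():
        if key not in msg:
            return False
        if isinstance(want, dict):
            if not (isinstance(msg[key], dict) and matches(want, msg[key])):
                return False
        elif msg[key] != want:
            return False
    return True

msg = {'type': 'sensor', 'kind': 'temp', 'value': 21.5,
       'units': 'C'}                      # 'units' added by a newer device
pattern = {'type': 'sensor', 'kind': 'temp'}
print(matches(pattern, msg))              # True: the extra field is ignored
```

This is one way a component can keep talking to neighbors that have since grown new message fields, which is the "must not break needlessly" property the abstract asks for.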
- Scalable Safe Regression Test Elimination
"Regression testing is used to determine whether a program modification introduces new errors into previously tested code. Regression test selection is concerned with reducing the cost of regression testing by eliminating unnecessary tests. A safe technique guarantees that it never eliminates a test on which the modified program would behave differently. This talk will present a safe, automated technique for regression test selection that strives for both precision and scalability."
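One standard formulation of safe selection (a common baseline, not necessarily the talk's exact technique) is coverage-based: rerun exactly the tests whose recorded coverage intersects the set of changed program entities.

```python
# Sketch of coverage-based safe regression test selection (a standard
# formulation, not necessarily the talk's algorithm): rerun exactly the
# tests whose recorded coverage touches a changed entity.

def select_tests(coverage, changed):
    """coverage: test name -> set of covered entities (functions, edges...).
    Returns the tests that must be rerun to remain safe."""
    return {t for t, covered in coverage.items() if covered & changed}

coverage = {
    't_parse': {'parse', 'lex'},
    't_eval':  {'eval', 'parse'},
    't_print': {'pretty'},
}
print(select_tests(coverage, changed={'eval'}))   # only t_eval
print(select_tests(coverage, changed={'parse'}))  # t_parse and t_eval
```

Safety follows because any test not selected covers no changed entity and therefore cannot observe different behavior; precision and scalability hinge on what "entity" means and how cheaply coverage and change sets are computed.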
- Parallel Array Language Support for Arbitrarily Remapping Data
"The data redistribution or remapping functions gather and scatter have a long history in high-performance computing, having been included in Cray Fortran for decades. In this talk, I will introduce the ZPL parallel programming language and present its remap operator. The remap operator is a highly general array operator with powerful gather and scatter capabilities unmatched by other array languages. I will discuss the implementation of the remap operator and show that it achieves performance comparable to highly tuned Fortran + MPI code."
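For readers new to the terminology, gather and scatter are easy to state in plain code. This sketch shows the two primitives over ordinary lists; ZPL's remap operator generalizes this pattern over distributed parallel arrays.

```python
# Gather and scatter as index-array operations, sketched in plain Python.

def gather(src, index):
    """dst[i] = src[index[i]] -- read through an index array."""
    return [src[j] for j in index]

def scatter(dst, index, src):
    """dst[index[i]] = src[i] -- write through an index array."""
    for i, j in enumerate(index):
        dst[j] = src[i]
    return dst

a = [10, 20, 30, 40]
print(gather(a, [3, 0, 1]))            # [40, 10, 20]
print(scatter([0] * 4, [2, 0], [7, 9]))  # [9, 0, 7, 0]
```

In the parallel setting the interesting part, and the subject of the talk's implementation discussion, is that the index array describes arbitrary communication between processors, not just local reads and writes.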
- Measured Causes of Internet Path Inflation
"Researchers have shown that the Internet exhibits path inflation -- end-to-end paths can be significantly longer than necessary. We present a trace-driven study of 65 ISPs that characterizes the root causes of path inflation, namely topology and routing policy choices within an ISP, between pairs of ISPs, and across the global Internet."
- Pinpointing Performance Faults Along Internet Paths
"The Internet is a tremendous engineering success. When it works, which is the majority of the time, it works well. Yet, almost paradoxically, when it fails to perform as expected it is nearly impossible to tell what has happened. For example, we do not yet have reliable tools for a user or operator to determine which router is dropping packets or which link along the path is congested. In this talk, we consider the problem of finding performance faults along Internet paths using only the features present in the deployed infrastructure. We will then discuss our preliminary results on pinpointing congestion."
- Flexibility in DHT Routing Algorithms: Implications for Resilience, Proximity and Convergence
"The literature on DHT routing algorithms is impressively large, but it mostly consists of independent proposals for routing algorithms, providing little general insight into the implications of the different design choices that these routing algorithms have made. Existing work on comparing DHTs takes a black-box approach, measuring the performance of various systems built on top of the different DHT routing algorithms. In contrast, in this talk we attempt to compare the various DHT routing algorithms by understanding the effect of their underlying routing geometry on higher-level DHT properties. Existing DHT routing geometries include hypercubes, rings, tree-like structures, and butterfly networks. We argue that these basic geometric approaches differ in the degree of flexibility they provide in choosing neighbours and next-hop paths, and show how flexibility affects static resilience and the ability to capitalize on proximity in DHTs. Our findings suggest that ring geometries allow the greatest flexibility of these geometries, and therefore have good static resilience, convergence, and proximity properties."
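Next-hop flexibility on a ring can be made concrete with a tiny example. This is an illustrative Chord-style sketch (a dense 8-bit ring with powers-of-two fingers), not the paper's experiments: any finger that does not overshoot the key strictly reduces the clockwise distance, so routing usually has several valid alternatives if the best hop has failed.

```python
# Sketch of next-hop flexibility on a Chord-style ring (illustrative,
# not the paper's setup): with fingers at powers-of-two offsets, every
# finger that does not overshoot the key is a valid greedy next hop.

M = 8                        # identifier bits; ring of size 256
N = 1 << M

def clockwise(a, b):
    """Clockwise distance from identifier a to identifier b."""
    return (b - a) % N

def valid_next_hops(node, key):
    """Fingers of `node` that strictly reduce clockwise distance to key."""
    d = clockwise(node, key)
    fingers = [(node + (1 << k)) % N for k in range(M)]
    return [h for h in fingers if clockwise(h, key) < d]

# Routing from 0 toward 200: every finger makes progress, so a failed
# best hop leaves seven alternatives. Near the key, choices shrink.
print(len(valid_next_hops(0, 200)))   # 8
print(len(valid_next_hops(0, 1)))     # 1
```

This is the static-resilience intuition from the abstract: geometries with many progress-making neighbors can route around failures without repair, while more rigid geometries cannot.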
Artificial Intelligence: Dieter Fox
Programming Languages and Compilers: Craig Chambers
Networking: David Wetherall
- An Analysis of Internet Content Delivery Systems
"In the span of only a few years, the Internet has experienced an astronomical increase in the use of specialized content delivery systems, such as content delivery networks and peer-to-peer file sharing systems. Therefore, an understanding of content delivery on the Internet now requires a detailed understanding of how these systems are used in practice.
This talk examines content delivery from the point of view of four content delivery systems: HTTP web traffic, the Akamai content delivery network, and Kazaa and Gnutella peer-to-peer file sharing traffic. We collected a trace of all incoming and outgoing network traffic at the University of Washington, a large university with over 60,000 students, faculty, and staff. From this trace, we isolated and characterized traffic belonging to each of these four delivery classes. Our results (1) quantify the rapidly increasing importance of new content delivery systems, particularly peer-to-peer networks, (2) characterize the behavior of these systems from the perspectives of clients, objects, and servers, and (3) derive implications for caching in these systems."
- Scale and Performance in the Denali Isolation Kernel
"This talk describes the Denali isolation kernel, an operating system architecture that safely multiplexes a large number of untrusted Internet services on shared hardware. Denali's goal is to allow new Internet services to be 'pushed' into third-party infrastructure, relieving Internet service authors from the burden of acquiring and maintaining physical infrastructure. Our isolation kernel exposes a virtual machine abstraction, but unlike conventional virtual machine monitors, Denali does not attempt to emulate the underlying physical architecture precisely, and instead modifies the virtual architecture to gain scale, performance, and simplicity of implementation. As a result, Denali can efficiently run over 10,000 virtual machines on commodity hardware."
- SIGGRAPH Submission Extravaganza
"The GRAIL (Graphics and Imaging Laboratory) group at the University of Washington has been extremely active doing research in a variety of areas in computer graphics and computer vision. In this session, we will present a multimedia overview of seven (out of nine) of our recently submitted papers to SIGGRAPH. The topics are wide ranging and promise to have impact on everything from web browsing to ergonomic design, from video game simulation to film production, and much more!
Our cast of presenters includes:
- Brett Allen
- Yung-Yu Chuang
- Mira Dontcheva and Gary David Yngve
- Wilmot Li
- Antoine McNamara and Adrien Treuille
- Jia-Chi Wu
- Douglas Zongker
Because these papers are hot off the presses and currently under review, we dare not put the titles on the web. You just have to come and see for yourself!"
- Crossing the Structure Chasm (Powerpoint Slides)
"Data and the data management applications that we know of today exist in one of two realms: the world of structured data, i.e., relational and XML databases, and the world of unstructured data, i.e., text documents, web pages, etc. The vast majority of data today lies in unstructured form, where most database management techniques are inapplicable.
Unstructured data is easy to author, query (e.g., keyword queries on Google), and share. In contrast, due to the need to adhere to exact schemas, structured data is harder to author, query, and share. Structured data, however, has the benefit of richer query languages and guaranteed exact answers.
We propose to broaden the application of data management tools by bridging the chasm (the structure chasm) that exists between these two worlds, importing some of the attractive properties of the unstructured world into the structured world. In this talk I shall briefly mention three research directions being pursued at the University of Washington that will enable us to cross this chasm. (1) We will entice people to create structured data by providing an authoring environment (MANGROVE) and a scheme of instant gratification. (2) We will encourage the sharing of structured data by providing a peer data management system (PIAZZA) where global data sharing is achieved using local mappings between schemas. (3) We will facilitate the easy authoring of schemas and mappings between schemas using tools that exploit statistics computed over corpora of structured data.
I will talk primarily about our motivations and long term goals and the current status of the work in our peer data management system and our corpus of structured data."
- Evolving the Semantic Web with Mangrove (Powerpoint Slides)
"Despite numerous proposals for its creation, the "Semantic Web" has yet to achieve widespread adoption. Recently, some researchers have argued that participation in the semantic web is too difficult for "ordinary" people, limiting its growth and popularity.
In response, we introduce Mangrove, a system whose goal is to evolve a portion of the semantic web from the enormous volume of facts already available in HTML documents. Mangrove seeks to emulate three key conditions that contributed to the explosive growth of the web: ease of authoring, instant gratification for authors, and robustness of services to malformed and malicious information. In the HTML world, a newly authored page is immediately accessible through a browser; we mimic this feature in Mangrove by making semantic content instantly available to services that consume the content and yield immediate, tangible benefit to authors.
We have designed and implemented a Mangrove prototype, built several semantic services for the system, and deployed those services in our department. This talk describes Mangrove's goals, presents the system architecture, and reports on our implementation and deployment experience. Overall, Mangrove demonstrates a concrete path for enabling and enticing non-technical people to enter the semantic web."
- Evaluation of XPath Queries over Streaming XML
"In the XML stream processing problem, we are given a large collection of XPath filters that must be evaluated on an incoming stream of XML documents. This problem occurs in applications such as XML packet routing, which are becoming increasingly important. In this talk, we describe a new approach to this problem, which essentially eliminates all common subexpressions between predicates belonging to different XPath filters by translating them into a single deterministic pushdown automaton."
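The subexpression-sharing idea can be shown on a drastically simplified case. This toy handles only child-axis paths like /a/b/c and merges them into a trie, so shared prefixes are evaluated once for all filters; the talk's automaton handles far more (descendant axes, predicates) with a true deterministic pushdown construction.

```python
# Toy illustration of sharing common subexpressions across XPath
# filters (simple child-axis paths only; not the talk's automaton):
# merging filters into a trie evaluates each shared prefix once.

def build_trie(filters):
    """Merge paths like '/a/b/c' into one trie."""
    root = {}
    for f in filters:
        node = root
        for step in f.strip('/').split('/'):
            node = node.setdefault(step, {})
        node.setdefault('$match', []).append(f)
    return root

def match_path(trie, path):
    """Return every filter matched along an element path."""
    node, out = trie, []
    for step in path.strip('/').split('/'):
        if step not in node:
            return out
        node = node[step]
        out.extend(node.get('$match', []))
    return out

trie = build_trie(['/a/b', '/a/b/c', '/a/d'])
print(match_path(trie, '/a/b/c'))   # both '/a/b' and '/a/b/c' fire
```

With thousands of filters sharing prefixes, the win is that the stream is walked once against one structure rather than once per filter.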
Systems: Brian Bershad
Graphics and Vision: Brian Curless
Databases: Dan Suciu
- Exact: A Reliable Natural Language Interface to Household Appliances
"As household appliances grow in complexity and sophistication, they become harder and harder to use, particularly because of their tiny display screens and limited keyboards. This talk will describe a strategy for building natural language interfaces to appliances that circumvent these problems. Our approach leverages decades of research on natural language interfaces to databases by reducing the appliance problem to the database problem; the reduction provably maintains desirable properties of the database interface. The Exact system is an implementation of the reduction that responds to users' requests only when it is certain that it has understood the user. Empirical results show that Exact is capable of handling more than 80% of user requests. For those cases where Exact does perform some action, our results show that it makes no mistakes in understanding the user."
- Restricted-Range Personal Area Networks
"Digital communication between nearby devices is necessary for many mobile and ubiquitous computing applications, including object identification, context awareness, and customization. But existing short-range communication technologies do not suit these applications: the range of 802.11 and Bluetooth leads to ambiguities in selecting the right hosts, the line-of-sight of infrared requires user effort to orient transceivers, and the power consumption of RFID readers complicates the development of mobile systems. This talk will describe how to limit the range of Electric Field Personal Area Networks so communication is generally possible only between objects that are simultaneously touched. I will present some new data that includes scenarios with multiple users."
- Design and Implementation of Multisensor Location Systems (Powerpoint Slides)
"Location is essential information for many emerging computing systems, but existing location systems suffer from two acute problems. First, design abstractions are poor. Second, substantial benefit is lost by not fusing multiple sensor technologies. In this talk I present work addressing both concerns. A survey of existing systems led us to create the seven-layer Location Stack, a useful design vocabulary for location systems akin to the Open System Interconnect (OSI) abstractions for computer networks. Our sensor fusion implementation uses novel machine learning techniques including adaptive particle filtering and Monte Carlo localization to provide a joint uncertainty representation which still preserves individual sensor error models. We support a wide range of anonymous and id-sensor hardware including infrared badges, active and passive RFId, laser range finders, ultrasound beacons, GPS, WiFi, cellular E-OTD, and others."
- Support Vector Machine Classification of Biological Data: Protein Sequences, Mass Spectra, Microarray Profiles, etc.
"The support vector machine is a supervised learning algorithm, useful for recognizing subtle patterns in complex data sets. The algorithm performs discriminative classification, learning by example to predict the classifications of previously unseen data. The algorithm has been applied in domains as diverse as text categorization, image recognition, and hand-written digit recognition. Recently, support vector machines have been applied in numerous bioinformatics domains. In this talk, I will briefly describe the support vector machine algorithm, discuss some reasons for its wide success, and demonstrate its application to problems in computational biology."
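The SVM algorithm itself is generic; what adapts it to a biological domain is usually the kernel. A classic choice for protein sequences in this literature (one common option, not necessarily the speaker's) is the k-mer "spectrum kernel", which counts shared length-k substrings between two sequences:

```python
# The k-mer spectrum kernel: K(s, t) is the inner product of the two
# k-mer count vectors, i.e. a similarity based on shared substrings.
# (Illustrative kernel choice; the SVM using it is standard machinery.)

from collections import Counter

def spectrum(seq, k):
    """Counts of every length-k substring (k-mer) in seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s, t, k=3):
    """Inner product of the two k-mer count vectors."""
    cs, ct = spectrum(s, k), spectrum(t, k)
    return sum(cs[kmer] * ct[kmer] for kmer in cs)

print(spectrum_kernel('MKVLAA', 'KVLAAG'))  # shared 3-mers: KVL, VLA, LAA
```

Because the kernel is a valid inner product in the space of k-mer counts, it can be dropped into any kernelized SVM implementation without changing the learning algorithm.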
- Finding Motifs in Protein Sequences by Phylogenetic Footprinting
"Phylogenetic footprinting is a technique for finding conserved regions in a set of corresponding biological sequences from different organisms. In this talk, a dynamic programming algorithm will be presented that finds conserved regions in protein datasets. The work extends previous work by Blanchette et al. on finding motifs in DNA sequences. Two protein datasets will be shown in which our tool finds the conserved regions while popular alignment tools fail. We believe that in the future, as sequences from more organisms become available, there will be more such datasets on which our technique outperforms existing alignment tools."
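To give a feel for the objective, here is a greatly simplified stand-in: it picks one fixed-width window per sequence minimizing total disagreement with the column consensus, by brute force. The real footprinting algorithms score candidate motifs by parsimony over the phylogenetic tree and use dynamic programming instead of enumeration; this toy ignores the tree entirely.

```python
# Much-simplified motif finding (illustrative only; real phylogenetic
# footprinting scores motifs by parsimony over a species tree): choose
# one width-w window per sequence minimizing mismatches to consensus.

from itertools import product

def consensus_score(windows):
    """Per column, count sequences disagreeing with the majority letter."""
    score = 0
    for col in zip(*windows):
        counts = {c: col.count(c) for c in set(col)}
        score += len(col) - max(counts.values())
    return score

def find_motif(seqs, w):
    """Exhaustively pick one width-w window per sequence with best score."""
    choices = [range(len(s) - w + 1) for s in seqs]
    best = min(product(*choices),
               key=lambda starts: consensus_score(
                   [s[i:i + w] for s, i in zip(seqs, starts)]))
    return [s[i:i + w] for s, i in zip(seqs, best)]

seqs = ['xxGATTACAyy', 'GATTACAzz', 'wGATTACA']
print(find_motif(seqs, 7))   # recovers GATTACA in each sequence
```

The enumeration here is exponential in the number of sequences; the tree-based dynamic programming of Blanchette et al. is what makes the real problem tractable.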
- Temporal Sequence Learning with Dynamic Synapses
"Recent experiments have shown that neocortical synapses exhibit both short-term plasticity and spike-timing dependent long-term plasticity. It has been suggested that changes in short-term plasticity are mediated by a redistribution of synaptic efficacy. Here we propose a simple model of the interaction between spike-timing dependent plasticity and short-term plasticity. We show that the model captures the synaptic behavior seen in experiments on redistribution of synaptic efficacy. Results from our simulations suggest that spike-timing dependent redistribution of synaptic efficacy offers neocortical neurons a potentially powerful mechanism for learning spatiotemporal patterns in the input stream."
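The spike-timing dependent part of such models is usually written as an exponential learning window. The sketch below uses the standard textbook form with illustrative constants; it is not the paper's model, which additionally couples this rule to short-term synaptic dynamics.

```python
# Standard exponential STDP window (illustrative constants, not the
# paper's coupled model): pre-before-post spiking potentiates the
# synapse, post-before-pre depresses it, both decaying with |dt|.

import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for post-minus-pre spike time difference dt (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)       # potentiation
    return -a_minus * math.exp(dt / tau)          # depression

print(stdp_dw(10.0))    # pre leads post by 10 ms -> positive change
print(stdp_dw(-10.0))   # post leads pre by 10 ms -> negative change
```

Temporal-sequence learning falls out of the asymmetry: a synapse whose presynaptic neuron reliably fires just before the postsynaptic one is strengthened, so the network comes to predict the next element of a repeated input pattern.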
- Interdisciplinary Collaborations on Discovering Regulatory Elements
"I will give a brief overview of some of our collaborations with biologists around campus, followed by an example in more detail to give an idea of what might be involved in such interdisciplinary collaborations."
- Video Compression Using Bit-Plane Coding
"Standard video compression algorithms use a three-stage process of motion compensation, image transform, and quantization of transform coefficients. Some recent work suggests that replacing quantization with bit-plane coding (i.e., encoding the i-th bit of all coefficients simultaneously) might provide some benefits. Our work applies a simple coding scheme called group-test coding to encode coefficient bit-planes. At the same bit-rate, the new method, called GTV, achieves a better signal-to-noise ratio than comparable quantization-based methods, and enables better rate control. (Work by Richard Ladner, Ed Hong, Jeff West, Mike Ringenburg, and Gidon Shavit)."
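The bit-plane decomposition itself is simple to demonstrate. This sketch shows only the plane split and why truncating low planes gives coarse-to-fine rate control; the talk's actual contribution, group-test coding of these planes, is not reproduced here.

```python
# Bit-plane view of a coefficient block (toy): plane 0 holds the most
# significant bit of every coefficient, so dropping trailing planes
# coarsens all coefficients uniformly -- the basis of rate control.

def bit_planes(coeffs, nbits):
    """planes[0] is the most significant bit of each coefficient."""
    return [[(c >> b) & 1 for c in coeffs]
            for b in range(nbits - 1, -1, -1)]

def from_planes(planes):
    """Reassemble coefficients; fewer planes yields coarser values."""
    vals = [0] * len(planes[0])
    for plane in planes:
        vals = [(v << 1) | bit for v, bit in zip(vals, plane)]
    return vals

coeffs = [5, 3, 0, 7]                  # 3-bit coefficients
planes = bit_planes(coeffs, 3)
print(planes)                          # [[1,0,0,1],[0,1,0,1],[1,1,0,1]]
print(from_planes(planes))             # round-trips to [5, 3, 0, 7]
print(from_planes(planes[:2]))         # truncated: coefficients >> 1
```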
- Compression of Hyperspectral Satellite Images
"Hyperspectral satellite images consist of huge amounts of data divided into 224 separate bands. As in any situation that requires storage and transport of such data, efficient compression techniques are needed. We have explored various techniques that save bandwidth with acceptable fidelity trade-offs. I will present one of the techniques that we have studied so far. The major component of this technique involves predicting one band of the image from another, which increases the efficiency of current compression algorithms by a noticeable factor. Unlike most situations, these images are compressed in one place, the satellite, and decompressed in another, on Earth. The prediction ordering must be synchronized between the two while preserving the efficiency of compression. We used the minimum spanning tree algorithm for directed graphs to find the optimal order in which to compress the bands. This is joint work with Scott Hauck, Eve Riskin, Richard Ladner, Agnieszka Miguel, and Chris Parrish."
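The band-prediction step can be sketched for a single band pair. This is an illustrative least-squares affine predictor on made-up pixel values, not the system's predictor, and it omits the directed-MST choice of prediction ordering entirely: correlated bands leave only a small residual to encode.

```python
# Sketch of inter-band prediction (illustrative; the system also picks
# the prediction ordering with a directed minimum spanning tree, omitted
# here): fit band B ~ alpha*A + beta and keep only the residual.

def fit_affine(a, b):
    """Least-squares alpha, beta with b[i] ~ alpha*a[i] + beta."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = sum((x - ma) ** 2 for x in a)
    alpha = cov / var
    return alpha, mb - alpha * ma

band_a = [10.0, 20.0, 30.0, 40.0]          # reference band (toy pixels)
band_b = [21.0, 41.0, 61.0, 81.0]          # correlated band, ~ 2*A + 1
alpha, beta = fit_affine(band_a, band_b)
residual = [y - (alpha * x + beta) for x, y in zip(band_a, band_b)]
print(alpha, beta)                          # near 2 and 1
print(residual)                             # near zero: cheap to encode
```

Since the decoder on Earth refits nothing, the predictor parameters (and the agreed band ordering) are what must be shared between satellite and ground.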
- Enhanced Sequitur for Finding Structure in DNA and Music
"Sequitur is an algorithm, developed in the mid-1990s by Witten and Nevill-Manning, that can compress strings and simultaneously find hierarchical structure in them. We describe a methodology to extend Sequitur to find additional structure in strings that are known to have specific kinds of structure. We enhanced Sequitur to identify reverse complements in DNA sequences, and inversions, reversals, and transpositions in digital sheet music. Results from our implementations show that the enhanced Sequitur does a better job of finding structure in human DNA sequences and in some fugues of Johann Sebastian Bach. This is joint work with Erin Earl."
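The core grammar-building idea can be shown in a few lines. Sequitur itself works online with careful invariants; the offline Re-Pair-style sketch below is a simplification, not Sequitur: repeatedly replace the most frequent repeated digram with a new grammar rule, yielding a hierarchy of rules for repeated substructure.

```python
# Offline digram-replacement sketch (Re-Pair style, a simplification of
# what Sequitur does online): each repeated digram becomes a rule, so
# repeated substrings turn into a hierarchy of grammar rules.

from collections import Counter

def build_grammar(s):
    seq, rules, next_id = list(s), {}, 0
    while len(seq) > 1:
        digrams = Counter(zip(seq, seq[1:]))
        pair, count = max(digrams.items(), key=lambda kv: kv[1])
        if count < 2:
            break                          # no repeated digram remains
        sym = f'R{next_id}'; next_id += 1
        rules[sym] = pair
        out, i = [], 0
        while i < len(seq):                # replace non-overlapping pairs
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return ''.join(seq), rules

start, rules = build_grammar('abcabcabc')
print(start, rules)   # hierarchical rules covering the repeated 'abc'
```

The enhancements described in the talk extend this matching step so that a digram can also match transformed copies, such as a reverse complement in DNA or a transposed phrase in music.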
Ubiquitous Computing: Gaetano Borriello
Computational Biology: Martin Tompa
Compression: Richard Ladner
- Tablet PC Based Presentation of University Lectures
"Presentation technologies from the chalkboard to the data projector have opened up new possibilities for educational practice. Our research goal is to take advantage of new technologies---including the Tablet PC, wireless networking, and handheld devices---to improve the educational experience in the university classroom. We want to design systems that leverage these technologies to increase the amount of interaction and the flexibility of presentation for university classes while maintaining the benefits of online materials. We also want to support a pedagogy that includes direct student participation, classroom assessments, and active learning. We present our work on three related projects that give instructors tools to use in the classroom: a Tablet PC based presentation system, a classroom feedback system, and a structured interaction system. The presentation system is currently in use in six UW computer science courses."
- UrbanSim: Integrated Land Use, Transportation and Environmental Modeling
"Patterns of land use and available transportation systems play a critical role in determining the economic vitality, livability, and sustainability of urban areas. To support informed deliberation on these issues, we have been developing UrbanSim, a reusable urban land use and transportation modeling system. Our purpose is to provide a tool for citizens' groups, urban planners, elected officials, and others to help predict future patterns of urban development and impact under different possible scenarios over periods of 20 or more years. We have recently set up a partnership with Puget Sound Regional Council, and are now beginning to apply UrbanSim to this region, as well as to a number of other regions in the U.S. The talk will include an overview of the modeling activity, followed by a discussion of a number of HCI aspects of the system, including providing more effective ways of understanding the results from and interacting with complex simulations, and ways of linking stakeholder values with design choices in simulations and their interfaces. For more information on the project, please see www.urbansim.org."
- Helping Teachers Identify Student Misconceptions in an Online Discussion Group Environment (Powerpoint Slides)
"We describe the Facet Rule Learner component of the Infact System. Infact is a web-based environment developed specifically for online student discussion and non-invasive assessment. It includes a number of features specifically designed to support teaching and assessment, including group management, graphical and textual data annotation, and assessment support. The assessment support allows teachers to make comments and is connected to a database of common misconceptions held by students in a variety of subjects. This talk focuses on the development of a machine learning component that will assist teachers in making assessments by learning to recognize student posts that are indicative of particular misconceptions. We discuss the design of the learning component, including several challenges such as how to obtain useful negative evidence and learn from it, and how to incorporate domain-specific knowledge into the learning algorithm. We also propose an extension to our rule language that will allow us to identify student misconceptions from student sketches as well as textual data."
- A New Algebraic Foundation for Quantum Programming Languages (Powerpoint Slides)
"Recent advances in quantum device technology, including demonstrated multi-qubit computers and proposals for scalable quantum devices, have made it seem increasingly likely that we will have usable quantum computers and nothing to run upon them. Very few useful quantum algorithms have been developed, and our belief is that the community has been unable to produce new ones because of the lack of an expressive quantum formalism. In this presentation, we describe an algebraic notation that attempts to make quantum relationships and effects (like superposition and entanglement) explicit without sacrificing compactness of representation."
- A MAC Based on Matrix Groups (PDF Slides)
"We present a new construction based on modular groups that is competitive with or better than known MAC algorithms. A novel element of our construction is to embed an input into a matrix with determinant plus or minus one and analyze it using arithmetic properties of the determinants of certain types of matrices. We believe this construction based on modular subgroups of this matrix group will provide a new cryptographic primitive for stream ciphers and secure hash functions, in addition to MACs. This primitive has simple and efficient implementations on modern processors.
Joint work with Ramarathnam Venkatesan of Microsoft Research."
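The core embedding step can be illustrated in miniature. This toy (with a made-up modulus, no key, and no security claims, so it is emphatically not the actual MAC) maps input bits to the determinant-1 generators A = [[1,1],[0,1]] and B = [[1,0],[1,1]] and multiplies them mod p; note the product's determinant stays 1 throughout, which is the arithmetic property the analysis exploits.

```python
# Toy version of the construction's core step (hypothetical parameters,
# not the actual MAC): bits select determinant-1 generator matrices,
# and the input hashes to their ordered product mod P.

P = 1_000_003  # illustrative prime modulus

def mat_mul(x, y):
    """2x2 matrix product mod P."""
    return [[(x[0][0] * y[0][0] + x[0][1] * y[1][0]) % P,
             (x[0][0] * y[0][1] + x[0][1] * y[1][1]) % P],
            [(x[1][0] * y[0][0] + x[1][1] * y[1][0]) % P,
             (x[1][0] * y[0][1] + x[1][1] * y[1][1]) % P]]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

def hash_bits(bits):
    """Product of A/B generators selected by the input bits."""
    m = [[1, 0], [0, 1]]
    for b in bits:
        m = mat_mul(m, A if b == 0 else B)
    return m

t = hash_bits([0, 1, 1, 0, 1])
print(t)
print((t[0][0] * t[1][1] - t[0][1] * t[1][0]) % P)   # determinant stays 1
```

Because matrix multiplication is non-commutative, reordering the input bits changes the product, which is what lets the determinant structure carry information about the message.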
- Algorithmic Coding Theory: Recent Progress and Directions
"Algorithmic coding theory deals with constructing combinatorial objects called error-correcting codes together with good algorithms for decoding them in the presence of errors. The last few years have witnessed several developments in this area along multiple facets (low-density parity check codes, turbo and turbo-like codes, linear-time expander-based codes, and list decoding, to name a few). In this talk, I'll mention some of the constructions I have worked on, namely linear-time encodable/decodable codes and list-decodable codes that correct large amounts of adversarial noise, and, time permitting, also discuss some challenges that lie ahead."
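For readers unfamiliar with the area, the classical Hamming(7,4) code (a textbook example, not one of the speaker's constructions) shows the two halves of the problem in miniature: a combinatorial code plus an efficient decoder that corrects any single bit error by syndrome decoding.

```python
# Classical Hamming(7,4): 4 data bits, 3 parity bits at positions 1, 2,
# and 4; the syndrome (XOR of the positions of the set bits) is zero for
# a valid codeword and otherwise names the flipped position.

def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    c = list(c)
    s = 0
    for i, bit in enumerate(c, start=1):
        if bit:
            s ^= i                   # syndrome = XOR of set positions
    if s:                            # nonzero syndrome locates the error
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = encode(data)
code[5] ^= 1                         # flip one bit in transit
print(decode(code))                  # recovers [1, 0, 1, 1]
```

The constructions in the talk push far beyond this: linear-time encoding/decoding and list decoding recover from constant fractions of adversarial errors, where unique decoding of this kind is impossible.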
- Silicon Mixture Models
"We present the bump mixture model, a statistical model for analog data where the probabilistic semantics, inference, and learning rules derive from low-level transistor behavior. The bump mixture model relies on translinear circuits to perform probabilistic inference, and floating-gate devices to perform adaptation. This system is low power, asynchronous, and fully parallel, and supports various on-chip learning algorithms. In addition, the mixture model can perform several tasks such as probability estimation, vector quantization, classification, and clustering. We tested a fabricated system on clustering, quantization, and classification of handwritten digits and show performance comparable to the EM algorithm on mixtures of Gaussians."
- Graphics Hardware: Beyond the Visual
Chris J. Thompson
"Graphics processors are evolving much more rapidly than conventional CPUs. For instance, the NVIDIA GeForce3 chip -- now two generations old -- contains more transistors than the Intel Pentium IV, and today one can purchase a graphics card capable of 1.2 trillion micro-operations per second for about $150. (We'll show some breathtaking visual examples.) Along with this enormous raw speed, recent graphics architectures have begun to emphasize versatility, offering rich new ways to programmatically reconfigure the graphics pipeline. A first generation of compilers designed to target these GPUs [Graphics Processing Units], rather than conventional CPUs, is emerging. In this presentation, we will discuss recently published research demonstrating that we can harness low-cost, commodity graphics hardware to accelerate non-graphics problems, especially ones where vector or stream processors might traditionally be used. Potential application areas include computational biology and genetics, molecular dynamics, and mechanical engineering, as well as more traditional graphics problems."
- A Multiprocessor Architecture for IP Routing
"Traditionally, network processors have been multiprocessors without caches. In the context of Internet Protocol (IP) routing, we investigate how small caches will decrease worst case latency and increase throughput."
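The idea under evaluation can be sketched with a toy route cache. This is illustrative Python, not the studied architecture, and the addresses and interface names are invented: a small LRU cache in front of the full route lookup exploits destination locality, so packets of ongoing flows skip the expensive search.

```python
# Toy LRU route cache (illustrative, not the studied hardware): repeated
# destinations hit the cache instead of the slow full routing lookup.

from collections import OrderedDict

class RouteCache:
    def __init__(self, capacity, full_lookup):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.full_lookup = full_lookup    # the slow routing-table search
        self.hits = self.misses = 0

    def next_hop(self, dst):
        if dst in self.cache:
            self.hits += 1
            self.cache.move_to_end(dst)   # mark as most recently used
            return self.cache[dst]
        self.misses += 1
        hop = self.full_lookup(dst)
        self.cache[dst] = hop
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return hop

table = {'10.0.0.1': 'ethA', '10.0.0.2': 'ethB'}  # invented routes
rc = RouteCache(8, table.__getitem__)
for dst in ['10.0.0.1', '10.0.0.2', '10.0.0.1', '10.0.0.1']:
    rc.next_hop(dst)
print(rc.hits, rc.misses)   # locality turns the repeats into hits
```

The worst-case question raised in the abstract is exactly what this sketch glosses over: an adversarial or highly diverse destination stream can make every lookup miss, which is why cache sizing matters.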
- WaveScalar
"Silicon technology will continue to provide an exponential increase in the availability of raw transistors. Effectively translating this resource into application performance, however, is an open challenge. Ever-increasing wire delay relative to switching speed and the exponential cost of circuit complexity make simply scaling up existing processor designs futile. In this talk, we present an alternative to superscalar design, WaveScalar.
WaveScalar is a dataflow instruction set architecture and execution model designed for the construction of intelligent computation caches. Each instruction in a WaveScalar binary executes in place in the memory system and explicitly communicates with its dependents in dataflow fashion. WaveScalar architectures cache instructions and the values they operate on together in a WaveCache, a simple grid of 'processor-in-cache' nodes. By co-locating computation and data in physical space, the WaveCache minimizes long-wire, high-latency communication. This talk introduces the WaveScalar instruction set and an implementation based on current technology, which is evaluated for its performance potential. Results on the SPEC and Mediabench applications demonstrate a factor of 2-4 performance improvement compared to an aggressively configured superscalar design."
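The dataflow firing rule at the heart of such an architecture can be shown with a toy interpreter. This is an illustrative sketch, not the WaveScalar ISA: an instruction executes as soon as all of its operands have arrived, with no program counter ordering execution.

```python
# Toy dataflow interpreter (illustrative, not the WaveScalar ISA): an
# instruction fires the moment all of its operands are available, so
# execution order is set by data dependences rather than a PC.

def run_dataflow(instrs, inputs):
    """instrs: name -> (op, operand names). Fires ready instructions
    until nothing more can fire; returns all produced values."""
    ops = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}
    values = dict(inputs)
    fired = True
    while fired:
        fired = False
        for name, (op, args) in instrs.items():
            if name not in values and all(a in values for a in args):
                values[name] = ops[op](*(values[a] for a in args))
                fired = True
    return values

# t1 = x + y; t2 = x * y; out = t1 + t2 -- t1 and t2 are independent,
# so a dataflow machine may execute them in parallel.
instrs = {'t1': ('add', ('x', 'y')),
          't2': ('mul', ('x', 'y')),
          'out': ('add', ('t1', 't2'))}
print(run_dataflow(instrs, {'x': 3, 'y': 4})['out'])  # (3+4)+(3*4) = 19
```

In a WaveCache-style design, the interesting engineering is that this firing happens in place across a grid of processor-in-cache nodes, so the operand "arrival" above corresponds to short physical hops rather than trips across a chip.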