Contact

CSE 546
206-685-2237
oskin@cs.washington.edu
Areas of interest: 

Computer architecture

Grappa: Latency Tolerant Distributed Shared Memory

Grappa is a modern take on software distributed shared memory (DSM) for in-memory data-intensive applications (e.g., ad placement, social network analysis, PageRank). Grappa enables users to program a cluster as if it were a single, large, non-uniform memory access (NUMA) machine. Performance scales up even for applications that have poor locality and input-dependent load distribution. Grappa addresses deficiencies of previous DSM systems by exploiting abundant application parallelism, often delaying individual tasks to improve overall throughput.
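Grappa's actual API is not shown here; as a minimal sketch of the underlying idea, the toy Python program below (all names hypothetical) delegates each operation to the node that owns the data rather than fetching remote memory, and keeps many small tasks in flight so that communication latency is overlapped by useful work.

```python
import queue
import threading

# Toy sketch, NOT Grappa's API: each "node" owns a shard of a global
# key space. A task delegates its operation (here, an increment) to the
# owning node instead of reading remote memory, and the runtime issues
# many such delegations without waiting, trading per-operation latency
# for aggregate throughput.

NUM_NODES = 4
shards = [dict() for _ in range(NUM_NODES)]          # each node's local memory
inboxes = [queue.Queue() for _ in range(NUM_NODES)]  # per-node message queues

def owner(addr):
    return addr % NUM_NODES  # simple cyclic "NUMA" placement (assumption)

def node_loop(n):
    while True:
        op = inboxes[n].get()
        if op is None:
            break                       # shutdown sentinel
        addr, delta = op
        shards[n][addr] = shards[n].get(addr, 0) + delta  # runs at the data

workers = [threading.Thread(target=node_loop, args=(n,)) for n in range(NUM_NODES)]
for w in workers:
    w.start()

# Many tiny tasks issue delegated increments without blocking on replies.
for task in range(1000):
    addr = task % 10
    inboxes[owner(addr)].put((addr, 1))

for q in inboxes:
    q.put(None)
for w in workers:
    w.join()

total = sum(sum(s.values()) for s in shards)
print(total)  # all 1000 increments were applied at their home nodes
```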


Research Accelerator for Multiple Processors

Architecture research in the last 30 years has been a simulation-driven field. Simulation is flexible and easy, but also easily divorced from reality. It is also slow, and as the systems we design become ever more complex, the speed of simulation forces us to explore less and less of processor execution time. Fortunately, FPGAs have come a long way, and it now seems plausible to utilize them for basic architecture research. RAMP aims to build a community-wide infrastructure for carrying out architecture and software research on FPGAs.


Architectures for Quantum Computers

Quantum computers seem like the subject of science fiction, but their tremendous computational potential is closer than we may think. Despite significant practical difficulties, small quantum devices of 5 to 7 quantum bits (qubits) have been built in the laboratory. Silicon technologies promise even greater scalability. To use these technologies effectively and help guide quantum device research, computer architects must start designing and reasoning about quantum processors now. However, two major hurdles stand in the way.


WaveScalar

Silicon technology will continue to provide an exponential increase in the availability of raw transistors. Effectively translating this resource into application performance, however, is an open challenge. Ever increasing wire delay relative to switching speed and the exponential cost of circuit complexity make simply scaling up existing processor designs futile. Our work is an alternative to superscalar design, called WaveScalar. WaveScalar is a dataflow instruction set architecture and execution model designed for scalable, low-complexity/high-performance processors.
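The core of the dataflow execution model described above is that an instruction has no program-counter order; it fires as soon as all of its operands have arrived. The toy Python interpreter below (an illustration only, not the WaveScalar ISA or microarchitecture) shows that firing rule on a three-instruction program.

```python
# Toy dataflow sketch (illustration, not WaveScalar itself): values flow
# as tagged "tokens", and an instruction fires once every input token is
# present, regardless of its position in the program listing.

instrs = [
    ("add", ("a", "b"), "t1"),    # t1 = a + b
    ("mul", ("t1", "c"), "t2"),   # t2 = t1 * c
    ("add", ("t2", "a"), "out"),  # out = t2 + a
]

ops = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}

def execute(initial_tokens):
    tokens = dict(initial_tokens)  # tag -> value (tokens in flight)
    fired = set()
    progress = True
    while progress:
        progress = False
        for i, (op, ins, out) in enumerate(instrs):
            # Dataflow firing rule: ready when all inputs have arrived.
            if i not in fired and all(t in tokens for t in ins):
                tokens[out] = ops[op](*(tokens[t] for t in ins))
                fired.add(i)
                progress = True
    return tokens

result = execute({"a": 2, "b": 3, "c": 4})
print(result["out"])  # (2 + 3) * 4 + 2 = 22
```

Because readiness is determined per instruction by operand arrival, independent instructions can execute in any order or in parallel, which is what makes the model attractive for scalable, low-complexity hardware.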