Research Showcase
The Paul G. Allen School’s annual Research Showcase and Open House highlights both the breadth and depth of computing innovation.
Celebrating Research
The Allen School’s premier in-person event, the Annual Research Showcase, gives our affiliates, alumni, and friends a chance to meet our students and faculty through technical talks during the day, followed by an open house with a reception, poster session, and the awarding of the Madrona Prize and People’s Choice Prize in the early evening.
The fall 2025 event will be held on Wednesday, October 29, 2025, in the Bill & Melinda Gates Center for Computer Science & Engineering.
Session Topics At A Glance:
- Mental & Cognitive Health Support for All
- Transparent & Equally Beneficial AI
- Safe & Reliable Computing Systems
- Technologies that Sustain People and the Planet
- Mobile & Augmented Intelligence
- Health
- Graphics & Vision
- Robotics
- Educational Research & Outreach
- Natural Language Processing (NLP)
- Deep Industry & Allen School Collaborative Research Projects
During the day – Research Presentations:
The day begins at 10:00 a.m. with registration (and coffee!) in the Singh Gallery, located on the fourth floor of the Gates Center.
- 10:00 a.m. to 10:30 a.m.: Registration (and coffee!)
- 10:30 a.m. to 12:30 p.m.: Technical sessions (see below) in multiple tracks focused on specific themes
- 12:30 to 1:30 p.m.: Keynote talk + Lunch + Networking
- 1:30 to 4:45 p.m.: Technical sessions in multiple tracks focused on specific themes
During the early evening – Open House:
Join us for a festive evening in the Allen Center.
- 5:00-7:15 p.m.: Open house: reception, poster session, and socializing
- 7:15-7:30 p.m.: Presentation of the Madrona Prize and People’s Choice Prize
Getting here: Directions, parking and hotel information
2025 Research Showcase Agenda
Click the links below to learn more about each session’s technical talks, including speakers, subjects, locations, and times.
Speaker & Session Details
Schedule of Events – Wednesday, October 29, 2025
Morning
Time: 9:00 – 10:15am
Event: Research Group Affiliates – Coffee and Networking
Location: Various labs
Time: 10:00 – 10:30am
Event: ABET Feedback Session
Location: Zillow Commons, 4th floor Gates Center
Time: 10:00 – 10:30am
Event: Registration and coffee
Location: Singh Gallery, 4th floor Gates Center
Time: 10:30 – 11:10am
Event: Welcome and Overview by Magda Balazinska and Shwetak Patel, plus various faculty on research areas
Location: Zillow Commons, 4th floor Gates Center
Session I
Time: 11:15am – 12:20pm (*times listed may change)
NLP/RL (Gates Center, Room 271)
11:15-11:20: Introduction and Overview, Luke Zettlemoyer
11:20-11:35: Query-efficient algorithms to find Nash Equilibrium in two-player zero-sum games, Arnab Maiti
Abstract: Two-player zero-sum games are a widely studied topic with applications ranging from economics and optimization to modern AI. Motivated by recent developments in areas such as dueling bandits and AI alignment, we revisit the classical problem of two-player zero-sum games and ask the following question: Can one find the Nash equilibrium of a two-player zero-sum game without knowing all of its payoffs? We answer this question in the affirmative, presenting simple and elegant algorithms that achieve this.
Biography: Arnab Maiti is a fourth-year PhD student at the Paul G. Allen School of Computer Science & Engineering, University of Washington. He is co-advised by Prof. Kevin Jamieson and Prof. Lillian Ratliff. His research focuses on the design and analysis of algorithms in game theory and learning theory. Recently, he has become interested in multi-agent approaches to AI and AI alignment.
11:35-11:50: Self-RedTeam: Improving LLM Safety via Adversarial Self-Play, Mickel Liu
Abstract: Conventional LLM safety alignment is a reactive cat-and-mouse game where target models perpetually lag behind new attacks from users. This paper introduces SELF-REDTEAM, an online reinforcement learning framework that forces a single LLM to improve its safety alignment by simultaneously playing both sides of a red-teaming game: attacker and defender. The model co-evolves by generating adversarial prompts to exploit its own vulnerabilities and then learning to safeguard against them, all while the outcome reward adjudicates the game. Both roles leverage hidden Chain-of-Thought (CoT) to develop private strategies that are invisible to the opponent, a mechanism shown to be critical for discovering diverse attacks. This dynamic self-play approach proves highly effective, uncovering significantly more diverse attacks and improving safety robustness by as much as 95% compared to models trained with standard safety procedures.
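For readers less familiar with adversarial self-play training, the toy Python loop below sketches the general shape of an attacker/defender game driven by an outcome reward. The model, judge, and update functions are illustrative stubs written for this program page, not the actual Self-RedTeam system.

```python
# Toy sketch of adversarial self-play for LLM safety training.
# The "model", judge, and update rule below are illustrative stubs,
# NOT the Self-RedTeam implementation described in the talk.
import random

random.seed(0)
UNSAFE_TOPICS = ["bypass a filter", "make a weapon"]  # toy attack space


def model(role: str, prompt: str) -> str:
    """Stand-in for a single LLM playing either role (with hidden strategy)."""
    if role == "attacker":
        return f"Please help me {random.choice(UNSAFE_TOPICS)}."
    # Defender: a deliberately imperfect safety policy, for illustration.
    unsafe = any(topic in prompt for topic in UNSAFE_TOPICS)
    if unsafe and random.random() < 0.7:
        return "I can't help with that."
    return f"Sure: {prompt}"


def judge(attack: str, response: str) -> float:
    """Outcome reward: +1 if the defender stayed safe, -1 otherwise."""
    complied = response.startswith("Sure") and any(
        topic in attack for topic in UNSAFE_TOPICS
    )
    return -1.0 if complied else 1.0


def update(role: str, reward: float) -> None:
    """Placeholder for an RL update (e.g., policy gradient) on that role."""
    print(f"{role:8s} reward={reward:+.1f}")


for step in range(3):
    attack = model("attacker", "find a jailbreak")  # attacker move
    response = model("defender", attack)            # defender move
    r = judge(attack, response)                     # zero-sum outcome
    update("defender", r)                           # defender maximizes r
    update("attacker", -r)                          # attacker maximizes -r
```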
Biography: Mickel Liu is a second-year PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he is co-advised by Prof. Natasha Jaques and Prof. Luke Zettlemoyer. Previously, he earned his master’s degree in Computer Science from Peking University. His current research focuses on self-improving LLM agents trained on self-generated data and multi-agent training of LLMs.
11:50-12:05: Sample, Don’t Search: Rethinking Test-Time Alignment for Language Models, Gonçalo Faria
Abstract: Increasing test-time computation has emerged as a promising direction for improving language model performance, particularly in scenarios where model finetuning is impractical or impossible due to computational constraints or private model weights. However, existing test-time search methods using a reward model (RM) often degrade in quality as compute scales, due to the over-optimization of what are inherently imperfect reward proxies. We introduce QAlign, a new test-time alignment approach. As we scale test-time compute, QAlign converges to sampling from the optimal aligned distribution for each individual prompt. By adopting recent advances in Markov chain Monte Carlo for text generation, our method enables better-aligned outputs without modifying the underlying model or even requiring logit access. We demonstrate the effectiveness of QAlign on mathematical reasoning benchmarks (GSM8K and GSM-Symbolic) using a task-specific RM, showing consistent improvements over existing test-time compute methods like best-of-n and majority voting. Furthermore, when applied with more realistic RMs trained on the Tulu 3 preference dataset, QAlign outperforms direct preference optimization (DPO), best-of-n, majority voting, and weighted majority voting on a diverse range of datasets (GSM8K, MATH500, IFEval, MMLU-Redux, and TruthfulQA). A practical solution to aligning language models at test time using additional computation without degradation, our approach expands the limits of the capability that can be obtained from off-the-shelf language models without further training.
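As background on the sampling idea, the minimal sketch below runs a Metropolis-Hastings chain whose proposals come from a base model and whose target distribution is the base model tilted by a reward, which is the general flavor of test-time alignment by sampling. The toy base model, reward function, and hyperparameters are placeholders for illustration, not the QAlign implementation.

```python
# Minimal sketch of test-time alignment by sampling (not the QAlign code):
# target distribution ∝ p_base(y|x) * exp(r(y)/beta), sampled with an
# independence Metropolis-Hastings chain whose proposals come from the
# base model itself. The base model and reward below are toy stand-ins.
import math
import random

random.seed(0)


def base_model_sample(prompt: str) -> str:
    """Toy 'LLM': returns a random answer to an arithmetic prompt."""
    return str(random.randint(0, 10))


def reward(prompt: str, answer: str) -> float:
    """Toy reward model: prefers answers close to the true sum."""
    return -abs(int(answer) - (3 + 4))


def aligned_sample(prompt: str, steps: int = 200, beta: float = 0.5) -> str:
    y = base_model_sample(prompt)
    for _ in range(steps):
        y_new = base_model_sample(prompt)  # proposal from the base model
        # With an independence proposal drawn from p_base, the base-model
        # probabilities cancel and only the reward difference remains.
        accept = min(1.0, math.exp((reward(prompt, y_new) - reward(prompt, y)) / beta))
        if random.random() < accept:
            y = y_new
    return y


print(aligned_sample("What is 3 + 4?"))  # the chain concentrates near "7"
```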
Biography: Gonçalo Faria is a PhD student at the University of Washington, advised by Professor Noah Smith. Originally from Portugal, he previously studied at Instituto Superior Técnico in Lisbon, where he worked on Causal Discovery with André Martins and Mário Figueiredo. His current research focuses on creating methods to unlock the structured output space of foundation models, pushing the frontier of their capabilities beyond what standard sampling can achieve.
Educational Research and Outreach (Gates Center, Room 371)
Faculty, students, and staff from the Allen School and the Center for Learning, Computing, & Imagination will present research and outreach efforts they are leading. These efforts involve youth as well as adults, and CS majors as well as non-majors, illustrating the breadth of our efforts to enrich and expand participation in computing education.
11:15-11:20: Introduction and Overview, Benjamin Shapiro, Associate Professor and Associate Director, Allen School, Co-Director, Center for Learning, Computing, & Imagination
11:20-11:35: Scaffolding Youths’ Critical Exploration and Evaluation of AI Systems, Jaemarie Solyst
Abstract: With great recent innovation in AI, youth are surrounded by AI systems. Youth use AI for learning, entertainment, social interaction, creativity, and more. However, AI can often have significant limitations and socio-ethical implications worth considering. How can youth be supported in critically exploring and evaluating AI system behavior, especially in the age of generative AI (GenAI)? In this talk, we share findings from a study in which we investigated how culturally responsive computing theories and intersectional identities can support Black Muslim teen girls in developing critical literacy and creativity with and about GenAI. Specifically, we show how fashion design with GenAI exposes affordances and limitations of current GenAI tools. As the learners used GenAI to create depictions of their fashion collections, they encountered the socio-ethical limitations of AI. Learners used their intersectional identities and funds of knowledge to assess and ideate GenAI behavior. Overall, these hands-on, creativity-based interactions with GenAI and critical discourse supported their development of critical AI literacy.
Biography: Jaemarie Solyst is a postdoc at the University of Washington in the Paul G. Allen School of Computer Science & Engineering and the Center for Learning, Computing, & Imagination. She recently received her PhD from Carnegie Mellon University in the Human-Computer Interaction Institute. Her research is at the intersection of responsible AI, human-computer interaction, and computing education. Specifically, she investigates how young people can be empowered in the age of AI with critical AI literacy and agency to participate in responsible AI processes.
11:35-11:50: Teaching Accessibility in Data Structures and Algorithms, Kevin Lin
Abstract: How might we teach accessibility across the computing curriculum? This short talk introduces redesign as a conceptual frame for teaching accessible design in an undergraduate Data Structures and Algorithms course with the ultimate goal of preparing students to incorporate accessible design practices in the software design and development process. We will also discuss the principles of accessible design pedagogies, investigate their relationship with value-sensitive design pedagogies, and interactively evaluate my approach to teaching accessibility in Data Structures and Algorithms.
Biography: Kevin Lin (he/him) is an Assistant Teaching Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He leads instructional innovation in data programming and data structures with a focus on empowering students to redesign computing problems and artifacts. Kevin received his MS in Computer Science from UC Berkeley, where he coordinated the teaching and delivery of very large-scale undergraduate CS courses to over 1,000 students per semester.
11:50-12:05: An Approach to Support Teens’ Everyday Ethical Thinking About AI, Rotem Landesman, Ph.D. Candidate, The Information School
Abstract: As teens are increasingly interacting with artificially intelligent (AI) tools and platforms, they are confronted with complex ethical dilemmas in their daily lives. Few pedagogical supports and frameworks exist to guide teens in their thinking about everyday ethics, or the ethical aspects and decisions intermingled in everyday life. Providing young people with the tools to practice and explore their everyday ethical thinking skills is important if they are to develop an understanding of ethical issues from a socio-technical perspective and to claim space for agency in their daily lives as these technologies become increasingly integrated into their environments.
This talk will cover some of my work exploring one approach to support teens in their everyday ethical reasoning processes about AI technologies. The suggested approach aims to support teens’ everyday ethical thinking by foregrounding (1) fostering a Community of Philosophical Inquiry (CPI) created around a specific real-world use case relevant to teens, and (2) introducing the nuances of ethical thinking by utilizing tools from the field of Philosophy for Children (P4C). I describe some of my work exploring the merits of this approach by working with several groups of teens, collaboratively writing guidelines for using AI tools in their schools.
Additionally, this talk will share preliminary insights from an ongoing project focused on pitching this approach for fostering teens’ everyday ethical thinking to high school educators in a school setting. After seeing initial promising results from working with teens, my ongoing work with educators aims to explore these pedagogical scaffoldings with teachers and collaboratively create tools and thinking practice for them to bring these nuanced ethical conversations into their classrooms.
Biography: Rotem Landesman is a Ph.D. Candidate in the Information School, exploring young people’s interactions, ethical inclinations, and norms around emerging technologies. Her research brings together perspectives and methodologies from child-computer interaction and computing education to study how adults can create meaningful support systems to cultivate youth’s ethical sensitivities, passions, and wellbeing. Rotem holds a B.A. in Communications from Reichman University and an M.A. in Philosophy and Technology from Tel Aviv University, and is driven by the tenacity and wisdom she sees youth exhibit in this fast-changing technological world.
12:05-12:20: Building Sustainable K-12 Outreach Programming & Partnerships, Chloe Dolese Mandeville, Senior Assistant Director for Student Engagement & Access, and Fernanda Jardim, Senior K-12 Outreach Specialist
Abstract: The Allen School community is deeply committed to broadening participation in computing – especially among K-12 students from communities with limited access to computing education resources. As part of this commitment, the Allen School has a dedicated staff team, called the Student Engagement & Access (SEA) Team, focused on broadening participation efforts. The mission of the Student Engagement & Access Team is to attract and support the next generation of outstanding computer scientists and computer engineers who reflect the population of Washington state and the varying needs, backgrounds, and experiences of technology users around the world. K-12 Outreach is one of three pillars (along with Recruitment and Retention) that support this mission.
Throughout the year, the SEA Team engages middle school and high school students through school visits across Washington, on-campus field trips, and open houses. They also host events & activities during the National CSEd Week celebrations and host the NCWIT Aspirations in Computing Award ceremony for Eastern & Western Washington. Finally, they run a four-week summer program called the Changemakers in Computing Program for high school students to learn about technology, society, and justice.
During this talk, members of the SEA team will highlight the programs they run, share student stories, and provide insights on how they have built meaningful partnerships with communities across the state.
Safe and Reliable Computing Systems (Gates Center, Zillow Commons)
This session highlights new advances in making computing systems provably trustworthy—from DARPA’s TRACTOR effort to automatically translate unsafe C into memory-safe Rust, to cryptographic proofs of execution, side-channel verification, and network correctness. Together, these projects show how Allen School researchers are scaling formal methods to real-world software, hardware, and security challenges.
11:15-11:20: Introduction and Overview, Zachary Tatlock
11:20-11:40: Topic TBD, Michael Ernst and Gilbert Bernstein
11:40-12:00: Topic TBD, Nirvan Tyagi
12:00-12:20: Topic TBD, David Kohlbrenner
Timeslot TBD: Topic TBD, Ratul Mahajan
Lunch + Keynote Talk
Time: 12:25 – 1:25pm
Speaker: Matt Golub
Topic: Causality, Learning, and Communication in the Brain
Location: Microsoft Atrium in the Allen Center
Abstract: The Systems Neuroscience and AI Lab (SNAIL) develops and applies machine learning techniques to understand how our brains drive our capacities to sense, feel, think, and act. We design computational models and analytical tools to reverse-engineer neural computation, drawing on large-scale recordings of spiking activity from populations of hundreds or thousands of neurons in the brain. In this talk, I will highlight recent and ongoing SNAIL efforts that view large neural populations as dynamical systems whose structure and activity evolve lawfully over time. I will begin by describing how this perspective has enabled us to efficiently identify causal structure in biological neural networks, which is revealed through neural responses to external perturbations that are algorithmically chosen to be maximally informative. I will then discuss deep learning approaches to dynamical systems modeling that allow us to uncover how neural population activity changes during learning and how multiple brain regions communicate with each other to support distributed computation. Together, these advances are helping to elucidate fundamental principles by which the brain organizes its dynamics to support flexible and intelligent behavior.

Biography: Matt Golub joined the University of Washington in 2022, where he is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering. Matt directs the UW Systems Neuroscience & AI Lab (SNAIL, aka Golub Lab), which focuses on research at the intersection of neuroscience, neuroengineering, machine learning, and data science. Projects in the lab design computational models, algorithms, and experiments to investigate how single-trial neural population activity drives our abilities to generate movements, make decisions, and learn from experience. Outside of research, Matt teaches undergraduate and graduate courses on machine learning and its application to neuroscience and neuroengineering. Previously, Matt was a Postdoctoral Fellow at Stanford University, where he was advised by Krishna Shenoy, Bill Newsome, and David Sussillo. His postdoctoral work advanced deep learning and dynamical systems techniques for understanding how neural population activity supports decision-making, learning, and flexible computation. This work was recognized by a K99/R00 Pathway to Independence Award from the National Institutes of Health. Matt completed his PhD at Carnegie Mellon University, where he was advised by Byron Yu and Steve Chase. There, Matt established brain-computer interfaces as a scientific paradigm for investigating the neural bases of learning and feedback motor control. His PhD dissertation received the Best Thesis Award from the Department of Electrical & Computer Engineering.
Session II
Time: 1:30 – 2:35pm (*times listed may change)
AI for Biology and Health (Gates Center, Room 271)
The AI for Biology and Health session highlights breakthroughs at the intersection of AI and life sciences, showcasing how computational innovations are transforming our understanding of living systems. Talks range from nanopore-based single-protein sequencing and cross-modal learning for drug discovery and cellular perturbation mapping, to programmable DNA diagnostics for HIV therapy monitoring and neural network modeling of learning in the motor cortex.
1:30-1:35: Introduction and Overview, Su-In Lee
1:35-1:50: Single-molecule Nanopore Reading of Long Protein Strands, Daphne Kontogiorgos-Heintz
Abstract and biography coming soon.
1:50-2:05: CellCLIP – Learning Perturbation Effects in Cell Painting via Text-Guided Contrastive Learning, Mingyu Lu
Abstract: High-content screening (HCS) assays based on high-throughput microscopy techniques such as Cell Painting have enabled the interrogation of cells’ morphological responses to perturbations at an unprecedented scale. The collection of such data promises to facilitate a better understanding of the relationships between different perturbations and their effects on cellular state. Towards achieving this goal, recent advances in cross-modal contrastive learning could, in theory, be leveraged to learn a unified latent space that aligns perturbations with their corresponding morphological effects. However, the application of such methods to HCS data is not straightforward due to substantial differences in the semantics of Cell Painting images compared to natural images, and the difficulty of representing different classes of perturbations (e.g., small molecule vs CRISPR gene knockout) in a single latent space. In response to these challenges, here we introduce CellCLIP, a cross-modal contrastive learning framework for HCS data. CellCLIP leverages pre-trained image encoders coupled with a novel channel encoding scheme to better capture relationships between different microscopy channels in image embeddings, along with natural language encoders for representing perturbations. Our framework outperforms current open-source models, demonstrating the best performance in both cross-modal retrieval and biologically meaningful downstream tasks while also achieving significant reductions in computation time.
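For context, the snippet below shows a generic CLIP-style symmetric contrastive (InfoNCE) objective of the kind such cross-modal frameworks build on. The random embeddings, batch size, and temperature are stand-ins for illustration, not CellCLIP’s actual encoders, channel-encoding scheme, or training setup.

```python
# Generic CLIP-style cross-modal contrastive (InfoNCE) loss in NumPy,
# shown only to illustrate the kind of objective cross-modal frameworks
# like CellCLIP build on; the embeddings and settings are placeholders.
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 8, 32

# Stand-ins for embeddings of Cell Painting images and perturbation text.
img = rng.normal(size=(batch, dim))
txt = rng.normal(size=(batch, dim))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

temperature = 0.07
logits = img @ txt.T / temperature   # pairwise image-text similarities
labels = np.arange(batch)            # i-th image matches i-th text


def cross_entropy(logits: np.ndarray, labels: np.ndarray) -> float:
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())


# Symmetric loss over image->text and text->image retrieval directions.
loss = 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
print(f"contrastive loss: {loss:.3f}")
```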
Biography: I am a 4th-year computer science & engineering (CSE) Ph.D. student at the University of Washington, advised by Su-In Lee in the Artificial Intelligence for Biological and Medical Sciences (AIMS) group. My research focuses on the intersection of explainable AI, particularly feature and data attribution, generative models, and treatment effect estimation. I apply these methods to improve model transparency, fairness, and safety, aiming to enhance understanding and decision-making in complex real-world settings, particularly in the biomedical domain. Previously, I worked with Li-wei Lehman, Zach Shahn, and Finale Doshi-Velez at Harvard and MIT. I have spent summers interning at research labs in academia and industry, including the Laboratory for Computational Physiology at MIT and Bosch Research. I earned my MD from Kaohsiung Medical University and completed a master’s degree in Biomedical Informatics at Harvard Medical School’s Department of Biomedical Informatics.
2:05-2:20: Fast and Slow Population-level Mechanisms of Learning in the Motor Cortex, Jacob Sacks
Abstract: How do neural populations adjust activity during learning to improve behavior? Neural population activity is governed by external inputs and local recurrent dynamics. Thus, modifying inputs and local dynamics constitute two key population-level learning mechanisms. To understand how they work together to modify population activity, we studied the primary motor cortex (M1) of rhesus macaques during brain-computer interface learning experiments. We built recurrent neural network models of M1 that accurately recapitulated before-learning population activity. Then, we asked whether external input changes could sufficiently account for after-learning population activity, or if local changes were required. Within a day, we found learning is primarily input-driven, which is limited in its ability to restructure M1 activity. Between days, while the monkeys are task-disengaged, we found local changes do emerge. Altogether, these results support a two-staged learning process separated by distinct timescales, explaining constraints on rapid learning and how they are overcome over days.
Biography: I’m a postdoctoral researcher in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Dr. Matthew Golub. I’m also a fellow in the NIH T32 Theoretical and Computational Approaches to Neural Circuits of Cognition training program through the University of Washington Computational Neuroscience Center. Prior to my postdoctoral studies, I did my Ph.D. in robotics and machine learning with Dr. Byron Boots, drawing from deep learning, optimal control, and reinforcement learning. Prior to transferring to the University of Washington, I was a Ph.D. student at Georgia Tech, from which I received my M.S. degree in Electrical and Computer Engineering. Before that, I received my B.S. in Biomedical Engineering at the University of Texas at Austin.
2:20-2:35: Leveraging Molecular Programming for Smart Diagnostics, Zoe Derauf
Abstract: Adherence to antiretroviral therapy (ART) is essential for HIV treatment and prevention, yet current monitoring methods rely on centralized, instrument-intensive assays that are impractical for point-of-care use. We introduce EMERALD, an isothermal diagnostic platform that integrates reverse transcriptase (RT) inhibition with DNA strand displacement (DSD) to quantify intracellular nucleoside reverse transcriptase inhibitors (NRTIs), key pharmacologic markers of adherence. EMERALD employs a staged assay design that captures drug incorporation events, even in the presence of endogenous dNTPs, translating them into programmable fluorescence or colorimetric outputs in under 15 minutes. The system achieves nanomolar sensitivity, tunable dynamic range, and multiplexed detection of multiple NRTIs in a single reaction.
Beyond adherence monitoring, EMERALD serves as a rapid, high-throughput assay for screening new RT inhibitors and can be adapted for non-nucleoside (NNRTI) and nucleotide reverse transcriptase translocation inhibitor (NRTTI) drugs, as well as other RT or polymerase-targeting treatments. EMERALD demonstrates the potential of dynamic DNA nanotechnology to enhance molecular diagnostics, offering a versatile platform for therapeutic drug monitoring and drug development in diverse clinical and research settings.
Biography coming soon.
Robotics (Gates Center, Room 371)
This session highlights three exciting projects in robotics, including: a new type of suction cup that has the potential to drastically improve the robustness of object handling in warehouse automation, a technique that enables robots to quickly learn new tasks from simple language instructions, and an evaluation framework that provides detailed, meaningful insights into the performance of robot manipulation models.
1:30-1:35: Introduction and Overview, Dieter Fox
1:35-1:55: Smart Suction Cups with Capacitive Tactile Sensing, Alexander Choi
Abstract: In warehouse automation and industrial handling, suction cups are a simple and versatile gripping solution, but they can fail when alignment or contact quality can’t be verified. Vision systems can help, but are frequently occluded by the gripper itself or lack the precision needed for fine alignment. This talk introduces a smart suction cup that uses mutual capacitive sensing to remotely detect deformations in a standard conductive suction cup. By placing multiple electrodes on a sensing board beneath the cup, the system measures changes in mutual capacitance that reveal how the cup bends and contacts a surface. This enables robots to infer contact quality, detect misalignment, and monitor payload dynamics (such as swaying) during motion. Early prototypes show strong signal fidelity and clear sensitivity to contact geometry, suggesting a practical path toward more reliable, self-correcting suction gripping in real-world settings.
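To make the sensing idea concrete, the short sketch below shows one simple way per-electrode capacitance changes could be turned into contact-quality and tilt estimates. The electrode layout, baseline values, and readings are hypothetical placeholders, not the prototype’s actual electrode geometry or firmware.

```python
# Illustrative sketch (not the actual device firmware): infer suction-cup
# contact quality and tilt from per-electrode mutual-capacitance readings
# by comparing each electrode against a no-contact baseline.
import numpy as np

# Baseline and current readings for 4 electrodes placed N, E, S, W under
# the cup (units arbitrary; the numbers are made up for illustration).
baseline = np.array([100.0, 100.0, 100.0, 100.0])
current = np.array([92.0, 99.0, 104.0, 100.0])   # cup bent toward "N"

delta = current - baseline                        # deformation signal
contact_quality = float(np.abs(delta).mean())     # overall seal deformation

# A north/south or east/west imbalance suggests the cup is misaligned.
tilt_ns = float(delta[0] - delta[2])   # north vs. south electrodes
tilt_ew = float(delta[1] - delta[3])   # east vs. west electrodes

print(f"contact quality: {contact_quality:.1f}")
print(f"tilt estimate (N-S, E-W): ({tilt_ns:+.1f}, {tilt_ew:+.1f})")
```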
Biography: Alex is a Ph.D. student in the University of Washington ECE department, working in the Sensor Systems Lab under Joshua Smith. He previously worked as an applied scientist, developing sensing system hardware and algorithms for Amazon Go. His research focuses on robotic sensing and control, with interests in optimal control and manipulation.
1:55-2:15: ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations, Jesse Zhang
Abstract: We present ReWiND, a new framework that allows robots to learn manipulation skills directly from language instructions—without needing separate demonstrations for each task. Traditional robot learning methods depend on human-provided examples or hand-crafted reward functions, which limit scalability. ReWiND instead learns from a small set of initial demonstrations to build two key components: (1) a model that can interpret language commands and evaluate a robot’s progress, and (2) a general-purpose policy that learns from this feedback. When facing new tasks, ReWiND adapts the policy quickly using its learned reward model, needing only minimal additional data. In experiments, ReWiND generalizes to new tasks up to 2.4× better than existing methods and adapts 2–5× faster in both simulation and real-world tests. This brings us closer to robots that can flexibly learn new tasks from simple language instructions.
Biography: Jesse Zhang is a postdoc in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Professors Dieter Fox and Abhishek Gupta. His research focuses on enabling robots that can autonomously learn new tasks in new environments. He obtained his Ph.D. from the University of Southern California, where he was advised by Professors Erdem Bıyık, Joseph Lim, and Jesse Thomason.
2:15-2:35: RoboEval: Where Robotic Manipulation Meets Structured and Scalable Evaluation, Helen Wang
Abstract: As robotic systems grow more capable, how we measure their performance becomes increasingly important. Many policies can now complete manipulation tasks successfully, yet they often differ in how smoothly, precisely, or coordinately they move. Capturing these differences requires evaluation methods that look beyond simple task success. In this talk, I will introduce RoboEval, a structured evaluation framework and benchmark for robotic manipulation that integrates principled behavioral and outcome metrics. RoboEval includes eight bimanual tasks with systematically controlled variations, more than three thousand expert demonstrations, and a modular simulation platform for reproducible experimentation. Each task is instrumented with standardized metrics that quantify fluency, precision, and coordination, along with outcome measures that trace stagewise progress and reveal characteristic failure modes. Through extensive experiments with state-of-the-art visuomotor policies, we demonstrate how RoboEval enables richer, more interpretable comparisons across models and tasks. By combining structure, scalability, and behavioral insight, RoboEval takes a step toward more meaningful evaluation of robotic manipulation.
Biography: Yi Ru (Helen) Wang is a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Professors Siddhartha Srinivasa and Dieter Fox. Her research focuses on evaluation, representation, and reasoning for robotic manipulation, developing frameworks to better assess and enhance robot behavior. She is a recipient of the NSERC Postgraduate Scholarship – Doctoral (PGSD) and the CRA Outstanding Undergraduate Research Award (Honourable Mention). Prior to her Ph.D., she earned her BASc in Engineering Science (Robotics minor in AI) from the University of Toronto.
Mobile and Augmented Intelligence (Gates Center, Zillow Commons)
In a world where machines can see and speak, the next frontier is augmenting humans with AI to give them superhuman capabilities. In this session, we will describe projects that push the boundaries between science fiction and reality to build augmented AI that enhances human senses, creativity, and intelligence.
1:30-1:35: Introduction and Overview, Shyam Gollakota
1:35-1:50: Collaborative and Proactive Conversational Agents for Open-Ended Dialogue, Guilin Hu
Abstract: We introduce proactive hearing assistants that automatically identify and separate the wearer’s conversation partners, without requiring explicit prompts. Our system operates on egocentric binaural audio and uses the wearer’s self-speech as an anchor, leveraging turn-taking behavior and dialogue dynamics to infer conversational partners and suppress others. To enable real-time, on-device operation, we propose a dual-model architecture: a lightweight streaming model runs every 12.5 ms for low-latency extraction of the conversation partners, while a slower model runs less frequently to capture longer-range conversational dynamics. Results on real-world 2- and 3-speaker conversation test sets, collected with binaural egocentric hardware from 11 participants totaling 6.8 hours, show generalization in identifying and isolating conversational partners in multi-conversation settings. Our work marks a step toward hearing assistants that adapt proactively to conversational dynamics and engagement.
Biography: I’m passionate about audio and speech technologies, building models that help computers truly understand sound and interact with humans naturally. Currently, I am interested in human auditory perception and manipulation, developing real-time, streaming audio models that extend our hearing capability and enable intelligent control of our acoustic environments. I’m always open to collaboration, discussion, and opportunities to build towards this auditory future together. I can be reached at guilinhu@cs.washington.edu.
1:50-2:05: TBD
2:05-2:20: TBD
2:20-2:35: TBD
Session III
Time: 2:40 – 3:45pm (*times listed may change)
Sensing and AI for Personal Health (Gates Center, Room 271)
In this session, presenters will show how sensors and AI, embedded in smartphones, wearables, and bespoke objects, can support individuals in their personal health goals. We will see how these practical approaches are making an impact in disease screening, women’s health, and accessibility.
2:40-2:45: Introduction and Overview, Richard Li
2:45-3:00: DopFone: Doppler Based Fetal Heart Estimation Using Commodity Smartphones, Poojita Garg
Abstract: Fetal heart rate (FHR) monitoring is critical for prenatal care, yet current methods rely on expensive, specialized equipment and trained personnel, limiting accessibility in low-resource and at-home settings. We present DopFone, a novel approach leveraging the built-in speaker and microphone of commodity smartphones to non-invasively estimate FHR via Doppler-based audio sensing. Our system emits an 18 kHz near-ultrasonic tone from the smartphone speaker and analyzes reflected signals recorded by the microphone to detect abdominal surface vibrations caused by fetal cardiac activity. Combining Doppler sensing with an AdaBoost regression model (validated via Leave-One-Out Cross-Validation on 23 pregnant participants), DopFone achieved a mean absolute error of 2.1±1.3 BPM compared to a reference-standard medical Doppler device. The 95% limits of agreement (±4.90 BPM) fall well within the clinically acceptable threshold of ±8 BPM. The system demonstrated robustness across gestational ages (19-39 weeks), maternal BMI (23-67 kg/m2), and variations in phone positioning. Our results establish that smartphones can deliver clinically reliable FHR estimation without external hardware, gel, or probes, bridging the gap between clinical monitoring and accessible at-home assessment.
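As a rough illustration of the kind of signal chain described above, the sketch below simulates a reflected 18 kHz tone, demodulates it, and reads off a dominant low-frequency rate. The simulation parameters, filtering choices, and rate-estimation step are simplified assumptions for illustration, not DopFone’s actual pipeline (which uses an AdaBoost regression model on real recordings).

```python
# Simplified, simulated sketch of Doppler-style audio sensing for a
# periodic low-frequency vibration; NOT the DopFone processing pipeline.
import numpy as np

fs = 48_000            # typical smartphone audio sample rate (Hz)
f_c = 18_000           # emitted near-ultrasonic tone (Hz)
f_heart = 2.3          # simulated fetal heart rate: 2.3 Hz ≈ 138 BPM
t = np.arange(0, 10, 1 / fs)

# Simulated reflection: tiny periodic phase shift from abdominal motion.
rx = np.cos(2 * np.pi * f_c * t + 0.05 * np.sin(2 * np.pi * f_heart * t))

# Quadrature (IQ) demodulation against the emitted carrier.
baseband = rx * np.exp(-2j * np.pi * f_c * t)

# Crude low-pass filter: moving average over ~5 ms windows.
win = int(0.005 * fs)
baseband = np.convolve(baseband, np.ones(win) / win, mode="same")

# The phase of the baseband signal tracks the surface vibration.
phase = np.unwrap(np.angle(baseband))
phase -= phase.mean()

# Dominant frequency of the phase signal -> heart rate estimate.
spectrum = np.abs(np.fft.rfft(phase))
freqs = np.fft.rfftfreq(len(phase), 1 / fs)
band = (freqs > 1.0) & (freqs < 3.5)          # plausible FHR range
f_est = freqs[band][np.argmax(spectrum[band])]
print(f"estimated rate: {f_est * 60:.0f} BPM")
```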
Biography: Poojita Garg is a 2nd-year PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, co-advised by Shwetak Patel and Vikram Iyer. Her research is situated at the intersection of ubiquitous computing and health sensing systems. Her work includes transforming off-the-shelf smartphones into a fetal Doppler monitor that functions without any external hardware, making crucial fetal health monitoring more widely available.
3:00-3:15: Designing Assistive Technologies for Improving Health Outcomes and Behaviors, Jerry Cao
Abstract: Many healthcare technologies are designed for the general demographic. While there have been some efforts to create assistive technologies that better support individuals with various disabilities, there is still room for improvement. This talk will highlight various efforts to personalize assistive technologies for improving health outcomes and behaviors, discussing two case studies on eyedrop aids and mobility aids. To improve the accessibility of eyedrop administration, we conducted a co-design study with approximately 10 participants to explore ways to enhance the experience. We produced a prototype that enables the automatic dispensing of eyedrops and utilizes computer vision to assist with aiming. To enhance the functionality and interactivity of mobility aids, we investigated the integration of various 3D-printed designs and sensors that interface with people’s phones through a participatory design study involving approximately 20 participants. Through these case studies, we demonstrate how participatory design yields novel insights into healthcare tools and how fabrication technologies can facilitate the level of personalization desired by end-users.
Biography: Jerry Cao is a 4th-year PhD student advised by Jennifer Mankoff and Shwetak Patel. He does research within the domains of health, accessibility, and digital fabrication. His work includes improving the mechanical strength of 3D-printed objects through simulation and optimization, creating innovative interactive systems utilizing 3D printing, and designing novel prototypes to enhance the accessibility of various healthcare tools/devices, such as eyedrop aids and mobility aids. Jerry is a NSF GRFP fellow and has received numerous accolades, including being a Goldwater Scholar nominee and receiving a Best Paper Honorable Mention at CHI 2025.
3:15-3:30: Engagements with Generative AI and Personal Health Informatics: Opportunities for Planning, Tracking, Reflecting, and Acting around Personal Health Data, Shaan Chopra
Abstract: Personal informatics processes require navigating distinct challenges across stages of tracking, but the range of data, goals, expertise, and context that individuals bring to self-tracking often presents barriers that undermine those processes. We investigate the potential of Generative AI (GAI) to support people across stages of pursuing self-tracking for health. We conducted interview and observation sessions with 19 participants from the United States who self-track for health, examining how they interact with GAI around their personal health data. Participants formulated and refined queries, reflected on recommendations, and abandoned queries that did not meet their needs and health goals. They further identified opportunities for GAI support across stages of self-tracking, including in deciding what data to track and how, in defining and modifying tracking plans, and in interpreting data-driven insights. We discuss GAI opportunities in accounting for a range of health goals, in providing support for self-tracking processes across planning, reflection, and action, and in consideration of limitations of embedding GAI in health self-tracking tools.
Biography: Shaan Chopra is a 5th-year PhD student in Computer Science & Engineering, advised by James Fogarty and Sean Munson. She conducts research at the intersection of human-computer interaction (HCI) and health to create inclusive technologies that help people better understand, experiment, and make decisions based on personal health data. She has studied AI in personal health informatics, needs in marginalized and stigmatized health settings, and community perspectives on health technologies. She has also interned with Merck and Parkview Health (e.g., on using multimodal patient data in clinics to support clinical understanding of enigmatic conditions such as long-COVID).
3:30-3:45: Foundation Models for Wearable Health, Girish Narayanswamy
Technologies that Sustain People and the Planet (Gates Center, Room 371)
2:40-2:45: Introduction and Overview, Vikram Iyer
2:45-3:00: Living Sustainability: In-Context Interactive Environmental Impact Communication, Alexander Le Metzger
Abstract: Climate change demands urgent action, yet understanding the environmental impact (EI) of everyday objects and activities remains challenging for the general public. While Life Cycle Assessment (LCA) offers a comprehensive framework for EI analysis, its traditional implementation requires extensive domain expertise, structured input data, and significant time investment, creating barriers for non-experts seeking real-time sustainability insights. Here we present the first autonomous sustainability assessment tool that bridges this gap by transforming unstructured natural language descriptions into in-context, interactive EI visualizations. Our approach combines language modeling and AI agents, and achieves >97% accuracy in transforming natural language into a data abstraction designed for simplified LCA modeling. The system employs a non-parametric datastore to integrate proprietary LCA databases while maintaining data source attribution and allowing personalized source management. We demonstrate through case studies that our system achieves results within 11% of traditional full LCA, while accelerating from hours of expert time to real-time. We conducted a formative elicitation study (N=6) to inform the design objectives of such EI communication augmentation tools. We implemented and deployed the tool as a Chromium browser extension and further evaluated it through a user study (N=12). This work represents a significant step toward democratizing access to environmental impact information for the general public with zero LCA expertise.
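To illustrate what a data abstraction for simplified LCA modeling might look like, the sketch below maps structured activities to per-unit emission factors from a small datastore with source attribution. The schema, activity names, and factor values are hypothetical placeholders written for this page, not the Living Sustainability tool’s actual design or database.

```python
# Hypothetical sketch of a simplified LCA data abstraction: parsed
# activity descriptions are mapped to per-unit emission factors from a
# datastore with source attribution. All names and numbers below are
# illustrative placeholders, not the tool's real schema or data.
from dataclasses import dataclass


@dataclass
class Activity:
    name: str
    quantity: float
    unit: str


# Toy emission-factor datastore: (kg CO2e per unit, source label).
DATASTORE = {
    ("coffee", "cup"): (0.28, "example-lca-db"),
    ("car travel", "km"): (0.19, "example-lca-db"),
}


def estimate_footprint(activities: list[Activity]) -> float:
    """Sum per-unit emission factors over the parsed activities."""
    total = 0.0
    for a in activities:
        factor, source = DATASTORE[(a.name, a.unit)]
        total += factor * a.quantity
        print(f"{a.quantity:g} {a.unit} of {a.name}: "
              f"{factor * a.quantity:.2f} kg CO2e (source: {source})")
    return total


# In a full system, a language model would produce this structured form
# from free text such as "I drove 12 km and had two cups of coffee."
parsed = [Activity("car travel", 12, "km"), Activity("coffee", 2, "cup")]
print(f"total: {estimate_footprint(parsed):.2f} kg CO2e")
```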
Biography: Alex is a CS master’s student at UW, working at the Ubicomp Lab on embedded ML and speech technology. His research has been published in top mathematical journals and premier CS venues (CHI, IMWUT). He has been recognized with the CRA Outstanding Undergraduate Researcher award and is currently applying for Ph.D. positions.
3:00-3:15: The Problem of the Alaskan Snow Crab: Building Species-agnostic Tools for Behavior Analysis, Moses Lurbur
Abstract: Understanding animal behavior plays a crucial role in developing therapeutic treatments, mitigating the impact of human intervention in the natural environment, and improving agricultural efficiency. Currently, our ability to automatically quantify, detect, and analyze animal behavior from videos is constrained by brittle, class- and domain-specific computer vision models.
Motivated by a unique environmental challenge – saving the Alaskan snow crab fishery from collapse – we are building tools for automated, species-agnostic behavior quantification and analysis. Using the latest foundation models for object detection, segmentation, pixel tracking, and image encoding, we are developing methods to detect, quantify, and classify animal behavior across domains, from fisheries science to medicine, with minimal manual annotation.
Building useful scientific tools starts with a deep understanding of the applicable domains, current limitations, scientific questions, available data, and desired impact. In our work, we have taken a bottom-up approach to tool development, closely collaborating with industry and research partners to motivate our work, build custom datasets, and inform the development of our tools.
Biography: Moses is a Ph.D. student at UW, whose research focuses on using computer vision and other machine learning techniques to improve our ability to collect and analyze data in the natural world. Prior to beginning his Ph.D., he worked at NOAA and Amazon.
3:15-3:30: On the Promise and Pitfalls of On-Device LLMs for Conservation Field Work, Cynthia Dong, ICTD
Abstract: At the heart of conservation are the field staff who study and monitor ecosystems. Recent advances in AI models raise the question of whether LLM assistants could improve the experience of data collection for these staff. However, on-device AI deployment for conservation field work poses significant challenges and is understudied. To address this gap, we conducted semi-structured interviews, surveys, and participant observation with partner conservancies in the Pacific Northwest and Namibia, identifying elements central to field-deployable conservation technology while also better understanding the field work context. To critically analyze how on-device AI would affect field work, we employ speculative methods through the frame of an on-device transcription-language model pipeline, which we built atop EarthRanger, a widely-used, open-source conservation platform. Our findings suggest that although on-device LLMs hold some promise for field work, the infrastructure required by current on-device models clashes with the reality of resource-limited conservation settings.
Biography: Cynthia is a PhD student at UW, focused on designing and deploying systems in low-resource environments, in domains ranging from conservation to global health.
3:30-3:45: TBD
Human-AI Interaction in Health (Gates Center, Zillow Commons)
This session investigates opportunities and challenges for human-AI interaction within personal health contexts. Talks will discuss the potential of AI for personalized coaching, challenges and solutions associated with trustworthy data integration and interpretation, and psychosocial safety considerations to ensure responsible development and use of this technology.
2:40-2:45: Introduction and Overview, Tim Althoff
2:45-3:00: Collaborative and Proactive Conversational Agents for Open-Ended Dialogue, Vidya Srinivas
Abstract: As large language models (LLMs) become increasingly embedded in interactive applications, evaluation has largely centered on task performance, typically measured via benchmark datasets, expert annotations, or automated model-based raters. While valuable, these approaches assess conversations from a detached, third-party perspective, often overlooking a critical dimension of real-world deployment: how interaction style shapes user engagement and the quality of contextual input. Without meaningful engagement, users are unlikely to remain with a conversational system long enough for it to have a real impact. In this work, we argue that effective LLM-driven dialogue systems must do more than generate accurate or fluent responses—they must also elicit richer user input through natural, collaborative conversational strategies. To support this, we introduce an explicit control-flow framework in which LLMs think before they speak, crafting responses in a latency-aware manner. We design and evaluate five distinct dialogue styles informed by expert design principles, and assess them using a multi-dimensional framework that includes expert judgments, automated metrics, and first-person user feedback. Our findings show that collaborative conversational agents not only increase user engagement but also help gather higher-quality contextual input, ultimately improving both user satisfaction and task outcomes.
Biography: Vidya Srinivas is a third-year Ph.D. student at the Paul G. Allen School advised by Shwetak Patel. She is a member of the Ubiquitous Computing Lab, where her research spans deep learning, conversational AI, and AI for health. Before joining the University of Washington, Vidya obtained her bachelor’s degree in computer engineering from the University of Michigan. Her research interests lie in creating human-centered systems and context-aware systems that can run within the compute and memory constraints of wearable and resource-constrained devices.
3:15-3:30: Topic TBD, Ken Gu
3:30-3:45: Topic TBD, Deniz Nazarova
Session IV
Time: 3:50 – 4:55pm (*times listed may change)
Deep Industry Collaborations (Gates Center, Room 271)
This session highlights the rich collaborations between the Allen School and industry partners, showcasing the depth and impact of these partnerships. To illustrate the breadth of engagement, we plan to feature talks from a startup, a research initiative, and a large company as representative examples.
3:50-3:55: Introduction and Overview, Shwetak Patel and Joshua Smith
3:55-4:10: AI2/OLMo Example
4:10-4:25: Proprio: The New Way of Seeing, Gabriel Jones
Abstract coming soon.
Biography: Gabriel Jones is the CEO of Proprio, which he co-founded in 2016 with Seattle Children’s Hospital neurosurgeon Dr. Samuel Browd, University of Washington Professor Joshua Smith, and computer vision specialist James Youngquist. The team set out to create a system that makes surgeons more precise and exponentially increases surgical accuracy and efficiency.
Jones is a well-known and experienced technology leader with a passion for helping others and bringing innovation to life. Jones has more than a decade of leadership experience in emerging technology and intellectual property law, with specific expertise in AI and healthcare. He began his career in international trade in Japan and Washington, DC, and worked in large-scale mergers and acquisitions on Wall Street. Prior to founding Proprio, Jones helped clients like Bill Gates and the leadership of Microsoft evaluate and develop emerging technologies while working with the Bill & Melinda Gates Foundation on global initiatives in health and technology. Seeking to create a more direct, positive societal impact, his work led him to technology ventures. He has assisted several successful startups on their commercialization path, from biotech and med tech, to AI and prop tech. Gabe has held seats on several boards including the Washington Biotech and Biomedical Association and the Group Health Foundation through its acquisition by Kaiser.
4:25-4:40: Foundation Models for Wearable Health, Girish Narayanswamy
Abstract: Wearable sensors have become ubiquitous thanks to a variety of health tracking features. The resulting continuous and longitudinal measurements from everyday life generate large volumes of data; however, making sense of these observations for scientific and actionable insights is non-trivial. Inspired by the empirical success of generative modeling, where large neural networks learn powerful representations from vast amounts of text, image, video, or audio data, we investigate the development of sensor foundation models for wearable health data. Using a dataset of up to 40 million hours of in-situ pulse, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute sensor data from over 165,000 people, we create LSM (Large Sensor Model), a multimodal foundation model built on the largest wearable-signals dataset with the most extensive range of sensor modalities to date. This work, a collaboration between University of Washington and Google researchers, establishes the utility of such generalist models for wearable health and explores applications ranging from activity recognition and hypertension risk to insulin resistance and health forecasting.
Biography: Girish is a final-year PhD student at the University of Washington Ubiquitous Computing Lab, where he is advised by Professor Shwetak Patel. Girish’s research focuses on novel machine learning (ML/AI) and time-series modeling methodologies. He is particularly interested in applications that improve health sensing and expand health access.
4:40-4:55: TBD
Graphics and Vision (Gates Center, Room 371)
This session will showcase four projects in computer graphics and computer vision from multiple lab groups in the Allen School. Topics will vary from gigapixel image generation to solving programmatic challenges in machine knitting.
3:50-3:55: Introduction and Overview, Brian Curless
3:55-4:07: Video Tokenization via Panoptic Sub Object Trajectories, Chenhao Zheng
Abstract: Current approaches tokenize videos using space-time patches, leading to excessive tokens and computational inefficiencies. The best token reduction strategies degrade performance and barely reduce the number of tokens when the camera moves. We introduce grounded video tokenization, a paradigm that organizes tokens based on panoptic sub-object trajectories rather than fixed patches. Our method aligns with fundamental perceptual principles, ensuring that tokenization reflects scene complexity rather than video duration. We propose TrajViT, a video encoder that extracts object trajectories and converts them into semantically meaningful tokens, significantly reducing redundancy while maintaining temporal coherence. Trained with contrastive learning, TrajViT significantly outperforms space-time ViT (ViT3D) across multiple video understanding benchmarks; for example, TrajViT outperforms ViT3D by a large margin of 6% top-5 recall on average on the video-text retrieval task with a 10x token reduction. We also show TrajViT is a stronger video encoder than ViT3D for modern VideoLLMs, obtaining an average of 5.2% performance improvement across 6 VideoQA benchmarks while having 4x faster training time and 18x fewer inference FLOPs. TrajViT is the first efficient encoder to consistently outperform ViT3D across diverse video analysis tasks, making it a robust and scalable solution.
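As a toy illustration of the difference between patch-based and trajectory-based tokenization, the snippet below pools patch features along object-trajectory ids so the token count tracks scene content rather than video length. The random features and trajectory labels are stand-ins for illustration, not TrajViT’s actual encoder or tracker.

```python
# Toy illustration of trajectory-based video tokenization (not TrajViT):
# instead of one token per space-time patch, pool patch features along
# each object trajectory so token count scales with scene content.
import numpy as np

rng = np.random.default_rng(0)
frames, patches_per_frame, dim = 16, 196, 64
num_objs = 5

# Per-patch features and a per-patch trajectory id (0..num_objs-1), e.g.
# from an off-the-shelf detector/tracker. Both are random stand-ins here.
feats = rng.normal(size=(frames, patches_per_frame, dim))
traj_id = rng.integers(0, num_objs, size=(frames, patches_per_frame))

# Standard space-time tokenization: one token per patch.
spacetime_tokens = feats.reshape(-1, dim)        # 16 * 196 = 3136 tokens

# Trajectory tokenization: mean-pool all patches sharing a trajectory id.
flat_feats = feats.reshape(-1, dim)
flat_ids = traj_id.reshape(-1)
traj_tokens = np.stack([
    flat_feats[flat_ids == k].mean(axis=0) for k in range(num_objs)
])                                               # 5 tokens, one per object

print(spacetime_tokens.shape, "->", traj_tokens.shape)
```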
Biography: Chenhao is a second-year CS Ph.D. student at the University of Washington, advised by Prof. Ranjay Krishna. He is also a student researcher at the Allen Institute for AI. Before that, he was an undergraduate student at the University of Michigan and Shanghai Jiao Tong University, where he worked with Prof. Andrew Owens and Dr. Adam Harley. He works on computer vision, with a particular interest in videos (video understanding, video->3d, video->robotics) and large vision-language models.
4:07-4:19: Untangling Machine Knitted Objects, Nat Hurtig
Abstract: Machine knitting is a staple technology of modern textile production where hundreds of mechanical needles are manipulated to form yarn into interlocking loop structures. Manually programming these machines to form knitted objects can be difficult and error-prone, and compilers for high-level tools are limited. We define formal semantics for machine knitting programs using category theory. We present an algorithm that canonicalizes the knitted objects that programs make, untangling topological equivalence in polynomial time.
Biography: Nat is a 2nd year PhD student advised by Gilbert Bernstein and Adriana Schulz, working on machine knitting and computer-aided design. His focus is on adapting ideas and tools from programming languages research to problems in design and fabrication.
4:19-4:31: HoloGarment: 360-Degree Novel View Synthesis of In-the-Wild Garments, Johanna Karras
Abstract: HoloGarment (Hologram-Garment) is a method for generating canonical, static 360-degree garment visualizations from 1-3 images or even a dynamic video of a person wearing a garment. This is a challenging task, due to (1) frequent occlusions, non-rigid deformations, and significant pose variations present in real-world garments, and (2) the scarcity of ground-truth paired image-3D data. Our key insight is to bridge the domain gap between real and synthetic data with a novel implicit training paradigm leveraging a combination of large-scale real video data and small-scale synthetic 3D data to optimize a shared garment embedding space. During inference, the shared embedding space further enables the construction of a garment “atlas” representation by finetuning a garment embedding on a specific real-world video. The atlas captures garment-specific geometry and texture across all viewpoints, independent of body pose or motion. Extensive experiments show that HoloGarment achieves state-of-the-art performance on NVS of in-the-wild garments from images and videos, even in the presence of challenging real-world artifacts.
Biography: Johanna is a fifth-year PhD student in the Graphics and Imaging Lab (GRAIL) at UW, co-advised by Prof. Ira Kemelmacher-Shlizerman and Prof. Steve Seitz. During her PhD, she has focused on generative AI for image and video synthesis, with an emphasis on conditional generation of photorealistic humans and garments. In addition to her PhD, she also collaborates with the Google virtual try-on team as a student researcher.
4:31-4:43: UltraZoom: Generating Gigapixel Images from Regular Photos, Jingwei Ma
Abstract: UltraZoom generates gigapixel-resolution images of everyday objects from casual phone captures. Given a full view and a few close-ups, it reconstructs fine detail across the object. The system combines scale registration, degradation alignment, and generative modeling to produce consistent gigapixel images, enabling continuous zoom from full object to close-up detail.
Biography: Jingwei Ma is a fifth-year Ph.D. student at the University of Washington’s Graphics and Imaging Laboratory (GRAIL), advised by Professors Ira Kemelmacher-Shlizerman, Steve Seitz, and Brian Curless. Her research focuses on generative models for images and videos, and 3D reconstruction.
4:43-4:55: TBD
Transparent and Equally Beneficial AI (Gates Center, Zillow Commons)
3:50-3:55: Introduction and Overview, Yulia Tsvetkov
3:55-4:15: Human-Agent Collaboration: a Grand Challenge in the Agentic AI Era, Kevin Feng
Abstract: AI agents that can reason, plan, and take actions over extended time horizons have gained considerable excitement as a paradigm that can usher in a new generation of useful AI applications. However, agentic systems’ real-world success depends not only on their inherent capabilities, but also their ability to effectively collaborate with humans. In this talk, I first offer a 5-level framework for thinking about autonomy in AI agents, characterized by the roles a user can take when interacting with an agent. I argue that, rather than designing agents to operate at a specific level, the community should enable agents to move fluidly and appropriately across levels. I then introduce agent notebooks as an interaction technique for doing so. I share Cocoa, an agentic system for scientific researchers that implements agent notebooks, which improves agent steerability over chat-based baselines. I conclude with promising directions for future research in human-agent collaboration.
Biography: Kevin Feng is a 5th- and final-year PhD student at UW, co-advised by Amy X. Zhang (CSE) and David W. McDonald (HCDE). His research contributes interactive systems and techniques to translate capable models into useful AI-powered applications in the real world. His work is supported by an OpenAI Democratic Inputs to AI grant (2023) and a UW Herbold Fellowship (2022). He has interned at the Allen Institute for AI (Ai2), Microsoft Research, Microsoft Azure Machine Learning, and was a summer fellow at the Centre for the Governance of AI (GovAI).
4:15-4:35: TBD
Evening
Time: 5:00 – 7:00pm
Event: Open House: Reception + Poster Session
Location: Microsoft Atrium in the Allen Center
Time: 7:15 – 7:30pm
Event: Presentation of the Madrona Prize and People’s Choice Prize
Location: Microsoft Atrium in the Allen Center