CSE2 313
Data Science, Data Mining, Social Network Analysis, Natural Language Processing
Adjunct, Linguistics

Computational linguistics (especially grammar engineering and NLP), syntax, and the study of variation.

Adjunct, Electrical & Computer Engineering

Machine learning, speech/language/bioinformatics/music, submodularity & discrete optimization

Adjunct, Information School
Artificial intelligence, AI ethics, algorithmic bias, computer vision, data science, implicit machine cognition, machine learning, natural language processing
CSE 578
Natural language processing
CSE 470
Natural Language Processing, Artificial Intelligence, Machine Learning
Development of new representations, models, and training paradigms for machine learning and computer vision, drawing on insights from human-computer interaction and the social and behavioral sciences
Natural language processing
CSE 566
Natural language processing
Adjunct, Information School
Health informatics, natural language processing, data science
Computational biology — learning in the open-world setting, biomedical natural language processing, network biology



Qingqing Cao's research interests include efficient NLP, mobile computing, and machine learning systems. His current focus is building efficient and practical NLP systems for diverse platforms, including resource-constrained edge devices and cloud servers. Previously, he worked on projects such as faster Transformer models for question answering (ACL 2020) and accurate, interpretable energy estimation of NLP models (ACL 2021).


Tianxing He works with Yulia Tsvetkov on natural language generation. He works toward a better understanding of how current large language models work. Relatedly, he is interested in monitoring and detecting language-model behaviors under different scenarios, and in approaches to fixing undesirable behaviors.

Michael Regan

Michael Regan works with Professor Yejin Choi, studying cognitive-semantic approaches to modeling causal structure in language. His research focuses on creating datasets for model evaluation across a variety of tasks (e.g., schema induction, causal reasoning); on problems in event, narrative, and dialogue understanding; and on functional approaches to language, which hold that physical models of the world shape both human cognition (e.g., intuitive physics) and linguistic constructions, form-meaning pairs whose meaning is assumed to be causal. He holds a Ph.D. from the University of New Mexico.

Tianxiao Shen

Wenya is a research fellow in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Noah Smith and Hanna Hajishirzi. Her research interests lie in deep learning, logical reasoning, and their applications in natural language processing, such as information extraction and natural language understanding.


Sean Welleck is a postdoctoral scholar at UW, working with Yejin Choi. His research interests include deep learning and structured prediction, with applications in natural language processing. He completed his Ph.D. at New York University, advised by Kyunghyun Cho and Zheng Zhang, and obtained B.S.E. and M.S.E. degrees from the University of Pennsylvania. His research has been published at ICML, NeurIPS, ICLR, and ACL, and has received two Nvidia AI Labs Pioneering Research Awards.


Tao Yu works with Professors Noah Smith and Mari Ostendorf in the UW Natural Language Processing group. His research focuses on designing and building conversational natural language interfaces (NLIs) that help humans explore and reason over data in any application (e.g., relational databases and mobile apps) in a robust and trusted manner. Tao completed his Ph.D. at Yale University.