Faculty

Computational linguistics (especially grammar engineering and NLP), syntax, and the study of variation.

Machine learning; applications in speech, language, bioinformatics, and music; submodularity and discrete optimization.

Postdocs

Qingqing Cao's research interests include efficient NLP, mobile computing, and machine learning systems. His current focus is building efficient and practical NLP systems for diverse platforms, from resource-constrained edge devices to cloud servers. Previously, he worked on projects such as faster Transformer models for question answering (ACL 2020) and accurate, interpretable energy estimation of NLP models (ACL 2021).

Tianxing He works with Yulia Tsvetkov on natural language generation. He works toward a better understanding of how current large language models behave. Relatedly, he is interested in monitoring and detecting language model behaviors across different scenarios, and in approaches to fixing undesirable behaviors.

Fatemehsadat Mireshghallah works with Allen School professors Yejin Choi and Yulia Tsvetkov. Her research interests are trustworthy machine learning and natural language processing. She is a recipient of the 2020 National Center for Women & IT (NCWIT) Collegiate Award, a finalist for the Qualcomm Innovation Fellowship in 2021, and a recipient of the 2022 Rising Star in Adversarial ML award. She received her Ph.D. from the CSE department at UC San Diego in 2023.

Michael Regan works with professor Yejin Choi, studying cognitive-semantic approaches to modeling causal structure in language. His research focuses on creating datasets for model evaluation across a variety of tasks (e.g., schema induction, causal reasoning); on problems in event, narrative, and dialogue understanding; and on functional approaches to the study of language, which hold that physical models of the world shape both human cognition (e.g., intuitive physics) and linguistic constructions, i.e., form-meaning pairs whose meaning is assumed to be causal. He holds a Ph.D. from the University of New Mexico.

Sean Welleck is a postdoctoral scholar at UW, working with Yejin Choi. His research interests include deep learning and structured prediction, with applications in natural language processing. He completed his Ph.D. at New York University, advised by Kyunghyun Cho and Zheng Zhang, and obtained B.S.E. and M.S.E. degrees from the University of Pennsylvania. His research has been published at ICML, NeurIPS, ICLR, and ACL, and has received two NVIDIA AI Labs Pioneering Research Awards.

Tao Yu works with professors Noah Smith and Mari Ostendorf of the UW Natural Language Processing group. His research focuses on designing and building conversational natural language interfaces (NLIs) that help humans explore and reason over data in any application (e.g., relational databases and mobile apps) in a robust and trustworthy manner. He completed his Ph.D. at Yale University.

Collaborators