
Artificial Intelligence

Allen School researchers are at the forefront of exciting developments in AI spanning machine learning, computer vision, natural language processing, robotics and more.

We cultivate a deeper understanding of the science and potential impact of rapidly evolving technologies, such as large language models and generative AI, while developing practical tools for their ethical and responsible application in a variety of domains, from biomedical research and disaster response to autonomous vehicles and urban planning.


Faculty Members



Highlights


UW News

In a paper published in the journal Nature, a team of Allen School and Ai2 researchers unveiled OpenScholar, a system that can cite scientific papers as accurately as human experts and incorporate new research after it has been trained.

Allen School News

The Institute of Electrical and Electronics Engineers (IEEE) recognized Kemelmacher-Shlizerman for her “contributions to face, body, and clothing modeling from large image collections,” including pioneering virtual try-on tools and bringing the technology to the mainstream.

Allen School News

A team of Allen School and Ai2 researchers was recognized for developing an efficient, scalable system for indexing petabyte-scale text corpora with minimal storage overhead to better understand the data on which large language models are trained.