Allen School Alumni

Alumni News

The Allen School’s Alumni News features stories and updates that celebrate the achievements and experiences of our alumni.

Mahajan (Ph.D., ‘05) was recognized for his work on Batfish, an open source network configuration analysis tool that helps find errors and prevent costly outages that could disrupt air travel, banking, communications and more.

Asai (Ph.D., ‘25), research scientist at Ai2 and incoming faculty at Carnegie Mellon University, was recognized for her pioneering research that has helped establish the foundations for retrieval-augmented generation (RAG) and showcase its effectiveness at reducing LLM hallucinations.

Allen School researchers earned multiple awards at the 63rd Annual Meeting of the Association for Computational Linguistics for laying the foundation for how AI systems understand and follow human instructions, exploring how LLMs pull responses from their training data, and more.

To help Reddit moderators make data-driven decisions on what rules are best for their community, a team of researchers in the Allen School’s Behavioral Data Science Group and Social Futures Lab conducted the largest-to-date analysis of over 67,000 Reddit rules and their evolution.

There has been a lot of chatter about new computer science graduates and employment in the AI era. Allen School professors Magdalena Balazinska and Dan Grossman examine the myths and realities surrounding AI and the prospects for current and future Allen School majors.

In 2011, a team of researchers that included Allen School professor and alum Franziska Roesner published a paper detailing how they could remotely take control of a car. Their work, which inspired new motor vehicle security standards, received the USENIX Test of Time Award.

Aatish Parson (B.S., ‘25) has hit a few bumps in the road since graduation—all the better for building the CivicScan app that uses artificial intelligence to detect potholes and other issues to smooth the way for improved road maintenance.

In a recent paper, a team of researchers led by professor Matt Golub designed a new machine learning technique to understand how different parts of the brain talk to each other even when some parts can’t be directly observed.

The Association for Computational Linguistics honored Min (Ph.D., ‘24) for her dissertation “Rethinking Data Use in Large Language Models,” which expanded the natural language processing community’s understanding of the behavior and capabilities of LLMs.

Ye, who graduated in June with degrees in computer science and philosophy, was recognized by the College of Arts & Sciences for his campus leadership and interdisciplinary research contributions spanning language models, computer vision, human-AI interaction and more.