Title: Learning to Compile Learning
Advisor: Zach Tatlock
Supervisory Committee: Zach Tatlock (Chair), Eric Klavins (GSR, ECE), Luis Ceze, and Carlos Guestrin
Frameworks for writing, compiling, and optimizing deep learning (DL) models have recently enabled progress in areas like computer vision and natural language processing. Extending these frameworks to accommodate the rapidly diversifying landscape of DL models and hardware platforms presents challenging tradeoffs between expressiveness, composability, and portability. I propose a design for an intelligent end-to-end deep learning compiler that can automatically adapt high-level programs to run on new hardware devices with human-level performance. I demonstrate that a shared compiler and IR are useful infrastructure for frameworks and tools, including tools built at the University of Washington (VTA) as well as at Amazon (MXNet), Facebook (PyTorch), and other companies.