Title: Model-Agnostic Explanations for Machine Learning Models

Advisor: Carlos Guestrin

Supervisory Committee: Carlos Guestrin (Chair), Jevin West (GSR, iSchool), Sameer Singh (UC Irvine), Dan Weld, and Jeff Heer

Abstract: Despite many successes, complex machine learning systems are limited in their impact by several issues in how they communicate with humans: they are functionally black boxes, hard to debug, and hard to evaluate properly.

This communication is crucial: humans are the ones who train, deploy, and use machine learning models, and thus must decide how much to trust them and how to evaluate them. Furthermore, it is humans who try to improve these models, and an understanding of model behavior is invaluable for this purpose.
This dissertation addresses this communication problem by presenting model-agnostic explanations and evaluation, which improve the interaction between humans and any machine learning model.
 
Specifically, we present: (1) Local Interpretable Model-agnostic Explanations (LIME), an explanation technique that can explain any black-box model by approximating it locally with a linear model (sketched below); (2) Anchors, model-agnostic explanations that represent sufficient conditions for individual predictions; (3) Semantically Equivalent Adversaries (SEAs) and Semantically Equivalent Adversarial Rules (SEARs), semantics-preserving perturbations and rewrite rules that unearth brittleness bugs in text models; and (4) Implication Consistency, a new kind of evaluation metric that considers the relationship between model outputs in order to measure higher-level thinking.
We demonstrate that these contributions enable efficient communication between machine learning models and humans, empowering humans to better evaluate, improve, and assess trust in models.
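To make item (1) concrete, the following is a minimal, hypothetical sketch of the local linear approximation idea behind LIME for tabular data: sample perturbations around an instance, query the black-box model, weight the samples by proximity to the instance, and fit a weighted linear surrogate. Function and parameter names (e.g., explain_locally, kernel_width) and the Gaussian perturbation scheme are illustrative assumptions, not the dissertation's actual implementation or API.

    # A minimal sketch (not the dissertation's implementation) of LIME's core idea:
    # approximate the black-box model locally with a weighted linear surrogate.
    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_locally(predict_proba, x, num_samples=5000, kernel_width=0.75, seed=0):
        """Return per-feature weights of a local linear approximation around instance x."""
        rng = np.random.default_rng(seed)
        # Perturb the instance with Gaussian noise (assumes standardized numeric features).
        samples = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
        # Query the black-box model for the probability of the class of interest.
        preds = predict_proba(samples)[:, 1]
        # Weight each sample by an exponential kernel on its distance to the original instance.
        distances = np.linalg.norm(samples - x, axis=1)
        weights = np.exp(-(distances ** 2) / kernel_width ** 2)
        # Fit an interpretable linear surrogate to the weighted neighborhood.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(samples, preds, sample_weight=weights)
        return surrogate.coef_  # each coefficient is a feature's local contribution

The surrogate's coefficients serve as the explanation: they describe which features push the prediction up or down in the neighborhood of x, without ever inspecting the underlying model's internals, which is what makes the approach model-agnostic.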
 
Place: CSE 403
When: Wednesday, September 5, 2018 - 13:00