Lizzie Kumar, 2019-2020 Utah Noel de Nevers Memorial Scholar, recently presented two new papers on explainable artificial intelligence at the virtual 37th International Conference on Machine Learning (ICML).
Lizzie’s research builds on explanation methods developed to help humans interpret data-driven AI algorithms. Researchers are beginning to rely on these explanations when applying machine intelligence to prediction tasks like anticipating complications during surgery (https://www.nature.com/articles/s41551-018-0304-0), assessing risk in finance and insurance (https://arxiv.org/abs/1909.06342), and even screening for COVID-19 (https://www.medrxiv.org/content/10.1101/2020.05.07.20093948v2).
In her team’s ICML paper (https://iekumar.com/shapley-value-problems/), as well as related work (https://iekumar.com/shapley-residuals/) presented at the concurrent Workshop on Human Interpretability (WHI), Lizzie and her coauthors describe and quantify the kind of information that explanation methods capture—and don’t capture—about AI predictions. This additional step of analysis may help AI researchers and users more effectively apply explanations to understand the structure of predictive algorithms as they develop them.
Lizzie additionally served on a live panel at WHI, where she discussed the future of interpretability research with experts from Harvard, Stanford, and the University of Massachusetts.