Fan Zhang, Junwei Cao, et al.
IEEE TETC
Evaluating who an explanation serves well, and when, is sometimes done empirically using prediction tasks, but measuring a user's success at predicting an AI's behavior in traditional ways can produce misleading empirical results. Koujalgi et al. recently proposed measurements to solve this problem, but they have not yet been tried as a way of answering “who” and “when” questions. In this paper, we show how we used one of these measurements, “Loss in Value”, in an XAI study with 69 participants to learn who our explanations were serving well, who they were not serving well, and when. Our results showed that Loss in Value uncovered “who” and “when” differences that traditional measurements could not reveal.