Amit Anil Nanavati, Nitendra Rajput, et al.
MobileHCI 2011
Recent years have seen a surge of interest in the field of explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies: researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision-support. Algorithmic research of XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners to make technical choices. We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts. Towards this goal, we introduce a perspective of contextualized XAI evaluation by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conduct two survey studies, one with XAI topical experts and another with crowd workers. Our results urge for responsible AI research with usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI in different usage contexts.
Amol Thakkar, Andrea Antonia Byekwaso, et al.
ACS Fall 2022
Dimitrios Christofidellis, Giorgio Giannone, et al.
MRS Spring Meeting 2023
Carla F. Griggio, Mayra D. Barrera Machuca, et al.
CSCW 2024