Cost-Aware Counterfactuals for Black Box Explanations
Natalia Martinez Gil, Kanthi Sarpatwar, et al.
NeurIPS 2023
Feature attribution methods explain black-box machine learning (ML) models by assigning importance scores to input features. These methods can be computationally expensive for large ML models. To address this challenge, there have been increasing efforts to develop amortized explainers, where an ML model is trained to predict feature attribution scores with only one inference. Despite their efficiency, amortized explainers can produce inaccurate predictions and misleading explanations. In this paper, we propose selective explanations, a novel feature attribution method that (i) detects when amortized explainers generate low-quality explanations and (ii) improves these explanations using a technique called explanations with initial guess. Our selective explanations method allows practitioners to specify the fraction of samples that receive explanations with initial guess, offering a principled way to bridge the gap between amortized explainers (one inference) and their high-quality counterparts (multiple inferences).
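To make the recipe concrete, here is a minimal Python sketch of such a selective pipeline. It is not the authors' implementation: `amortized_explainer`, `quality_score`, and `refine` are hypothetical stand-ins for the amortized model, a detector of low-quality explanations, and a multi-inference estimator (e.g., Monte Carlo Shapley) warm-started at the amortized output.

```python
import numpy as np

def selective_explanations(
    x_batch,              # samples to explain, shape (n, d)
    amortized_explainer,  # one-inference explainer: x -> attribution vector
    quality_score,        # heuristic: lower score = lower-quality explanation
    refine,               # multi-inference refiner seeded with an initial guess
    fraction=0.2,         # fraction of samples routed to the costly refiner
):
    """Hypothetical sketch: explain everything cheaply, then re-explain
    the worst `fraction` of samples with a costlier method initialized
    at the cheap guess ("explanations with initial guess")."""
    guesses = np.stack([amortized_explainer(x) for x in x_batch])
    scores = np.array([quality_score(x, g) for x, g in zip(x_batch, guesses)])

    # Route the lowest-scoring fraction of samples to the refiner.
    n_refine = int(np.ceil(fraction * len(x_batch)))
    worst = np.argsort(scores)[:n_refine]

    explanations = guesses.copy()
    for i in worst:
        # Warm-start the expensive estimator at the amortized output.
        explanations[i] = refine(x_batch[i], init=guesses[i])
    return explanations

# Toy usage with stand-in components (all hypothetical):
rng = np.random.default_rng(0)
xs = rng.normal(size=(10, 5))
attrs = selective_explanations(
    xs,
    amortized_explainer=lambda x: 0.1 * x,          # cheap one-shot guess
    quality_score=lambda x, g: -np.abs(g).std(),    # stand-in quality proxy
    refine=lambda x, init: init + 0.01,             # stand-in costly refiner
    fraction=0.3,
)
```

The `fraction` knob mirrors the trade-off described above: at 0 the method reduces to the pure amortized explainer (one inference per sample), and at 1 every sample pays the full cost of the high-quality estimator.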