Kaizhi Qian, Yang Zhang, et al.
ICML 2020
Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or that interpretability itself is susceptible to adversarial attacks. In this paper, we theoretically show that with a proper measurement of interpretation, it is actually difficult to prevent prediction-evasion adversarial attacks from causing interpretation discrepancy, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop an interpretability-aware defensive scheme built only on promoting robust interpretation (without resorting to adversarial loss minimization). We show that our defense achieves both robust classification and robust interpretation, outperforming state-of-the-art adversarial training methods against attacks with large perturbations in particular.
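To make the defensive scheme described above concrete, below is a minimal, hypothetical PyTorch sketch of an interpretability-aware regularizer in the spirit of the abstract: it promotes robust interpretation by penalizing the discrepancy between interpretation maps of a clean input and a randomly perturbed copy, with no adversarial loss minimization. The use of input-gradient saliency as the interpretation map, the random l_inf perturbation, and the function names, eps, and lam values are all illustrative assumptions, not the paper's actual formulation.

import torch
import torch.nn.functional as F

def interpretation_map(model, x, y):
    # Input-gradient saliency of the true-class logit w.r.t. the input
    # (one common choice of interpretation map; an assumption here).
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def interpretability_aware_loss(model, x, y, eps=0.1, lam=1.0):
    # Standard cross-entropy plus an interpretation-discrepancy penalty between
    # the clean input and a randomly perturbed copy (no adversarial loss used).
    ce = F.cross_entropy(model(x), y)
    delta = eps * torch.sign(torch.randn_like(x))  # random l_inf perturbation (illustrative)
    map_clean = interpretation_map(model, x, y)
    map_pert = interpretation_map(model, (x + delta).clamp(0.0, 1.0), y)
    discrepancy = (map_clean - map_pert).abs().mean()
    return ce + lam * discrepancy

In a training loop one would compute loss = interpretability_aware_loss(model, x, y) and backpropagate as usual; create_graph=True is what lets the discrepancy term influence the model parameters.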
Binchi Zhang, Yushun Dong, et al.
ICLR 2024
Gururaj Saileshwar, Prashant J. Nair, et al.
HPCA 2018
Kristjan Greenewald, Yuancheng Yu, et al.
NeurIPS 2024