Shengwei An, Sheng-Yen Chou, et al.
AAAI 2024
Knowledge distillation is a popular approach to compressing high-performance neural networks for use in resource-constrained environments. However, the threat of adversarial machine learning raises a question: Is it possible to compress adversarially robust networks while achieving adversarial robustness similar to or better than that of the original network? In this paper, we explore this question with respect to certifiable robustness defenses, in which the defense establishes a formal robustness guarantee irrespective of the adversarial attack methodology. We present preliminary findings answering two main questions: 1) Is traditional knowledge distillation sufficient to compress certifiably robust neural networks? and 2) What aspects of the transfer process can we modify to improve compression effectiveness? Our work represents the first study of the interaction between machine learning model compression and certifiable robustness.
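For context on the entry above: knowledge distillation trains a compact student network to match a larger teacher's softened output distribution. A minimal sketch of the standard distillation loss in plain Python/NumPy (the temperature value and example logits are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)  # teacher (target) distribution
    q = softmax(student_logits, T)  # student distribution
    return T**2 * float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits yield (near-)zero loss; mismatched logits yield a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1]))
```

In practice this term is combined with the usual cross-entropy on ground-truth labels; the paper additionally asks how such a transfer interacts with certified robustness guarantees.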
Bo Zhao, Nima Dehmamy, et al.
NeurIPS 2022
Ben Huh, Avinash Baidya
NeurIPS 2022
Hongyu Tu, Shantam Shorewala, et al.
NeurIPS 2022