Akhilan Boopathy, Sijia Liu, et al.
ICML 2020
Quantum classifiers have recently been shown to be vulnerable to adversarial attacks, in which imperceptible perturbations cause misclassification. In this paper, we present one of the first theoretical studies showing that adding quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks. We draw a connection to differential privacy and demonstrate that a quantum classifier trained in the natural presence of additive noise is differentially private. Finally, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples, supported by experimental results.
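The noise-to-certificate argument summarized above follows the same pattern as classical randomized smoothing: average the classifier over injected noise, then convert the vote margin into a certified radius. A minimal classical-analogue sketch (Gaussian noise and a toy linear base classifier are illustrative assumptions, not the paper's quantum rotation channel):

```python
# Classical-analogue sketch of noise-based certified robustness (randomized
# smoothing). The toy classifier and all parameter values are assumptions
# for illustration; the paper works with quantum rotation noise instead.
import numpy as np
from statistics import NormalDist

def smoothed_predict(f, x, sigma=0.5, n=1000, rng=None):
    """Majority vote of f over n Gaussian-perturbed copies of x,
    plus a certified L2 radius within which the vote cannot flip."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n, x.size))
    votes = np.bincount([f(x + eps) for eps in noise], minlength=2)
    top = int(votes.argmax())
    p_top = votes[top] / n
    if p_top <= 0.5:
        return top, 0.0  # no certificate without a majority
    # Clamp to keep the inverse normal CDF finite when the vote is unanimous.
    p = min(p_top, 1.0 - 1e-6)
    radius = sigma * NormalDist().inv_cdf(p)
    return top, radius

# Toy base classifier: sign of the first coordinate.
f = lambda z: int(z[0] > 0)
label, radius = smoothed_predict(f, np.array([1.5, -0.3]))
```

Here the input sits well inside class 1, so the smoothed vote is a strong majority and the certificate radius is positive; points near the decision boundary get little or no certified radius.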
Binchi Zhang, Yushun Dong, et al.
ICLR 2024
Gururaj Saileshwar, Prashant J. Nair, et al.
HPCA 2018
Kristjan Greenewald, Yuancheng Yu, et al.
NeurIPS 2024