Adversarial Attacks on Fairness of Graph Neural Networks
Binchi Zhang, Yushun Dong, et al.
ICLR 2024
We propose a new approach for propagating stable probability distributions through neural networks. Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity. This allows propagating Gaussian and Cauchy input uncertainties through neural networks to quantify their output uncertainties. To demonstrate the utility of propagating distributions, we apply the proposed method to predicting calibrated confidence intervals and selective prediction on out-of-distribution data. The results demonstrate a broad applicability of propagating distributions and show the advantages of our method over other approaches such as moment matching.
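The abstract above describes propagating Gaussian input uncertainties through a network by linearizing the ReLU non-linearity locally. A minimal NumPy sketch of that idea, assuming the standard moment-update rules for affine maps and a tangent-at-the-mean linearization of ReLU (function names and details are illustrative, not the authors' implementation):

```python
import numpy as np

def linear_propagate(mu, Sigma, W, b):
    """Exact propagation of a Gaussian N(mu, Sigma) through an affine layer Wx + b."""
    return W @ mu + b, W @ Sigma @ W.T

def relu_propagate(mu, Sigma):
    """Local linearization of ReLU at the input mean:
    f(x) ~ relu(mu) + J (x - mu), with J = diag(1[mu > 0]),
    so the mean is relu(mu) and the covariance is J Sigma J."""
    j = (mu > 0).astype(mu.dtype)            # ReLU Jacobian diagonal at mu
    return np.maximum(mu, 0.0), (j[:, None] * j[None, :]) * Sigma

# Usage: push a 2-D Gaussian through one (hypothetical) linear + ReLU layer.
rng = np.random.default_rng(0)
mu = np.array([1.0, -0.5])
Sigma = np.array([[0.10, 0.02],
                  [0.02, 0.20]])
W = rng.standard_normal((3, 2))
b = np.zeros(3)

mu, Sigma = linear_propagate(mu, Sigma, W, b)
mu, Sigma = relu_propagate(mu, Sigma)
```

Repeating these two updates layer by layer yields output means and covariances, from which calibrated confidence intervals can be read off; the same linearization applies to Cauchy inputs since affine maps preserve stable families.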
Natalia Martinez Gil, Kanthi Sarpatwar, et al.
NeurIPS 2023
Yuan Gong, Hongyin Luo, et al.
ICLR 2024
Michael Hersche, Francesco Di Stefano, et al.
NeurIPS 2023