Jinghan Huang, Jiaqi Lou, et al.
ISCA 2024
Existing federated learning models that follow the standard risk minimization paradigm of machine learning often fail to generalize in the presence of spurious correlations in the training data. In many real-world distributed settings, spurious correlations arise from biases and data sampling issues on distributed devices or clients and can erroneously influence models. Current generalization approaches are designed for centralized training and attempt to identify features that have an invariant causal relationship with the target, thereby reducing the effect of spurious features. However, such invariant risk minimization approaches rely on a priori knowledge of training data distributions, which is hard to obtain in many applications. In this work, we present a generalizable federated learning framework called FedGen, which allows clients to identify and distinguish between spurious and invariant features in a collaborative manner without prior knowledge of training distributions. We evaluate our approach on real-world datasets from different domains and show that FedGen produces models that generalize significantly better and can outperform current federated learning approaches in accuracy by over 24%.
Ilias Iliadis
International Journal On Advances In Networks And Services
Jayaram Kr Kallapalayam Radhakrishnan, Vinod Muthusamy, et al.
Middleware 2019
Olivier Tardieu, Abhishek Malvankar
K8SAIHPCDAY 2023