Chanakya Ekbote, Moksh Jain, et al.
NeurIPS 2022
In this work, we study model-heterogeneous Federated Learning (FL) for classification, where different clients have different model architectures. Unlike existing works on model heterogeneity, we neither require access to a public dataset nor impose constraints on clients' model architectures, and we keep both the clients' models and data private. We prove a generalization result that provides fundamental insights into the role of representations in FL, and propose a theoretically grounded algorithm, Federated Conditional Moment Alignment (FedCMA), that aligns the class-conditional distributions of each client in the feature space.
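The abstract above describes aligning class-conditional feature distributions across clients. As a minimal illustrative sketch (not the paper's actual FedCMA algorithm), the snippet below computes per-class feature moments on a client and penalizes their distance to server-aggregated reference means; all function names and the simple squared-distance penalty are assumptions for illustration.

```python
import numpy as np

def class_conditional_moments(features, labels, num_classes):
    """Per-class mean and variance of feature vectors (illustrative).

    Assumes every class appears at least once in `labels`."""
    means, variances = [], []
    for c in range(num_classes):
        fc = features[labels == c]
        means.append(fc.mean(axis=0))
        variances.append(fc.var(axis=0))
    return np.stack(means), np.stack(variances)

def moment_alignment_penalty(local_feats, local_labels, global_means, num_classes):
    """Squared distance between a client's class-conditional feature means and
    shared reference means -- a hypothetical stand-in for a moment-alignment
    regularizer, not the paper's exact objective."""
    local_means, _ = class_conditional_moments(local_feats, local_labels, num_classes)
    return float(((local_means - global_means) ** 2).sum())
```

When a client's class-conditional means already match the reference, the penalty is zero; otherwise it grows with the squared gap, nudging heterogeneous models toward a shared feature geometry.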
Shiqiang Wang, Nathalie Baracaldo Angel, et al.
NeurIPS 2022
Zhuqing Liu, Xin Zhang, et al.
ICML 2023
Amit Alfassy, Assaf Arbelle, et al.
NeurIPS 2022