Gosia Lazuka, Andreea Simona Anghel, et al.
SC 2024
Traditional machine learning models focus on achieving good performance on the overall training distribution, but they often underperform on minority groups. Existing methods can improve worst-group performance, yet they suffer from several limitations: (i) they require group annotations, which are often expensive and sometimes infeasible to obtain, and/or (ii) they are sensitive to outliers. Most related works fail to address these two issues simultaneously, as they treat minority groups and outliers from conflicting perspectives. We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters. We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods such as DBSCAN. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches in terms of downstream worst-group performance.
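The abstract describes clustering per-sample gradients of the model parameters to recover group labels while flagging outliers. Below is a minimal sketch of that idea, assuming a trained PyTorch classifier; the gradient-collection helper, the PCA dimensionality, and the DBSCAN parameters are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of gradient-space clustering for group inference, assuming a
# trained PyTorch classifier. Hyperparameters below are illustrative guesses,
# not the paper's settings.
import torch
import torch.nn.functional as F
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

def per_sample_gradients(model, xs, ys):
    """Return one flattened parameter-gradient vector per training sample."""
    grads = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        g = torch.cat([p.grad.flatten() for p in model.parameters()
                       if p.grad is not None])
        grads.append(g.detach().cpu())
    return torch.stack(grads).numpy()

# Hypothetical usage, given a trained `model` and tensors `xs`, `ys`:
#   G = per_sample_gradients(model, xs, ys)
#   G_low = PCA(n_components=50).fit_transform(G)   # tame dimensionality
#   labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(G_low)
# DBSCAN marks noise points with -1 (candidate outliers); the remaining
# cluster ids serve as inferred group annotations for downstream
# group-robust training.
```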
Natalia Martinez Gil, Kanthi Sarpatwar, et al.
NeurIPS 2023
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024
Kristjan Greenewald, Yuancheng Yu, et al.
NeurIPS 2024