Debarun Bhattacharjya, Karthikeyan Shanmugam, et al.
NeurIPS 2020
We consider a covariate shift problem where one has access to several different training datasets for the same learning problem and a small validation set that possibly differs from all the individual training distributions. The distribution shift is due, in part, to unobserved features in the datasets. The objective is to find the best mixture distribution over the training datasets (with only observed features) such that training a learning algorithm on this mixture yields the best validation performance. For this task, our proposed algorithm, Mix&Match, combines stochastic gradient descent (SGD) with optimistic tree search and model re-use over the space of mixtures, evolving partially trained models with samples drawn from different mixture distributions. We prove a novel high-probability bound on the final SGD iterate without relying on a global gradient-norm bound, and use it to show the advantages of model re-use. Additionally, we provide simple-regret guarantees for our algorithm with respect to recovering the optimal mixture, given a total budget of SGD evaluations. Finally, we validate our algorithm on two real-world datasets.
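To make the mixture-training idea concrete, here is a minimal sketch of the core loop: each SGD step draws a sample from one of the training datasets according to mixture weights, and candidate mixtures are compared by validation loss. This is not the authors' implementation; the learner (logistic regression), the synthetic data, and the names sgd_on_mixture, val_loss, and make_data are all illustrative, and the paper's optimistic tree search over mixtures is replaced by a coarse grid scan. Warm-starting from a partially trained weight vector w mimics the model re-use idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_on_mixture(datasets, alpha, n_steps=2000, lr=0.1, w=None):
    """Train a logistic-regression model with SGD, drawing each sample
    from one of the training datasets with mixture probabilities alpha.
    Passing a partially trained w warm-starts training (model re-use)."""
    d = datasets[0][0].shape[1]
    if w is None:
        w = np.zeros(d)
    for _ in range(n_steps):
        k = rng.choice(len(datasets), p=alpha)   # pick a training source
        X, y = datasets[k]
        i = rng.integers(len(y))                 # pick a sample from it
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w -= lr * (p - y[i]) * X[i]              # stochastic gradient step
    return w

def val_loss(w, X_val, y_val):
    """Logistic (cross-entropy) loss on the validation set."""
    p = 1.0 / (1.0 + np.exp(-X_val @ w))
    eps = 1e-12
    return -np.mean(y_val * np.log(p + eps) + (1 - y_val) * np.log(1 - p + eps))

def make_data(shift, n=1000):
    """Synthetic source whose feature distribution (and boundary) shifts."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(float)
    return X, y

# Two shifted training sources; the validation set sits in between.
datasets = [make_data(0.0), make_data(2.0)]
X_val, y_val = make_data(1.0, n=500)

# Coarse grid over mixture weights in place of the optimistic tree search.
for a in np.linspace(0.0, 1.0, 5):
    w = sgd_on_mixture(datasets, alpha=[a, 1.0 - a])
    print(f"alpha=({a:.2f}, {1.0 - a:.2f})  val loss = {val_loss(w, X_val, y_val):.3f}")
```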
Wang Zhou, Levente Klein
NeurIPS 2020
Etienne Eben Vos, Ashley Daniel Gritzman, et al.
NeurIPS 2020
Georgios Damaskinos, Celestine Mendler-Dünner, et al.
NeurIPS 2020