Capri: Achieving Predictable Performance in Cloud Spot Markets
Bogdan Ghit, Asser Tantawi
MASCOTS 2021
Multi-agent reinforcement learning (MARL) has primarily focused on solving a single task in isolation, while in practice the environment often evolves, leaving many related tasks to be solved. In this paper, we investigate the benefits of meta-learning in solving multiple MARL tasks collectively. We establish the first line of theoretical results for meta-learning in a wide range of fundamental MARL settings, including learning Nash equilibria in two-player zero-sum Markov games and Markov potential games, as well as learning coarse correlated equilibria in general-sum Markov games. Under natural notions of task similarity, we show that meta-learning achieves provably sharper convergence to various game-theoretic solution concepts than learning each task separately. As an important building block, we develop multiple MARL algorithms with initialization-dependent convergence guarantees. These algorithms integrate optimistic policy mirror descent with stage-based value updates, and their refined convergence guarantees (nearly) match the best existing results even when a good initialization is unknown. To the best of our knowledge, such results are also new and might be of independent interest. We further provide numerical simulations to corroborate our theoretical findings.
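The abstract's central building block, optimistic policy mirror descent with an initialization-dependent guarantee, can be illustrated in its simplest special case: optimistic multiplicative weights (hedge) on a two-player zero-sum matrix game, where a warm start near the equilibrium (standing in for a meta-learned initialization) shrinks the duality gap faster than a cold start. The sketch below is not the paper's algorithm, which operates on Markov games with stage-based value updates; the payoff matrix, step size, horizon, and warm-start strategies are illustrative assumptions.

```python
import numpy as np

def optimistic_hedge(A, T=200, eta=0.1, x0=None, y0=None):
    """Optimistic multiplicative weights (policy mirror descent with a KL
    regularizer) on a zero-sum matrix game: the row player minimizes
    x^T A y, the column player maximizes it. x0/y0 are optional warm
    starts, standing in for a meta-learned initialization. Returns the
    duality gap of the averaged iterates (0 iff a Nash equilibrium)."""
    m, n = A.shape
    x = np.full(m, 1.0 / m) if x0 is None else np.asarray(x0, float)
    y = np.full(n, 1.0 / n) if y0 is None else np.asarray(y0, float)
    gx_prev, gy_prev = np.zeros(m), np.zeros(n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        gx, gy = A @ y, A.T @ x  # current loss (row) / gain (column) vectors
        # Optimism: step on 2*g_t - g_{t-1}, a prediction of the next gradient.
        x = x * np.exp(-eta * (2 * gx - gx_prev))
        x /= x.sum()
        y = y * np.exp(eta * (2 * gy - gy_prev))
        y /= y.sum()
        gx_prev, gy_prev = gx, gy
        x_avg += x
        y_avg += y
    x_avg /= T
    y_avg /= T
    return float(np.max(A.T @ x_avg) - np.min(A @ y_avg))

# Toy game whose Nash equilibrium is x* = y* = (0.4, 0.6), away from uniform.
A = np.array([[2.0, -1.0], [-1.0, 1.0]])
# A warm start closer to the equilibrium should reach a smaller gap in the
# same number of rounds than the cold (uniform) start.
print("cold start gap:", optimistic_hedge(A, T=50))
print("warm start gap:", optimistic_hedge(A, T=50, x0=[0.45, 0.55], y0=[0.45, 0.55]))
```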