Shiqiang Wang, Nathalie Baracaldo Angel, et al.
NeurIPS 2022
Mathematical decision-optimization (DO) models provide decision support in a wide range of scenarios. Often, hard-to-model constraints and objectives are learned from data. Learning, however, can give rise to DO models that fail to capture the real system, leading to poor recommendations. We introduce an open-source framework designed for large-scale testing and solution quality analysis of DO model learning algorithms. Our framework produces multiple optimization problems at random, feeds them to the user's algorithm and collects its predicted optima. By comparing predictions against the ground truth, our framework delivers a comprehensive prediction profile of the algorithm. Thus, it provides a playground for researchers and data scientists to develop, test, and tune their DO model learning algorithms. Our contributions include: (1) an open-source testing framework implementation, (2) a novel way to generate DO ground truth, and (3) a first-of-its-kind, generic, cloud-distributed Ray and Rayvens architecture. We demonstrate the use of our testing framework on two open-source DO model learning algorithms.
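The testing loop the abstract describes can be pictured as a simulate-predict-compare cycle: generate an optimization problem with a known optimum, hand noisy samples of it to the learning algorithm under test, and score the predicted optimum against the ground truth. The sketch below is only an illustration of that idea under assumed names; sample_problem, naive_learner, and the 1-D quadratic toy objective are hypothetical stand-ins, not the framework's actual API, and the real system distributes such runs over Ray and Rayvens.

    # Illustrative simulate-predict-compare cycle (hypothetical names, toy objective).
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_problem():
        """Draw a random 1-D quadratic minimization problem with a known optimum."""
        a = rng.uniform(0.5, 2.0)       # curvature
        x_star = rng.uniform(-5, 5)     # ground-truth optimum
        objective = lambda x: a * (x - x_star) ** 2
        xs = rng.uniform(-10, 10, size=200)
        ys = objective(xs) + rng.normal(0, 0.1, size=200)   # noisy observations
        return np.column_stack([xs, ys]), objective, x_star

    def naive_learner(samples):
        """Stand-in 'DO model learning algorithm': return the sampled point with the lowest observed value."""
        return samples[np.argmin(samples[:, 1]), 0]

    # Prediction profile: optimality gap of the learner's predicted optimum,
    # aggregated over many randomly generated problems.
    errors = []
    for _ in range(100):
        samples, objective, x_star = sample_problem()
        x_pred = naive_learner(samples)
        errors.append(abs(objective(x_pred) - objective(x_star)))

    print(f"mean optimality gap over 100 problems: {np.mean(errors):.4f}")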
Yuan Ma, Scott C Smith, et al.
CLOUD 2024
Amit Fisher, Fabiana Fournier, et al.
SOLI 2007
Segev Wasserkrug, Avigdor Gal, et al.
IEEE TKDE