Stability of Randomized Learning Algorithms
Andre Elisseeff, Theodoros Evgeniou, et al.
JMLR 2005
We extend the existing theory on stability, namely how much changes in the training data influence the estimated models, and on the generalization performance of deterministic learning algorithms to the case of randomized algorithms. We give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error, as well as between the leave-one-out and expected error, of such algorithms; these bounds depend on their random stability. The setup we develop for this purpose can also be used to study randomized learning algorithms more generally. We then use these general results to study the effect of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging, which were not possible to prove with the existing theory of stability for deterministic learning algorithms.
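For concreteness, the following LaTeX sketch states the uniform-stability definition and generalization bound from the deterministic theory (Bousquet and Elisseeff, 2002) that this abstract extends. The notation (A, S, m, ell, beta, M, delta) is ours, not the paper's, and the paper's randomized-case definitions replace these losses with expectations over the algorithm's internal randomness; treat this as background, not the paper's own statements.

% Sketch: uniform stability and the resulting generalization bound, in the
% deterministic form the abstract builds on (Bousquet & Elisseeff, 2002).
% All symbols below are our own notation, assumed for illustration.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
An algorithm $A$ trained on $S = \{z_1, \dots, z_m\}$ has uniform stability
$\beta$ with respect to a loss $\ell$ bounded by $M$ if, for every $i$,
\[
  \sup_{z} \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr| \le \beta ,
\]
where $S^{\setminus i}$ is $S$ with the $i$-th training point removed.
Then, with probability at least $1 - \delta$ over the draw of $S$,
\[
  R(A_S) \;\le\; R_{\mathrm{emp}}(A_S) + 2\beta
  + \bigl(4 m \beta + M\bigr) \sqrt{\frac{\ln(1/\delta)}{2m}} .
\]
% For a randomized algorithm, the losses above are replaced by expectations
% over the algorithm's internal randomness, yielding the "random stability"
% on which the paper's bounds depend.
\end{document}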