Penny Chong, Laura Wynter, et al.
ICDM 2023
We consider a new family of stochastic operators for reinforcement learning designed to be robust to approximation and estimation errors. We establish theoretical results showing that the family preserves optimality and increases the action gap in a stochastic sense. Empirical results illustrate the strong benefits of the robust stochastic operators, which significantly outperform the classical Bellman operator and other recently proposed operators.
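The abstract does not give the operators' explicit form. As one hedged illustration only, an action-gap-increasing variant of tabular Q-learning might subtract a random multiple of the action gap from the Bellman target; the function name, the uniform sampling of the weight, and the exact update rule below are assumptions for the sketch, not the paper's definition.

```python
import numpy as np

def robust_stochastic_update(Q, s, a, r, s_next, gamma=0.99, alpha=0.1, rng=None):
    # Hypothetical sketch: modify the Bellman target by a random multiple
    # beta of the action gap V(s) - Q(s, a), then take a standard TD step.
    rng = rng or np.random.default_rng()
    target = r + gamma * np.max(Q[s_next])   # classical Bellman target
    beta = rng.uniform(0.0, 1.0)             # random devaluation weight
    gap = np.max(Q[s]) - Q[s, a]             # action gap at (s, a)
    target -= beta * gap                     # stochastically widen the gap
    Q[s, a] += alpha * (target - Q[s, a])    # TD step toward modified target
    return Q
```

Because the subtracted term vanishes at the greedy action, such operators can leave the optimal policy intact while devaluing suboptimal actions, which is the sense in which the action gap grows.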
Peihao Wang, Rameswar Panda, et al.
ICML 2023
Kai Shen, Lingfei Wu, et al.
IJCAI 2020
Thanh Nguyen, Natasha Mulligan, et al.
AMIA Informatics Summit 2021