Ehud Altman, Kenneth R. Brown, et al.
PRX Quantum
Reinforcement learning algorithms solve sequential decision-making problems in probabilistic environments by optimizing for long-term reward. The desire to use reinforcement learning in safety-critical settings inspires a recent line of work on formally constrained reinforcement learning; however, these methods place the implementation of the learning algorithm in their Trusted Computing Base. The crucial correctness property of these implementations is a guarantee that the learning algorithm converges to an optimal policy. This paper begins the work of closing this gap by developing a Coq formalization of two canonical reinforcement learning algorithms: value and policy iteration for finite-state Markov decision processes. The central results are a formalization of the Bellman optimality principle and its proof, which uses a contraction property of the Bellman optimality operator to establish that a sequence converges in the infinite horizon limit. The CertRL development exemplifies how the Giry monad and mechanized metric coinduction streamline optimality proofs for reinforcement learning algorithms. The CertRL library provides a general framework for proving properties about Markov decision processes and reinforcement learning algorithms, paving the way for further work on the formalization of reinforcement learning algorithms.
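As an informal companion to the abstract's description of value iteration and the Bellman optimality operator, the following is a minimal sketch in Python (not part of the CertRL Coq development; the toy MDP, function name, and parameter names are hypothetical). It iterates the Bellman optimality operator, which is a gamma-contraction in the sup norm, so the iterates converge to the unique optimal value function in the infinite horizon limit.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iters=10_000):
    """Iterate the Bellman optimality operator T on a finite-state MDP.

    P[a] is the |S| x |S| transition matrix for action a,
    R[a] is the |S| vector of expected rewards for action a.
    Because T is a gamma-contraction in the sup norm, the iterates
    V_k converge geometrically to the unique fixed point V*.
    """
    n_states = R[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iters):
        # (T V)(s) = max_a [ R(s, a) + gamma * sum_{s'} P(s'|s, a) V(s') ]
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_next = Q.max(axis=0)
        # Standard stopping rule guaranteeing a near-optimal value estimate.
        if np.max(np.abs(V_next - V)) < tol * (1 - gamma) / (2 * gamma):
            return V_next
        V = V_next
    return V

# Toy two-state, two-action MDP with arbitrarily chosen numbers.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.6, 0.4]])]
R = [np.array([1.0, 0.0]), np.array([0.5, 2.0])]
print(value_iteration(P, R))
```

CertRL itself proves these convergence facts formally in Coq via metric coinduction; the snippet above only illustrates the underlying fixed-point computation numerically.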
R.B. Morris, Y. Tsuji, et al.
International Journal for Numerical Methods in Engineering
Imran Nasim, Michael E. Henderson
Mathematics
Jianke Yang, Robin Walters, et al.
ICML 2023