By Chakraborty, Debraj; Busatto-Gaston, Damien; Raskin, Jean-François; Perez, Guillermo A.
Reference: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, 2023-May, pages 1354-1362
Publication: Published, 2023-06-01
Peer-reviewed article
Abstract: We study how to efficiently combine formal methods, Monte Carlo Tree Search (MCTS), and deep learning in order to produce high-quality receding horizon policies in large Markov decision processes (MDPs). In particular, we use model-checking techniques to guide the MCTS algorithm in order to generate offline samples of high-quality decisions on a representative set of states of the MDP. Those samples can then be used to train a neural network that imitates the policy used to generate them. This neural network can either be used to guide a lower-latency online MCTS search, or alternatively be used as a full-fledged policy when minimal latency is required. We use statistical model checking to detect when additional samples are needed and to focus those additional samples on configurations where the learnt neural network policy differs from the (computationally expensive) offline policy. We illustrate the use of our method on MDPs that model the Frozen Lake and Pac-Man environments, two popular benchmarks used to evaluate reinforcement-learning algorithms.
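The loop described in the abstract (sample expensive offline decisions, train an imitator, then use a statistical check of disagreement to decide where to sample next) can be sketched in a few lines. The sketch below is only illustrative and makes strong simplifying assumptions: a tiny discrete state space with one-hot features, a linear softmax imitator instead of a deep network, and a stand-in function expensive_policy in place of the model-checking-guided MCTS; all names (expensive_policy, train_imitator, disagreement_rate) are hypothetical and not taken from the paper's code.

    # Illustrative sketch of the offline-sampling / imitation / retraining loop.
    # Assumptions: toy discrete MDP states, one-hot features, linear softmax imitator.
    import random
    import numpy as np

    N_ACTIONS = 4     # assumed action space size (e.g. Frozen Lake moves)
    STATE_DIM = 16    # assumed number of discrete states, one-hot encoded

    TRUE_WEIGHTS = np.random.default_rng(0).normal(size=(STATE_DIM, N_ACTIONS))

    def state_features(state):
        x = np.zeros(STATE_DIM)
        x[state] = 1.0
        return x

    def expensive_policy(state):
        # Stand-in for the model-checking-guided MCTS decision procedure:
        # returns the action chosen by an expensive offline search.
        return int(np.argmax(state_features(state) @ TRUE_WEIGHTS))

    def train_imitator(samples, lr=0.5, epochs=200):
        # Fit a linear softmax policy to imitate the offline decisions.
        W = np.zeros((STATE_DIM, N_ACTIONS))
        for _ in range(epochs):
            for s, a in samples:
                x = state_features(s)
                logits = x @ W
                p = np.exp(logits - logits.max()); p /= p.sum()
                grad = np.outer(x, p)      # gradient of cross-entropy loss
                grad[:, a] -= x
                W -= lr * grad
        return W

    def imitator_action(W, state):
        return int(np.argmax(state_features(state) @ W))

    def disagreement_rate(W, n_checks=200):
        # Statistical check: sample states and estimate how often the learnt
        # policy differs from the expensive offline policy.
        states = [random.randrange(STATE_DIM) for _ in range(n_checks)]
        return np.mean([imitator_action(W, s) != expensive_policy(s) for s in states])

    # Main loop: sample, train, then focus extra samples where the policies disagree.
    samples = [(s, expensive_policy(s)) for s in random.choices(range(STATE_DIM), k=50)]
    W = train_imitator(samples)
    while disagreement_rate(W) > 0.05:
        extra = [s for s in range(STATE_DIM)
                 if imitator_action(W, s) != expensive_policy(s)]
        samples += [(s, expensive_policy(s)) for s in extra]
        W = train_imitator(samples)

In the paper's setting the imitator is a deep network and the disagreement check is a statistical model-checking procedure over trajectories rather than an exhaustive scan of states; the sketch only conveys the overall control flow of the offline/online pipeline.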