By Křetínský, Jan; Perez, Guillermo A.; Raskin, Jean-François
Reference: Leibniz International Proceedings in Informatics, 118, 8
Publication: Published, 2018-08
Peer-reviewed article
Abstract: We formalize the problem of maximizing the mean-payoff value with high probability while satisfying a parity objective in a Markov decision process (MDP) with unknown probabilistic transition function and unknown reward function. Assuming the support of the unknown transition function and a lower bound on the minimal transition probability are known in advance, we show that in MDPs consisting of a single end component, two combinations of guarantees on the parity and mean-payoff objectives can be achieved, depending on how much memory one is willing to use. (i) For all ε and γ, we can construct an online-learning finite-memory strategy that almost surely satisfies the parity objective and achieves an ε-optimal mean payoff with probability at least 1 − γ. (ii) Alternatively, for all ε and γ, there exists an online-learning infinite-memory strategy that satisfies the parity objective surely and achieves an ε-optimal mean payoff with probability at least 1 − γ. We extend the above results to MDPs consisting of more than one end component in a natural way. Finally, we show that the aforementioned guarantees are tight, i.e., there are MDPs for which stronger combinations of the guarantees cannot be ensured.
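For concreteness, the two objectives can be stated as follows; this is a sketch using the standard liminf mean-payoff and min-even parity conventions, which the abstract itself does not spell out. For an infinite run $\rho = s_0 a_0 s_1 a_1 \ldots$ of the MDP with reward function $r$ and priority function $p$, the mean payoff is

\[
\mathrm{MP}(\rho) \;=\; \liminf_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} r(s_i, a_i),
\]

and $\rho$ satisfies the parity objective if the minimal priority occurring infinitely often along $\rho$ is even. Guarantee (i) then asks for a strategy $\sigma$ such that

\[
\Pr^{\sigma}[\text{Parity}] = 1
\qquad\text{and}\qquad
\Pr^{\sigma}\!\big[\mathrm{MP} \ge \mathrm{val}^{*} - \varepsilon\big] \;\ge\; 1 - \gamma,
\]

where $\mathrm{val}^{*}$ denotes the optimal mean-payoff value. Guarantee (ii) strengthens the first condition from almost-sure to sure satisfaction, i.e., the parity objective must hold on every run, not merely with probability 1.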