By Doyen, Laurent; Shirmohammadi, Mahsa; Massart, Thierry
Reference: Lecture Notes in Computer Science, 8704 LNCS, pages 234-248
Publication: Published, 2014
Peer-reviewed article
Abstract: We consider synchronizing properties of Markov decision processes (MDP), viewed as generators of sequences of probability distributions over states. A probability distribution is p-synchronizing if the probability mass is at least p in some state, and a sequence of probability distributions is weakly p-synchronizing, or strongly p-synchronizing, if respectively infinitely many, or all but finitely many, distributions in the sequence are p-synchronizing. For each synchronizing mode, an MDP can be (i) sure winning if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning if there is a strategy that produces a sequence that is, for all ε>0, a (1-ε)-synchronizing sequence; (iii) limit-sure winning if for all ε>0, there is a strategy that produces a (1-ε)-synchronizing sequence. For each synchronizing and winning mode, we consider the problem of deciding whether an MDP is winning, and we establish matching upper and lower complexity bounds for these problems, as well as the optimal memory requirement for winning strategies: (a) for all winning modes, we show that the problems are PSPACE-complete for weak synchronization, and PTIME-complete for strong synchronization; (b) we show that for weak synchronization, exponential memory is sufficient and may be necessary for sure winning, and infinite memory is necessary for almost-sure winning; for strong synchronization, linear-size memory is sufficient and may be necessary in all modes; (c) we show a robustness result that the almost-sure and limit-sure winning modes coincide for both weak and strong synchronization. © 2014 Springer-Verlag.
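To make the p-synchronizing condition concrete, here is a minimal Python sketch, not taken from the paper and using hypothetical function names, that checks whether a distribution places probability mass at least p in some state and reports which distributions in a finite prefix satisfy it. The weak and strong modes themselves are properties of infinite sequences (infinitely many, respectively all but finitely many, p-synchronizing distributions), so on a finite prefix the sketch can only report which elements qualify.

```python
from typing import Sequence

def is_p_synchronizing(dist: Sequence[float], p: float) -> bool:
    """A distribution is p-synchronizing if some state carries
    probability mass at least p."""
    return max(dist) >= p

def classify_finite_prefix(dists: Sequence[Sequence[float]], p: float) -> str:
    """Report which distributions of a finite prefix are p-synchronizing.
    (Weak/strong p-synchronization concern infinite sequences and cannot
    be decided from a finite prefix alone.)"""
    flags = [is_p_synchronizing(d, p) for d in dists]
    if all(flags):
        return "every distribution in the prefix is p-synchronizing"
    if any(flags):
        return "some distributions in the prefix are p-synchronizing"
    return "no distribution in the prefix is p-synchronizing"

# Hypothetical example: probability mass accumulates in state 0 over time.
prefix = [
    [0.50, 0.30, 0.20],
    [0.70, 0.20, 0.10],
    [0.90, 0.05, 0.05],
]
print(is_p_synchronizing(prefix[-1], 0.9))   # True
print(classify_finite_prefix(prefix, 0.7))   # some distributions ...
```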