by Chatenet, Nathalie; Bersini, Hugues
Reference: Lecture Notes in Computer Science, 1327, pages 284-288
Publication: Published, 1997
Peer-reviewed article
Abstract: In many reinforcement learning applications and solutions, time information is implicitly contained in the state information. Although time-sequential decomposition is inherent to dynamic programming, this aspect has simply been omitted in usual Q-Learning applications. A non-stationary environment, a non-stationary reward/punishment, or a time-dependent cost to minimize will naturally lead to non-stationary optimal solutions, in which time has to be explicitly accounted for in the search for the optimal solution. Although largely neglected so far, non-stationarity, and the computational cost it is likely to rapidly induce, should become a concern to the reinforcement learning community. The particular nature of time calls for dedicated processing when attempting to develop economical and heuristic solutions. In this paper, two such heuristics are proposed, justified, and illustrated on a simple application.
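To make the abstract's central point concrete, the sketch below (not taken from the paper; the toy line-world environment, horizon, reward drift, and parameter values are all assumptions introduced here for illustration) shows one straightforward way of handling a non-stationary reward with tabular Q-Learning: augmenting the state with the time index so the value table is indexed by (time, state, action) rather than by state alone.

```python
import random

# Illustrative sketch only: tabular Q-Learning in which the time step is an
# explicit part of the state, so a non-stationary (time-dependent) reward can
# be represented. Environment and parameters below are assumptions.

T = 10              # finite horizon (number of decision stages)
N_STATES = 5        # positions on a small line world
ACTIONS = [-1, +1]  # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q is indexed by (time, state, action): the augmented state is (t, s)
Q = {(t, s, a): 0.0 for t in range(T) for s in range(N_STATES) for a in ACTIONS}

def reward(t, s):
    # Non-stationary reward: the rewarding position drifts with time
    target = t % N_STATES
    return 1.0 if s == target else 0.0

def step(s, a):
    return min(max(s + a, 0), N_STATES - 1)

def choose_action(t, s):
    # Epsilon-greedy policy over the time-augmented Q-table
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(t, s, a)])

for episode in range(2000):
    s = random.randrange(N_STATES)
    for t in range(T):
        a = choose_action(t, s)
        s_next = step(s, a)
        r = reward(t, s_next)
        # Bootstrap from the next time step; at the horizon the future value is 0
        if t + 1 < T:
            best_next = max(Q[(t + 1, s_next, a2)] for a2 in ACTIONS)
        else:
            best_next = 0.0
        Q[(t, s, a)] += ALPHA * (r + GAMMA * best_next - Q[(t, s, a)])
        s = s_next
```

The cost the abstract warns about is visible here: the table grows linearly with the horizon T, which is precisely the kind of blow-up that motivates economical heuristics for handling time.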