Singularly Perturbed Discounted Markov Control Processes in a General State Space
DUFOUR, François
Institut de Mathématiques de Bordeaux [IMB]
Quality control and dynamic reliability [CQFD]
Language
English
Journal article
Published in
SIAM Journal on Control and Optimization, 2012, vol. 50, no. 2, pp. 720-747
Society for Industrial and Applied Mathematics
Abstract (in English)
This work studies the asymptotic optimality of discrete-time Markov Decision Processes (MDPs for short) with general state and action spaces and having weak and strong interactions. Using an approach similar to the one developed in Liu (2001), the idea in this paper is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter $\epsilon > 0$ in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as $\epsilon$ goes to zero. In the second part of the paper, it is proved that a feedback control policy for the original control problem, defined by using an optimal feedback policy for the limit problem, is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
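Schematically, the two results described in the abstract can be read as follows; the notation here is illustrative and not taken from the paper: $V^{*}_{\epsilon}$ denotes the optimal discounted value of the perturbed MDP, $\bar{V}^{*}$ the optimal value of the limit averaged problem on the reduced state space, $\phi$ a generic aggregation map from the original state space to the reduced one, and $\pi^{\epsilon}$ the feedback policy for the perturbed problem constructed from an optimal feedback policy of the limit problem.

% Illustrative restatement of the two results; all symbols are placeholders,
% not the paper's own notation.
\[
  \lim_{\epsilon \to 0} V^{*}_{\epsilon}(x) \;=\; \bar{V}^{*}\bigl(\phi(x)\bigr)
  \qquad \text{(convergence of the value functions),}
\]
\[
  \lim_{\epsilon \to 0}
  \Bigl( V_{\epsilon}\bigl(\pi^{\epsilon}, x\bigr) - V^{*}_{\epsilon}(x) \Bigr) \;=\; 0
  \qquad \text{(asymptotic optimality of the constructed policy).}
\]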