Approximation of Discounted Minimax Markov Control Problems and Zero-Sum Markov Games Using Hausdorff and Wasserstein Distances
DUFOUR, François
Institut Polytechnique de Bordeaux [Bordeaux INP]
Quality control and dynamic reliability [CQFD]
Institut de Mathématiques de Bordeaux [IMB]
Language
en
Journal article
Published in
Dynamic Games and Applications, 2019, vol. 9, no. 1, pp. 68-102
Springer Verlag
Abstract (in English)
This paper is concerned with a minimax control problem (also known as a robust Markov decision process (MDP) or a game against nature) with general state and action spaces under the discounted cost optimality criterion. We are interested in numerically approximating the value function and an optimal strategy of this general discounted minimax control problem. To this end, we derive structural Lipschitz continuity properties of the solution of this robust MDP by imposing suitable conditions on the model, including Lipschitz continuity of the elements of the model and absolute continuity of the Markov transition kernel with respect to some probability measure μ. We are then able to provide an approximating minimax control model with finite state and action spaces, and hence computationally tractable, by combining these structural properties with a suitable discretization procedure of the state space (related to a probabilistic criterion) and the action spaces (associated with a geometric criterion). Finally, it is shown that the corresponding approximation errors for the value function and the optimal strategy can be controlled in terms of the discretization parameters. These results are also extended to a two-player zero-sum Markov game.
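Once the finite approximating model is constructed, it can be solved by standard dynamic programming. The sketch below is only an illustration of that final step, not the paper's specific construction: it runs minimax value iteration on a finite robust MDP, where the arrays `cost`, `P` and the discount factor `alpha` are hypothetical inputs standing in for the discretized cost, transition kernel and discount factor.

```python
import numpy as np

def minimax_value_iteration(cost, P, alpha, tol=1e-8, max_iter=10_000):
    """Illustrative minimax value iteration for a finite robust MDP.

    cost[x, a, b] : one-stage cost in state x when the controller plays a
                    and nature plays b
    P[x, a, b, y] : probability of moving from state x to state y under (a, b)
    alpha         : discount factor in (0, 1)

    Returns the approximate value function V and a greedy (worst-case
    optimal) controller strategy.
    """
    V = np.zeros(cost.shape[0])
    for _ in range(max_iter):
        # Q[x, a, b] = c(x, a, b) + alpha * sum_y P(y | x, a, b) * V(y)
        Q = cost + alpha * (P @ V)
        # The controller minimises the worst case over nature's actions.
        V_new = Q.max(axis=2).min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = (cost + alpha * (P @ V)).max(axis=2).argmin(axis=1)
    return V, policy
```

Because the discounted minimax Bellman operator is an α-contraction, this iteration converges to the value function of the finite model; in the setting of the paper, that value and the associated strategy approximate those of the original problem up to an error controlled by the discretization parameters.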
Keywords (in English)
Robust Markov decision process
Approximation of control models
Wasserstein distance
Minimax control problem
Origin
Imported from HAL