A convex programming approach for discrete-time Markov decision processes under the expected total reward criterion
hal.structure.identifier | Quality control and dynamic reliability [CQFD] | |
hal.structure.identifier | Institut Polytechnique de Bordeaux [Bordeaux INP] | |
hal.structure.identifier | Institut de Mathématiques de Bordeaux [IMB] | |
dc.contributor.author | DUFOUR, François | |
hal.structure.identifier | Quality control and dynamic reliability [CQFD] | |
hal.structure.identifier | Institut de Mathématiques de Bordeaux [IMB] | |
dc.contributor.author | GENADOT, Alexandre | |
dc.date.accessioned | 2024-04-04T03:01:24Z | |
dc.date.available | 2024-04-04T03:01:24Z | |
dc.identifier.uri | https://oskar-bordeaux.fr/handle/20.500.12278/192897 | |
dc.description.abstractEn | In this work, we study discrete-time Markov decision processes (MDPs) under constraints, with Borel state and action spaces, where all the performance functions take the form of the expected total reward (ETR) criterion over the infinite time horizon. One of our objectives is to propose a convex programming formulation for this type of MDP. It is shown that the values of the constrained control problem and of the associated convex program coincide, and that if there exists an optimal solution to the convex program, then there exists a stationary randomized policy which is optimal for the MDP. It is also shown that, in the framework of constrained control problems, the supremum of the expected total rewards over the set of randomized policies equals the supremum of the expected total rewards over the set of stationary randomized policies. We consider standard hypotheses such as the so-called continuity-compactness conditions and a Slater-type condition. Our assumptions are weak enough to cover cases that have not yet been addressed in the literature. An example is presented to illustrate our results with respect to those of the literature. | |
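The convex-programming idea summarized in the abstract can be sketched, in the simplest finite case, as a linear program over occupation measures. The toy two-state MDP below (one transient state, one absorbing reward-free state) and all of its numbers are invented for illustration and are not taken from the paper; the sketch assumes `scipy` is available.

```python
# Toy illustration (not from the paper): occupation-measure LP for a
# constrained total-reward MDP with one transient state and one
# absorbing, reward-free state.  All numbers here are made up.
from scipy.optimize import linprog

# In the transient state there are two actions:
#   action 0: reward 1, stays with prob 0.5 (absorbs with prob 0.5)
#   action 1: reward 2, stays with prob 0.8, incurs cost 1 per use
# Decision variable x[a] = expected number of uses of action a,
# i.e. the occupation measure restricted to the transient state.

# Maximize x0 + 2*x1  ->  linprog minimizes, so negate the rewards.
c = [-1.0, -2.0]
# Flow balance: sum_a (1 - p_stay(a)) * x[a] = 1 (mass injected at t=0).
A_eq = [[0.5, 0.2]]
b_eq = [1.0]
# Constraint on the expected total cost: 1 * x1 <= 3.
A_ub = [[0.0, 1.0]]
b_ub = [3.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
x0, x1 = res.x
print("occupation measure:", x0, x1)   # 0.8, 3.0
print("optimal value:", -res.fun)      # 6.8
# As in the abstract, an optimal solution of the convex program yields a
# stationary randomized policy: play action a with probability
# x[a] / (x0 + x1) in the transient state.
print("policy:", x0 / (x0 + x1), x1 / (x0 + x1))
```

The constraint forces randomization: the unconstrained optimum would use action 1 exclusively (value 10, cost 5), while the constrained optimum mixes both actions, which is exactly why stationary randomized policies appear in the paper's result.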
dc.language.iso | en | |
dc.subject.en | Convex program | |
dc.subject.en | Markov decision process | |
dc.subject.en | Expected total reward criterion | |
dc.subject.en | Occupation measure | |
dc.subject.en | Constraints | |
dc.title.en | A convex programming approach for discrete-time Markov decision processes under the expected total reward criterion | |
dc.type | Working paper - Preprint | |
dc.subject.hal | Mathématiques [math]/Probabilités [math.PR] | |
dc.subject.hal | Mathématiques [math]/Optimisation et contrôle [math.OC] | |
dc.identifier.arxiv | 1903.08853 | |
bordeaux.hal.laboratories | Institut de Mathématiques de Bordeaux (IMB) - UMR 5251 | * |
bordeaux.institution | Université de Bordeaux | |
bordeaux.institution | Bordeaux INP | |
bordeaux.institution | CNRS | |
hal.identifier | hal-02071036 | |
hal.version | 1 | |
hal.origin.link | https://hal.archives-ouvertes.fr//hal-02071036v1 | |
bordeaux.COinS | ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.au=DUFOUR,%20Fran%C3%A7ois&GENADOT,%20Alexandre&rft.genre=preprint |