Selected Publication:
Hajiabolhassan, H; Ortner, R.
Online Regret Bounds for Satisficing in Markov Decision Processes
MATH OPER RES. 2025;
DOI: 10.1287/moor.2023.0275
- Leading authors Med Uni Graz: Hajiabolhassan Hossein
- Abstract:
- We consider general reinforcement learning under the average reward criterion in Markov decision processes (MDPs), when the learner's goal is not to learn an optimal policy but to accept any policy whose average reward is above a given satisfaction level a. We show that with this more modest objective it is possible to give algorithms that have only constant regret with respect to the level a, provided that there is a policy above this level. This generalizes known results from the bandit setting to MDPs. Further, we present a more general algorithm that achieves the best of both worlds: if the optimal policy has average reward above a, this algorithm has bounded regret with respect to a; on the other hand, if all policies are below a, then the expected regret with respect to the optimal policy is bounded as for the UCRL2 algorithm.
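- Illustration (not part of the record; the notation is assumed for clarity rather than taken from the paper): one common way to formalize the two regret notions compared in the abstract is
  $\mathrm{Regret}_a(T) = \sum_{t=1}^{T} (a - r_t)$  (regret with respect to the satisfaction level a)
  $\mathrm{Regret}(T) = T\rho^{*} - \sum_{t=1}^{T} r_t$  (regret with respect to the optimal average reward $\rho^{*}$)
  where $r_t$ is the reward collected at step t. Read this way, the first result states that the regret with respect to a stays bounded by a constant, independent of T, whenever some policy has average reward above a, while the best-of-both-worlds algorithm additionally keeps the regret with respect to $\rho^{*}$ at UCRL2-style rates (of order $\sqrt{T}$ up to problem-dependent factors) when no policy reaches level a.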
- Find related publications in this database (Keywords)
- reinforcement learning
- Markov decision process
- regret