
Markov Decision Processes with Arbitrarily Varying Rewards
Jia Yuan Yu, Shie Mannor and Nahum Shimkin
Mathematics of Operations Research, Volume 34, Number 3, pp. 737-757, 2009.

Abstract

We consider a learning problem where the decision maker interacts with a standard Markov decision process, with the exception that the reward functions vary arbitrarily over time. We show that, against every possible realization of the reward process, the agent can perform as well - in hindsight - as every stationary policy. This generalizes the classical no-regret result for repeated games. Specifically, we present an efficient online algorithm - in the spirit of reinforcement learning - that ensures that the agent’s average performance loss vanishes over time, provided that the environment is oblivious to the agent’s actions. Moreover, it is possible to modify the basic algorithm to cope with instances where reward observations are limited to the agent’s trajectory. We present further modifications that reduce the computational cost by using function approximation and that track the optimal policy through infrequent changes.
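To make the regret criterion concrete, here is a minimal, self-contained Python sketch. It is not the paper's algorithm (which is a phased, lazy follow-the-perturbed-leader scheme); instead it runs exponential weights over the finitely many stationary deterministic policies of a toy two-state MDP, assuming known transitions, full-information reward feedback, and an oblivious reward sequence. All names (P, avg_reward, eta, and so on) are illustrative assumptions, and each round is scored by a policy's long-run average reward, which side-steps the mixing costs that the paper's analysis handles explicitly.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: 2 states, 2 actions; P[s, a] is the next-state distribution.
n_states, n_actions = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.3, 0.7]]])

def stationary_dist(policy):
    """Stationary distribution of the chain induced by a deterministic policy."""
    Ppi = np.array([P[s, policy[s]] for s in range(n_states)])
    # mu solves mu = mu * Ppi; take the eigenvector of Ppi.T for eigenvalue 1.
    vals, vecs = np.linalg.eig(Ppi.T)
    mu = np.real(vecs[:, np.argmax(np.real(vals))])
    return mu / mu.sum()

def avg_reward(policy, r):
    """Long-run average reward of a policy under the reward matrix r[s, a]."""
    mu = stationary_dist(policy)
    return sum(mu[s] * r[s, policy[s]] for s in range(n_states))

# Enumerate all stationary deterministic policies (state -> action maps).
policies = list(itertools.product(range(n_actions), repeat=n_states))

T = 200    # number of rounds
eta = 0.5  # learning rate, tuned crudely for this toy horizon
weights = np.ones(len(policies))
learner_total, cumulative = 0.0, np.zeros(len(policies))

for t in range(T):
    # Oblivious adversary: the reward function varies arbitrarily with t.
    r = rng.random((n_states, n_actions))
    # Exponential weights over stationary policies (full-information feedback).
    probs = weights / weights.sum()
    gains = np.array([avg_reward(pi, r) for pi in policies])
    learner_total += probs @ gains
    cumulative += gains
    weights *= np.exp(eta * gains)

best = cumulative.max()  # best stationary policy in hindsight
print(f"average regret after {T} rounds: {(best - learner_total) / T:.4f}")

Under an oblivious reward sequence, the printed average regret decays toward zero as T grows, which is the vanishing average performance loss the abstract describes; the paper's contribution is achieving this within a single MDP trajectory rather than with the per-round policy evaluation assumed here.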

EPrint Type: Article
Subjects: Theory & Algorithms
ID Code: 5928
Deposited By: Nahum Shimkin
Deposited On: 08 March 2010