PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Experts in a Markov Decision Process
Eyal Even-Dar, Sham Kakade and Yishay Mansour
In: NIPS, December 14-16, 2004, Vancouver, B.C., Canada.



We consider the MDP setting in which the reward function is chosen arbitrarily (possibly by an adversary) at each time step, while the dynamics remain fixed. As in the experts setting, we ask how well an agent can do compared to the reward achieved by the best stationary policy over time. We provide \emph{efficient} algorithms whose regret bounds have \emph{no dependence} on the size of the state space; instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions. We also show that when the dynamics change over time, the problem becomes computationally hard.
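To make the "experts setting" baseline concrete, here is a minimal sketch of the classical full-information multiplicative-weights (Hedge) algorithm over a fixed set of actions, whose regret grows logarithmically in the number of actions; the abstract's contribution is to extend such guarantees to MDPs with fixed dynamics. All function and parameter names below are illustrative, not from the paper, and the learning rate is a standard textbook choice.

```python
import math

def hedge(num_actions, horizon, reward_fn, eta=None):
    """Full-information Hedge over a fixed action set.

    reward_fn(t, a) returns the adversarially chosen reward in [0, 1]
    for action a at step t; all actions' rewards are revealed after
    each step (the experts setting, not the bandit setting).
    Returns the algorithm's total expected reward over the horizon.
    """
    if eta is None:
        # Standard learning rate: regret is O(sqrt(T * log(num_actions))).
        eta = math.sqrt(8 * math.log(num_actions) / horizon)
    weights = [1.0] * num_actions
    total_reward = 0.0
    for t in range(horizon):
        z = sum(weights)
        probs = [w / z for w in weights]
        rewards = [reward_fn(t, a) for a in range(num_actions)]
        # Expected reward of playing the current distribution.
        total_reward += sum(p * r for p, r in zip(probs, rewards))
        # Full-information update: every action's weight is adjusted.
        for a in range(num_actions):
            weights[a] *= math.exp(eta * rewards[a])
    return total_reward
```

Against a reward sequence where one action always pays 1 and the rest pay 0, the weight of the best action grows exponentially, so the algorithm's cumulative reward tracks the best fixed action up to an additive term of order sqrt(horizon * log(num_actions)).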

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 193
Deposited By: Yishay Mansour
Deposited On: 23 November 2004
