PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Reinforcement Learning in POMDPs Without Resets
Eyal Even-Dar, Sham Kakade and Yishay Mansour
In: IJCAI 2005, July 31 - August 5, 2005, Edinburgh, Scotland, UK.


We consider the most realistic reinforcement learning setting in which an agent starts in an unknown environment (the POMDP) and must follow one continuous and uninterrupted chain of experience with no access to "resets" or "offline" simulation. We provide algorithms for general POMDPs that obtain near optimal average reward. One algorithm we present has a convergence rate which depends exponentially on a certain horizon time of an optimal policy, but has no dependence on the number of (unobservable) states. The main building block of our algorithms is an implementation of an approximate reset strategy, which we show always exists in every POMDP. An interesting aspect of our algorithms is how they use this strategy when balancing exploration and exploitation.
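The approximate reset idea in the abstract can be illustrated with a toy sketch. Everything below (the two-state POMDP, the "home" action, the probabilities) is a hypothetical example, not the paper's construction: repeating a homing-style action drives the hidden-state distribution toward a fixed belief, which gives the learner an approximate reset even though no true reset button exists.

```python
import random

# Toy two-state POMDP: hidden state in {0, 1}. The "home" action moves
# the chain to state 0 with high probability regardless of the current
# state, so repeating it acts as an *approximate* reset of the belief.
def step(state, action, rng):
    if action == "home":
        next_state = 0 if rng.random() < 0.9 else 1
    else:  # an exploratory action: random walk between the two states
        next_state = 1 - state if rng.random() < 0.5 else state
    reward = 1.0 if next_state == 0 else 0.0
    # Noisy observation of the hidden state (correct 80% of the time).
    obs = next_state if rng.random() < 0.8 else 1 - next_state
    return next_state, obs, reward

def approximate_reset(state, rng, k=5):
    """Repeat the homing action k times; the belief over the hidden
    state ends near the same distribution whatever the start state."""
    for _ in range(k):
        state, _, _ = step(state, "home", rng)
    return state

rng = random.Random(0)
# Run many approximate resets from both possible start states: the
# fraction landing in state 0 is near 0.9 in either case, showing the
# post-reset belief is (approximately) start-independent.
resets = [approximate_reset(s, rng) for s in (0, 1) for _ in range(500)]
frac_home = resets.count(0) / len(resets)
print(round(frac_home, 2))  # close to 0.9 regardless of start state
```

A learner can interleave such approximate resets with exploration phases, so that each trial begins from a near-identical belief, which is the role the reset strategy plays when balancing exploration and exploitation.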

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 988
Deposited By: Yishay Mansour
Deposited On: 19 June 2005