PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Near-optimal Regret Bounds for Reinforcement Learning
Peter Auer, Thomas Jaksch and Ronald Ortner
In: NIPS 2008, 8-13 Dec 2008, Vancouver, Canada.

Abstract

We present an algorithm, UCRL2, which we show to be nearly optimal by a new analysis of the "optimism in the face of uncertainty" paradigm. We consider undiscounted rewards and bound the regret, i.e. the sum of rewards missed (also during learning!) compared to an optimal policy. In order to describe the transition structure of an MDP we propose a new parameter: an MDP has diameter D if for any pair of states s, s' there is a policy which moves from s to s' in at most D steps (on average). We provide the best known bounds for undiscounted reinforcement learning. The total regret of UCRL2 is O(D S sqrt(A T log(T/delta))) after T steps for any unknown MDP with S states, A actions per state, and diameter D. This bound holds with high probability and corresponds to a PAC-like bound of O(D^2 S^2 A / epsilon^2 log(DSA/(delta epsilon))) steps until the average per-step regret is at most epsilon. We also present a lower bound of Omega(sqrt(DSAT)) on the total regret of any learning algorithm. These new bounds demonstrate the utility of the diameter as a structural parameter of an MDP.
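
To make the diameter parameter concrete, the following minimal Python/NumPy sketch (illustrative only, not code from the paper; the function name mdp_diameter and the stopping rule are assumptions) computes D for a small tabular MDP given as a transition tensor P[s, a, s'], assumed communicating. For each target state it runs value iteration on the minimum expected hitting time, i.e. the stochastic-shortest-path problem, and D is the largest such expected hitting time over all ordered state pairs.

import numpy as np

def mdp_diameter(P, tol=1e-9, max_iter=100000):
    # P[s, a, s']: transition probabilities of a tabular, communicating MDP.
    # For each target state g, value iteration on the minimum expected hitting time
    #   h(s) = 1 + min_a sum_{s'} P[s, a, s'] h(s'),  with h(g) = 0,
    # yields the fastest policy from each s to g; D is the maximum over all pairs.
    S, A, _ = P.shape
    D = 0.0
    for g in range(S):
        h = np.zeros(S)
        for _ in range(max_iter):
            h_new = 1.0 + np.min(P @ h, axis=1)  # Bellman backup, one value per state
            h_new[g] = 0.0                       # already at the target state
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        D = max(D, h.max())
    return D

# Two-state example: from either state, action 0 stays put, action 1 moves to the
# other state with probability 0.5, so the expected travel time (diameter) is 2.
P = np.array([[[1.0, 0.0], [0.5, 0.5]],
              [[0.0, 1.0], [0.5, 0.5]]])
print(mdp_diameter(P))  # approximately 2.0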

EPrint Type: Conference or Workshop Item (Poster)
Subjects: Computational, Information-Theoretic Learning with Statistics; Learning/Statistics & Optimisation
ID Code: 4595
Deposited By: Thomas Jaksch
Deposited On: 13 March 2009