PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path
András Antos, Csaba Szepesvári and Rémi Munos
In: Learning Theory: The Nineteenth Annual Conference on Learning Theory, COLT 2006, Proceedings. Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence, 4005. (2006) Springer-Verlag, Berlin, Heidelberg, Germany, pp. 574-588. ISBN 978-3-540-35294-5

Abstract

We consider batch reinforcement learning problems in continuous-space, expected total discounted-reward Markovian Decision Problems. As opposed to previous theoretical work, we consider the case when the training data consists of a single sample path (trajectory) of some behaviour policy. In particular, we do not assume access to a generative model of the environment. The algorithm studied is policy iteration where, in successive iterations, the Q-functions of the intermediate policies are obtained by minimizing a novel Bellman-residual type error. PAC-style polynomial bounds are derived on the number of samples needed to guarantee near-optimal performance, where the bound depends on the mixing rate of the trajectory, the smoothness properties of the underlying Markovian Decision Problem, and the approximation power and capacity of the function set used.
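As a rough illustration of the kind of procedure the abstract describes, the following is a minimal Python sketch of fitted policy iteration from a single trajectory, using linear Q-function approximation over a finite action set. It minimizes the plain empirical squared Bellman residual; the paper instead minimizes a modified, debiased Bellman-residual loss, and the feature map, action encoding, and all names below are assumptions made for this example, not the authors' construction.

import numpy as np

# A minimal sketch (not the authors' exact method): fitted policy
# iteration from a single trajectory, with linear Q-function
# approximation over a finite action set. It minimizes the plain
# empirical squared Bellman residual; the paper instead uses a
# modified, debiased Bellman-residual loss.

def features(x, a, n_actions, dim):
    # Toy state-action features: the state vector is copied into the
    # block of coordinates belonging to action a (hypothetical choice).
    phi = np.zeros(n_actions * dim)
    phi[a * dim:(a + 1) * dim] = x
    return phi

def fit_q(trajectory, policy, gamma, n_actions, dim, ridge=1e-6):
    # Least-squares fit of Q_w(x, a) = w . phi(x, a) minimizing
    #   sum_t (Q_w(x_t, a_t) - r_t - gamma * Q_w(x_{t+1}, pi(x_{t+1})))^2,
    # which is quadratic in w, so the minimizer solves a linear system.
    D = n_actions * dim
    A = np.zeros((D, D))
    b = np.zeros(D)
    for x, a, r, x_next in trajectory:
        phi = features(x, a, n_actions, dim)
        phi_next = features(x_next, policy(x_next), n_actions, dim)
        diff = phi - gamma * phi_next        # residual is w . diff - r
        A += np.outer(diff, diff)
        b += r * diff
    return np.linalg.solve(A + ridge * np.eye(D), b)

def greedy_policy(w, n_actions, dim):
    # Policy improvement: act greedily with respect to the fitted Q.
    def policy(x):
        return int(np.argmax([w @ features(x, a, n_actions, dim)
                              for a in range(n_actions)]))
    return policy

def fitted_policy_iteration(trajectory, gamma, n_actions, dim, n_iters=20):
    policy = lambda x: 0                     # arbitrary initial policy
    for _ in range(n_iters):
        w = fit_q(trajectory, policy, gamma, n_actions, dim)
        policy = greedy_policy(w, n_actions, dim)
    return policy

Here trajectory stands for a list of (x_t, a_t, r_t, x_{t+1}) tuples extracted from the single sample path of the behaviour policy; the paper's sample-complexity bounds additionally require this path to mix sufficiently fast.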

EPrint Type: Book Section
Additional Information: Pittsburgh, PA, USA, June 22-25, 2006.
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 3806
Deposited By: András Antos
Deposited On: 22 February 2008