Finite-Sample Analysis of LSTD
Alessandro Lazaric, Mohammad Ghavamzadeh and Rémi Munos
In: Twenty-Seventh International Conference on Machine Learning (ICML-2010), 21-24 June 2010, Haifa, Israel.

Abstract

In this paper we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is $\beta$-mixing.
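The abstract refers to the standard least-squares temporal-difference (LSTD) estimator, which solves the linear system $A\theta = b$ built from one trajectory, with $A = \sum_t \phi(s_t)\,(\phi(s_t) - \gamma\,\phi(s_{t+1}))^\top$ and $b = \sum_t \phi(s_t)\,r_t$. The sketch below is a minimal illustration of that batch estimator; the feature map, toy data, and ridge constant `reg` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of batch LSTD for policy evaluation.
# Hypothetical features/data for illustration; not the paper's setup.
import numpy as np

def lstd(features, rewards, gamma=0.95, reg=1e-6):
    """Estimate value-function weights theta from a single trajectory.

    features : array (T + 1, d) holding phi(s_0), ..., phi(s_T)
    rewards  : array (T,) holding r_0, ..., r_{T-1}
    Solves A theta = b with
      A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T
      b = sum_t phi(s_t) * r_t
    """
    phi, phi_next = features[:-1], features[1:]
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    # Small ridge term keeps A invertible when features are rank-deficient.
    return np.linalg.solve(A + reg * np.eye(A.shape[1]), b)

# Toy usage on a random trajectory (purely synthetic data).
rng = np.random.default_rng(0)
T, d = 1000, 5
features = rng.normal(size=(T + 1, d))
rewards = rng.normal(size=T)
theta = lstd(features, rewards)
values = features @ theta  # estimated V(s_t) at each visited state
```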

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Learning/Statistics & Optimisation
ID Code: 7228
Deposited By: Alessandro Lazaric
Deposited On: 12 March 2011