PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

A Convergent O(n) Algorithm for Off-policy Temporal-difference Learning with Linear Function Approximation
Richard S. Sutton, Csaba Szepesvári and Hamid R. Maei
In: NIPS-21, 8-11 Dec 2008, Vancouver, BC, Canada.

Abstract

We introduce the first temporal-difference learning algorithm that is stable with linear function approximation and off-policy training, for any finite Markov decision process, behavior policy, and target policy, and whose complexity scales linearly in the number of parameters. We consider an i.i.d. policy-evaluation setting in which the data need not come from on-policy experience. The gradient temporal-difference (GTD) algorithm estimates the expected update vector of the TD(0) algorithm and performs stochastic gradient descent on its L2 norm. We prove that this algorithm is stable and convergent under the usual stochastic approximation conditions to the same least-squares solution found by LSTD, but without LSTD's quadratic computational complexity. GTD is online and incremental, and does not involve multiplying by products of likelihood ratios as in importance-sampling methods.
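
The snippet below is a minimal NumPy sketch of the kind of O(n) update the abstract describes, assuming the two-timescale form in which an auxiliary vector w estimates the expected TD(0) update vector E[delta * phi]; the function name, variable names, and step sizes alpha and beta are illustrative, not the authors' reference code.

    import numpy as np

    def gtd_step(theta, w, phi, reward, phi_next, alpha, beta, gamma):
        """One GTD update for a transition (phi, r, phi'); cost is O(n) in the feature dimension."""
        delta = reward + gamma * phi_next.dot(theta) - phi.dot(theta)  # TD(0) error
        # Primary weights descend the squared L2 norm of the expected TD(0) update,
        # with w standing in for the unknown expectation.
        theta = theta + alpha * (phi - gamma * phi_next) * phi.dot(w)
        # Auxiliary weights track E[delta * phi] with an exponential running average.
        w = w + beta * (delta * phi - w)
        return theta, w

Each step uses only inner products and vector additions over the n-dimensional feature vectors, so the per-step cost stays linear in the number of parameters, in contrast with LSTD's quadratic cost.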

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 4929
Deposited By: Csaba Szepesvári
Deposited On: 24 March 2009