Finite-Sample Analysis of Lasso-TD
Mohammad Ghavamzadeh, Alessandro Lazaric, Remi Munos and Matthew Hoffman
In: Twenty-Eighth International Conference on Machine Learning (ICML-2011), Seattle, Washington, USA (2011).
In this paper, we analyze the performance of Lasso-TD, a modification of LSTD in which the projection operator is defined as a Lasso problem. We first show that Lasso-TD is guaranteed to have a unique fixed point and that its algorithmic implementation coincides with the recently presented LARS-TD and LC-TD methods. We then derive two bounds on the prediction error of Lasso-TD in the Markov design setting, i.e., when the performance is evaluated on the same states used by the method. The first bound makes no assumptions but has a slow rate w.r.t. the number of samples. The second bound holds under an assumption on the empirical Gram matrix, called the compatibility condition, but has an improved rate and directly relates the prediction error to the sparsity of the value function in the feature space at hand.
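To make the fixed-point formulation concrete, the following is a minimal sketch, not the paper's algorithm: it iterates a Lasso regression (solved here by ISTA, a standard soft-thresholding scheme) onto the empirical Bellman image until the weights stabilize. The feature matrices, the toy chain MDP in the usage example, and the choices of `lam` and `gamma` are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lasso_regression(Phi, y, lam, n_iter=500):
    """Solve min_theta (1/2n)||Phi theta - y||^2 + lam ||theta||_1 via ISTA
    (iterative soft-thresholding). Phi: (n, d) feature matrix, y: (n,) targets."""
    n, d = Phi.shape
    # Lipschitz constant of the smooth part's gradient (spectral norm squared / n).
    L = np.linalg.norm(Phi, 2) ** 2 / n
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ theta - y) / n       # gradient of the squared loss
        z = theta - grad / L                       # gradient step
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return theta

def lasso_td(Phi, Phi_next, r, gamma, lam, n_outer=100):
    """Fixed-point iteration for a Lasso-TD-style solution: repeatedly
    Lasso-project the empirical Bellman image r + gamma * Phi' theta."""
    theta = np.zeros(Phi.shape[1])
    for _ in range(n_outer):
        target = r + gamma * Phi_next @ theta      # Bellman backup on the samples
        theta = lasso_regression(Phi, target, lam)
    return theta

# Illustrative usage: a deterministic 2-state cycle with one-hot features,
# reward 1 in state 0 and 0 in state 1. True values are V0 = 1/(1 - gamma^2)
# and V1 = gamma * V0; with a small lam the iterate lands close to them.
Phi = np.array([[1.0, 0.0], [0.0, 1.0]])
Phi_next = np.array([[0.0, 1.0], [1.0, 0.0]])
r = np.array([1.0, 0.0])
theta = lasso_td(Phi, Phi_next, r, gamma=0.9, lam=0.001)
```

With one-hot features the inner Lasso decouples coordinate-wise, so each outer step is a soft-thresholded Bellman backup; the small L1 penalty introduces a small downward bias relative to the exact LSTD solution.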