PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Regularized Fitted Q-iteration for Planning in Continuous-Space Markovian Decision Problems
Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári and Shie Mannor
In: Proceedings of the 2009 American Control Conference (ACC-2009), IEEE, 2009. ISBN 142444523X

Abstract

Reinforcement learning with linear and non-linear function approximation has been studied extensively in the last decade. However, as opposed to other fields of machine learning such as supervised learning, the effect of finite sample sizes has not been thoroughly addressed within the reinforcement learning framework. In this paper we propose to use L2 regularization to control the complexity of the value function in reinforcement learning and planning problems. We consider the Regularized Fitted Q-Iteration algorithm and provide generalization bounds that account for small sample sizes. Finally, a realistic visual-servoing problem is used to illustrate the benefits of the regularization procedure.
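
The following is a minimal sketch of Fitted Q-Iteration with linear function approximation and L2 (ridge) regularization, written only from the abstract's description; it is not the authors' implementation or experimental setup. The feature map, discount factor, regularization weight, and transition format are illustrative assumptions.

```python
# Sketch of Regularized Fitted Q-Iteration with linear features and
# L2 (ridge) regularization.  All parameter choices here are assumptions
# made for illustration, not values from the paper.
import numpy as np


def regularized_fitted_q_iteration(transitions, phi, n_actions,
                                   gamma=0.95, lam=1e-2, n_iters=50):
    """transitions: list of (state, action, reward, next_state) tuples.
    phi: feature map taking a state to a 1-D feature vector.
    Returns one weight vector per action; Q(s, a) = phi(s).dot(w[a])."""
    d = phi(transitions[0][0]).shape[0]
    w = np.zeros((n_actions, d))

    for _ in range(n_iters):
        # Build regression targets from the current Q estimate (Bellman backup).
        data = {a: ([], []) for a in range(n_actions)}
        for s, a, r, s_next in transitions:
            q_next = max(phi(s_next).dot(w[b]) for b in range(n_actions))
            data[a][0].append(phi(s))
            data[a][1].append(r + gamma * q_next)

        # Ridge regression per action: solve (Phi^T Phi + lam * I) w = Phi^T y.
        new_w = np.copy(w)
        for a in range(n_actions):
            X, y = data[a]
            if not X:
                continue
            X, y = np.asarray(X), np.asarray(y)
            A = X.T.dot(X) + lam * np.eye(d)
            new_w[a] = np.linalg.solve(A, X.T.dot(y))
        w = new_w
    return w
```

The L2 penalty `lam` plays the role described in the abstract: it controls the complexity of the fitted value function at each iteration, which is the quantity the paper's finite-sample bounds are stated in terms of.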

EPrint Type: Book Section
Project Keyword: UNSPECIFIED
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 6126
Deposited By: Mohammad Ghavamzadeh
Deposited On: 08 March 2010