
Approximate dynamic programming
Rémi Munos
In: Markov Decision Processes in Artificial Intelligence, ISTE Ltd and John Wiley & Sons, Inc., 2010, pp. 67-98.

Abstract

In any complex or large-scale sequential decision-making problem, there is a crucial need for function approximation to represent the relevant functions, such as the value function or the policy. The Dynamic Programming (DP) and Reinforcement Learning (RL) methods introduced in previous chapters make the implicit assumption that the value function can be represented perfectly (i.e. kept in memory), for example by a look-up table (with a finite number of entries) assigning a value to every possible state of the system (assumed to be finite). These methods are called exact because they compute the optimal solution of the considered problem exactly (or at least enable the computations to converge to this optimal solution). However, such methods often apply only to toy problems, since in most interesting applications the number of possible states is so large (and possibly infinite, if we consider continuous spaces) that a perfect representation of the function at all states is impossible. It then becomes necessary to approximate the function using a moderate number of coefficients (which can be stored in a computer), and thus to extend the range of DP and RL to methods using such approximate representations. These approximate methods combine DP and RL methods with function approximation tools.
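To make the contrast concrete, below is a minimal sketch (not taken from the chapter) of the two regimes the abstract describes: exact value iteration with a look-up table holding one value per state, versus approximate value iteration where the value function is represented by a small number of coefficients of a linear combination of features. The MDP (a random transition kernel), the Gaussian feature map, and all parameter values are illustrative assumptions chosen for the example.

```python
# Illustrative sketch: tabular (exact) vs. linear-approximation (approximate)
# value iteration on a small, randomly generated MDP.
import numpy as np

n_states, n_actions, gamma = 50, 2, 0.95
rng = np.random.default_rng(0)

# Stand-in MDP: transition kernel P[a, s, s'] and reward R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(size=(n_states, n_actions))

def bellman_optimality(V):
    """Apply the Bellman optimality operator T to a value vector V."""
    # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    return Q.max(axis=1)

# --- Exact DP: one stored value per state (look-up table). ---
V = np.zeros(n_states)
for _ in range(500):
    V = bellman_optimality(V)

# --- Approximate DP: represent V with k << n_states coefficients. ---
# Features: Gaussian bumps over the state index (an arbitrary choice).
k = 8
centers = np.linspace(0, n_states - 1, k)
Phi = np.exp(-0.5 * ((np.arange(n_states)[:, None] - centers) / 4.0) ** 2)

theta = np.zeros(k)
for _ in range(500):
    target = bellman_optimality(Phi @ theta)              # one application of T
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # project onto span(Phi)

print("max |V_exact - V_approx| =", np.abs(V - Phi @ theta).max())
```

In the approximate loop only the k coefficients theta are stored, rather than one value per state; each iteration applies the Bellman operator and then projects the result back onto the space spanned by the features, which is the basic pattern behind the approximate value iteration methods developed in the chapter.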

EPrint Type: Book Section
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 7410
Deposited By: Rémi Munos
Deposited On: 17 March 2011