NONLINEAR PROGRAMMING IN APPROXIMATE DYNAMIC
PROGRAMMING: BANG-BANG SOLUTIONS,
STOCK-MANAGEMENT AND UNSMOOTH PENALTIES
Olivier Teytaud and Sylvain Gelly
In: ICINCO 2007, Angers (2007).
Many stochastic dynamic programming tasks in continuous action spaces are tackled through discretization.
We here avoid discretization; approximate dynamic programming (ADP) then involves (i) many learning
tasks, performed here by Support Vector Machines, for regressing the Bellman function, and (ii) many non-linear
optimization tasks for action selection, for which we compare many algorithms. We include discretizations
of the domain as particular non-linear programming tools in our experiments, so that we also compare
optimization approaches against discretization methods. We conclude that robustness is strongly required in the
non-linear optimizations in ADP, and experimental results show that (i) discretization is sometimes inefficient,
but some specific discretizations are very efficient for "bang-bang" problems; (ii) simple evolutionary tools
outperform quasi-random search in a stable manner; (iii) gradient-based techniques are much less stable; and (iv) for most
high-dimensional "less unsmooth" problems, Covariance Matrix Adaptation ranks first.
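The action-selection step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the 1-D stock-management dynamics, costs, and the stand-in for the regressed Bellman function are all assumptions, and a simple (1+1)-style evolutionary search plays the role of the non-linear optimizer.

```python
import random

# Hypothetical one-step stock-management model (illustrative only).
def transition(stock, order):
    demand = 0.5                              # deterministic demand for the sketch
    return max(0.0, stock + order - demand)

def reward(stock, order):
    # Ordering cost plus a penalty for deviating from a target stock of 1.0.
    return -0.1 * order - abs(stock - 1.0)

def v_hat(stock):
    # Stand-in for the regressed Bellman function (an SVM in the paper).
    return -(stock - 1.0) ** 2

def select_action(stock, iters=200, sigma=0.3, seed=0):
    """(1+1)-style evolutionary search over a continuous order in [0, 2]."""
    rng = random.Random(seed)

    def q(order):
        # One-step lookahead: immediate reward plus estimated future value.
        return reward(stock, order) + v_hat(transition(stock, order))

    best = 1.0
    best_q = q(best)
    for _ in range(iters):
        # Gaussian mutation of the current best, clipped to the action bounds.
        cand = min(2.0, max(0.0, best + rng.gauss(0.0, sigma)))
        cand_q = q(cand)
        if cand_q > best_q:                   # keep the candidate only if it improves
            best, best_q = cand, cand_q
    return best, best_q
```

In a full ADP loop this optimization is run at every visited state and every time step, which is why the abstract stresses the robustness of the inner optimizer.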