PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Marc Toussaint and Amos Storkey
Proceedings of the 23rd International Conference on Machine Learning (ICML 2006).

A more recent version of this eprint is available.

Abstract

Inference in Markov Decision Processes has recently received interest as a means to infer the goals underlying an observed action, for policy recognition, and also as a tool to compute policies. A particularly interesting aspect of the approach is that any existing inference technique for Dynamic Bayesian Networks (DBNs) becomes available for answering behavioral questions, including those on continuous, factorial, or hierarchical state representations. Here we present an Expectation Maximization algorithm for computing optimal policies. Unlike previous approaches, we can show that this actually optimizes the discounted expected future return for arbitrary reward functions, without assuming an ad hoc finite total time. The algorithm is generic in that any inference technique can be utilized in the E-step. We demonstrate this for exact inference on a discrete maze and for Gaussian belief state propagation in continuous stochastic optimal control problems.
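
To make the abstract's EM scheme concrete, the following Python sketch illustrates the idea for the tabular case. It is an illustration under stated assumptions, not the authors' implementation: it assumes a hypothetical array layout with transitions P[a, s, s'] and reward-emission probabilities R[s, a], treats reward as a binary emission at the final step of a finite-time MDP, and places a geometric prior proportional to gamma^T over the total time T, so that the mixture over horizons collapses into a discounted sum of backward messages. The names em_policy, P, R, and T_max are placeholders.

    import numpy as np

    def em_policy(P, R, gamma=0.95, T_max=100, iters=30):
        # P: (A, S, S) transitions, P[a, s, s2] = Pr(s2 | s, a)  (assumed layout)
        # R: (S, A) probability that a binary reward is emitted at (s, a)
        A, S, _ = P.shape
        pi = np.full((S, A), 1.0 / A)          # start from the uniform policy
        for _ in range(iters):
            # E-step: backward messages beta_t(s, a) = probability that the
            # reward is emitted t steps after taking a in s, then following pi.
            beta = R.copy()
            q = R.copy()                       # accumulates sum_t gamma^t beta_t
            for t in range(1, T_max):
                v = (pi * beta).sum(axis=1)    # marginalize the next action
                beta = np.einsum('ask,k->sa', P, v)  # one-step backward pass
                q += gamma ** t * beta
            # M-step: the maximizing policy is deterministic and greedy in q.
            pi = np.zeros_like(pi)
            pi[np.arange(S), q.argmax(axis=1)] = 1.0
        return pi, q

With the geometric time prior, the discounted sum of backward messages coincides with an action-value function, so in this simplified tabular setting the greedy M-step makes the EM iteration behave like policy iteration; T_max only truncates the infinite sum numerically and does not impose an ad hoc finite total time.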

EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics; Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 2294
Deposited By: Amos Storkey
Deposited On: 09 November 2006
