PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

A Minimum Relative Entropy Principle for Learning and Acting
Pedro Alejandro Ortega and Daniel Alexander Braun
Journal of Artificial Intelligence Research Volume 38, pp. 475-511, 2010. ISSN 1076-9757

Abstract

This paper proposes a method to construct an adaptive agent that is universal with respect to a given class of experts, where each expert is designed specifically for a particular environment. This adaptive control problem is formalized as the problem of minimizing the relative entropy of the adaptive agent from the expert that is most suitable for the unknown environment. If the agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than ordinary probability conditioning. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.
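As a rough illustration of the mixture-of-experts idea described above, the sketch below implements a minimal Bayesian-control-rule-style loop on a hypothetical two-armed bandit. The environments, payoff probabilities, and expert policies are illustrative assumptions, not taken from the paper: each expert is the controller tuned to one environment, an expert is sampled from the posterior at every step, and — because past actions are treated as interventions rather than observations — the posterior is updated only through the likelihood of the observed reward given the action.

```python
import random

# Hypothetical setup (not from the paper): in environment m, arm m pays
# reward 1 with probability 0.9 and the other arm with probability 0.1.
# Expert m is the controller designed for environment m: it pulls arm m.

def likelihood(m, arm, reward):
    """Probability of `reward` under environment m, given the pulled arm."""
    p = 0.9 if arm == m else 0.1
    return p if reward == 1 else 1.0 - p

def bayesian_control_rule(true_env, steps=2000, seed=0):
    rng = random.Random(seed)
    posterior = [0.5, 0.5]   # uniform prior over the two environments
    counts = [0, 0]          # how often each arm was pulled
    for _ in range(steps):
        # Sample an expert from the posterior and act with its policy.
        m = 0 if rng.random() < posterior[0] else 1
        arm = m
        counts[arm] += 1
        # The true environment generates the reward.
        p_true = 0.9 if arm == true_env else 0.1
        reward = 1 if rng.random() < p_true else 0
        # The action is an intervention: condition only on the reward
        # given the action, never on the action itself.
        w = [posterior[k] * likelihood(k, arm, reward) for k in (0, 1)]
        z = sum(w)
        posterior = [x / z for x in w]
    return posterior, counts

post, counts = bayesian_control_rule(true_env=1)
```

In this toy run the posterior concentrates on the true environment, so the sampled expert is eventually the most suitable one almost always, mirroring the convergence result stated in the abstract.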

EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics
          Learning/Statistics & Optimisation
          Theory & Algorithms
ID Code: 6984
Deposited By: Pedro Ortega
Deposited On: 17 September 2010