PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Exploiting Similarity Information in Reinforcement Learning: Similarity Models for Multi-Armed Bandits and MDPs
Ronald Ortner
In: 2nd International Conference on Agents and Artificial Intelligence (ICAART 2010), 22-24 January 2010, Valencia, Spain.

Abstract

This paper considers reinforcement learning problems with additional similarity information. We start with the simple setting of multi-armed bandits in which the learner knows for each arm its color, under the assumption that arms of the same color have close mean rewards. An algorithm is presented, showing that this color information can be used to improve the dependency of online regret bounds on the number of arms. Further, we discuss to what extent this approach can be extended to the more general case of Markov decision processes. For the simplest case, where the same color for actions means similar rewards and identical transition probabilities, an algorithm and a corresponding online regret bound are given. For the general case, where the same color implies only close but not necessarily identical transition probabilities, we give upper and lower bounds on the error incurred by aggregating actions according to the color information. These bounds also imply that the general case is far more difficult to handle.
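The abstract does not spell out the bandit algorithm itself, so the following is only a minimal illustrative sketch in Python: a hypothetical two-level UCB-style strategy (here called ColorUCB) that first scores each color using statistics aggregated over its arms, and then picks an arm within the chosen color. The class name, the confidence-bound form, and the usage example are assumptions made for illustration, not the method analyzed in the paper.

```python
import math
import random


class ColorUCB:
    """Illustrative sketch (not the paper's algorithm): a two-level UCB-style
    strategy for bandits with color information, assuming arms of the same
    color have close mean rewards."""

    def __init__(self, colors):
        # colors: list assigning a color label to each arm, e.g. ["red", "blue", ...]
        self.colors = colors
        self.arms_of = {}
        for arm, c in enumerate(colors):
            self.arms_of.setdefault(c, []).append(arm)
        self.counts = [0] * len(colors)   # number of pulls per arm
        self.sums = [0.0] * len(colors)   # cumulative reward per arm
        self.t = 0                        # total number of pulls so far

    def _ucb(self, total, count):
        # Standard UCB1-style index; unexplored statistics get priority.
        if count == 0:
            return float("inf")
        return total / count + math.sqrt(2.0 * math.log(self.t + 1) / count)

    def select_arm(self):
        self.t += 1

        # Level 1: score each color with statistics aggregated over its arms.
        def color_index(c):
            arms = self.arms_of[c]
            count = sum(self.counts[a] for a in arms)
            total = sum(self.sums[a] for a in arms)
            return self._ucb(total, count)

        best_color = max(self.arms_of, key=color_index)
        # Level 2: within the chosen color, pick the arm with the highest UCB index.
        return max(self.arms_of[best_color],
                   key=lambda a: self._ucb(self.sums[a], self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward


# Hypothetical usage: 6 arms in 2 colors, same-colored arms have close means.
if __name__ == "__main__":
    means = [0.30, 0.32, 0.31, 0.70, 0.72, 0.71]
    colors = ["red", "red", "red", "blue", "blue", "blue"]
    bandit = ColorUCB(colors)
    for _ in range(2000):
        arm = bandit.select_arm()
        bandit.update(arm, 1.0 if random.random() < means[arm] else 0.0)
    print("pull counts per arm:", bandit.counts)
```

The intuition behind such a sketch matches the abstract's claim: because same-colored arms are assumed to have close mean rewards, statistics aggregated per color concentrate faster than per-arm statistics, so the regret's dependence can shift from the number of arms toward the number of colors.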

EPrint Type: Conference or Workshop Item (Talk)
Subjects: Computational, Information-Theoretic Learning with Statistics; Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 6043
Deposited By: Ronald Ortner
Deposited On: 08 March 2010