PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Exploiting Similarity Information in Reinforcement Learning. Similarity Models for Multi-Armed Bandits and MDPs
Ronald Ortner
In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence (2010), INSTICC, pp. 203-210. ISBN 978-989-674-021-4

Abstract

This paper considers reinforcement learning problems with additional similarity information. We start with the simple setting of multi-armed bandits, in which the learner knows a color for each arm and arms of the same color are assumed to have close mean rewards. We present an algorithm showing that this color information can be used to improve the dependency of online regret bounds on the number of arms. Further, we discuss to what extent this approach extends to the more general case of Markov decision processes. For the simplest case, where actions of the same color have similar rewards and identical transition probabilities, we give an algorithm and a corresponding online regret bound. For the general case, where same-colored actions have only close but not necessarily identical transition probabilities, we give upper and lower bounds on the error incurred by aggregating actions according to the color information. These bounds also show that the general case is considerably harder to handle.
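To illustrate the bandit setting described above, the following is a minimal sketch of one natural way to exploit color information: treat each color class as a meta-arm, run UCB1 over the colors, and cycle through the arms within the chosen color. This is a hypothetical simplification for illustration only, not the algorithm analyzed in the paper; the function name `colored_ucb` and the representation of arms as reward-sampling callables are assumptions.

```python
import math
import random

def colored_ucb(arms, colors, horizon, rng=None):
    """Illustrative two-level strategy (not the paper's algorithm):
    arms of the same color are assumed to have close mean rewards,
    so UCB1 is run over color classes rather than individual arms,
    reducing the effective number of arms to the number of colors."""
    rng = rng or random.Random(0)
    # Group arm indices by color.
    classes = {}
    for i, c in enumerate(colors):
        classes.setdefault(c, []).append(i)
    color_list = list(classes)
    counts = {c: 0 for c in color_list}   # pulls per color class
    sums = {c: 0.0 for c in color_list}   # cumulative reward per color class
    total_reward = 0.0
    for t in range(1, horizon + 1):
        # Pull each color once first, then choose by the UCB1 index.
        untried = [c for c in color_list if counts[c] == 0]
        if untried:
            chosen = untried[0]
        else:
            chosen = max(
                color_list,
                key=lambda cc: sums[cc] / counts[cc]
                + math.sqrt(2 * math.log(t) / counts[cc]),
            )
        # Round-robin over the arms sharing the chosen color.
        arm = classes[chosen][counts[chosen] % len(classes[chosen])]
        reward = arms[arm](rng)  # each arm is a callable drawing a stochastic reward
        counts[chosen] += 1
        sums[chosen] += reward
        total_reward += reward
    return total_reward, counts
```

Because estimates are shared within a color class, the regret of such a scheme depends on the number of colors plus an aggregation error governed by how close same-colored means actually are, which is the trade-off the paper's bounds make precise.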

EPrint Type: Book Section
Subjects: Computational, Information-Theoretic Learning with Statistics; Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 6042
Deposited By: Ronald Ortner
Deposited On: 08 March 2010