Exploration in Relational Worlds
Tobias Lang, Marc Toussaint and Kristian Kersting
In: ECML 2010.

Abstract

One of the key problems in model-based reinforcement learning is balancing exploration and exploitation. Another is learning and acting in large relational domains, in which there is a varying number of objects and relations between them. We provide a solution to exploring large relational Markov decision processes by developing relational extensions of the concepts of the Explicit Explore or Exploit (E3) algorithm. A key insight is that the inherent generalization of learnt knowledge in the relational representation also has profound implications for the exploration strategy: what in a propositional setting would be considered a novel situation and worth exploration may in the relational setting be an instance of a well-known context in which exploitation is promising. Our experimental evaluation shows the effectiveness and benefit of relational exploration over several propositional benchmark approaches on noisy 3D simulated robot manipulation problems.
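
To make the key insight concrete, the sketch below is a minimal, hypothetical illustration of an E3-style known/unknown decision in which visit counts are kept over relational abstractions of states rather than over ground states. The abstraction (renaming object constants to canonical variables), the count threshold, and all names are illustrative assumptions; they are not the paper's actual learner, which estimates noisy relational rules from experience.

    from collections import defaultdict

    # Hypothetical simplification: a relational context is "known" once it
    # has been visited this many times (threshold chosen arbitrarily here).
    KNOWN_THRESHOLD = 5

    def relational_context(ground_state):
        """Abstract a ground state (a set of ground atoms such as
        ('on', 'cube1', 'cube2')) by renaming object constants to canonical
        variables, so visit counts generalize over object identities."""
        renaming = {}
        def var(obj):
            if obj not in renaming:
                renaming[obj] = f"X{len(renaming)}"
            return renaming[obj]
        return frozenset(
            (pred,) + tuple(var(arg) for arg in args)
            for pred, *args in sorted(ground_state)
        )

    class RelationalExplorer:
        def __init__(self):
            self.visits = defaultdict(int)  # counts per relational context

        def observe(self, ground_state):
            self.visits[relational_context(ground_state)] += 1

        def is_known(self, ground_state):
            # A propositionally novel state may map to a well-visited
            # relational context; if so, exploitation is promising and no
            # further exploration of this situation is needed.
            return self.visits[relational_context(ground_state)] >= KNOWN_THRESHOLD

    # Usage: two states that differ only in object names share one context.
    explorer = RelationalExplorer()
    s1 = {("on", "cube1", "cube2"), ("clear", "cube1")}
    s2 = {("on", "cube8", "cube9"), ("clear", "cube8")}
    for _ in range(5):
        explorer.observe(s1)
    print(explorer.is_known(s2))  # True: s2 instantiates the already-known context

Under these assumptions, state s2 has never been seen propositionally, yet it is treated as known because it instantiates the same relational context as s1; a purely propositional count-based explorer would instead flag it as novel and explore it.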

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Learning/Statistics & Optimisation
ID Code: 8018
Deposited By: Marc Toussaint
Deposited On: 16 March 2011