A heuristic strategy for learning in partially observable and non-Markovian domains
Matteo Leonetti and Subramanian Ramamoorthy
In Proc. International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS 2010).
Robotic applications are characterized by highly dynamic domains in which the agent has neither full control of the environment nor full observability. In such cases a Markovian model of the domain, able to capture all the aspects the agent might need to predict, is generally unavailable or excessively complex. Moreover, robots impose severe constraints on the amount of experience they can afford, shifting the focus of learning from reaching optimality in the limit to making the best use of the little information available. We consider the problem of finding the best deterministic policy in a Non-Markovian Decision Process, paying special attention to the sample complexity and to the transitional behavior before such a policy is reached. We would like robotic agents to learn in real time while deployed in the environment, and their behavior to be acceptable even while learning.