PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Reinforcement learning with the use of costly features
Robby Goetschalckx, Scott Sanner and Kurt Driessens
In: Proceedings of the 18th European Conference on Artificial Intelligence (ECAI-08), 21-25 July 2008, Patras, Greece.


In many practical reinforcement learning problems, the state space is too large to permit an exact representation of the value function, much less the time required to compute it. In such cases, a common solution approach is to compute an approximation of the value function in terms of state features. However, relatively little attention has been paid to the cost of computing these state features. For example, search-based features may be useful for value prediction, but their computational cost must be traded off with their impact on value accuracy. To this end, we introduce a new cost-sensitive sparse linear regression paradigm for value function approximation in reinforcement learning where the learner is able to select only those costly features that are sufficiently informative to justify their computation. We illustrate the learning behavior of our approach using a simple experimental domain that allows us to explore the effects of a range of costs on the cost-performance trade-off.
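The core idea of the abstract — that a feature's L1 penalty can be scaled by its computation cost, so that expensive features must be proportionally more informative to earn nonzero weight — can be sketched as follows. This is an illustrative proximal-gradient (ISTA-style) sketch, not the paper's exact algorithm; the function name, parameters, and the toy data are hypothetical.

```python
import numpy as np

def cost_sensitive_lasso(X, y, costs, lam=0.5, lr=0.01, iters=2000):
    """Sparse linear regression where each feature's L1 penalty is
    scaled by its (hypothetical) computation cost. Illustrative
    sketch only, not the algorithm from the paper."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n       # gradient of mean squared error
        w = w - lr * grad
        thresh = lr * lam * costs          # costlier features shrink harder
        w = np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)
    return w

# Toy check: two features carry essentially the same signal, but the
# second is 10x costlier to compute, so the fit keeps only the cheap one.
rng = np.random.default_rng(0)
s = rng.standard_normal(200)
X = np.column_stack([s + 0.01 * rng.standard_normal(200),
                     s + 0.01 * rng.standard_normal(200)])
w = cost_sensitive_lasso(X, s, costs=np.array([1.0, 10.0]))
```

In the toy example the two features are nearly interchangeable predictors, so the cost-weighted penalty alone decides which one survives — the kind of cost-performance trade-off the abstract refers to.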

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 5276
Deposited By: Scott Sanner
Deposited On: 24 March 2009