Model-free inverse reinforcement learning
Abdeslam Boularias, Jens Kober and Jan Peters
In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 11-13 April 2011, Fort Lauderdale, Florida, USA.

Abstract

We consider the problem of imitation learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert acts optimally in a Markov Decision Process (MDP). Past work on IRL requires an accurate model of the underlying MDP to be known. This requirement can hardly be satisfied in practice, however, since learning a model of a dynamical system with a large or continuous state space is a challenging task. In this paper, we propose a model-free IRL algorithm in which the relative entropy between the empirical distribution of the trajectories under a uniform policy and their distribution under the learned policy is minimized by stochastic gradient descent. We compare this new approach to well-known IRL algorithms that use approximate MDP models. Empirical results on simulated car racing, gridworld, and ball-in-a-cup problems show that our approach is able to learn good policies from a small number of demonstrations.
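To make the gradient step concrete, the following is a minimal Python sketch of the update described in the abstract, under two assumptions not spelled out there: the reward is linear in trajectory features, reward(tau) = theta^T f(tau), and the baseline trajectories are sampled under the uniform policy, so the importance weights reduce to a softmax over trajectory rewards. The function names (`relent_irl_gradient`, `fit_reward`) and the shape conventions are hypothetical; this illustrates the relative-entropy gradient, not the authors' implementation.

```python
import numpy as np

def relent_irl_gradient(theta, expert_feats, sampled_feats):
    """One stochastic (sub)gradient for minimizing the relative entropy.

    theta         : (d,) current reward weights, reward(tau) = theta @ f(tau)
    expert_feats  : (d,) mean feature counts of the expert demonstrations
    sampled_feats : (n, d) feature counts of trajectories sampled under the
                    uniform baseline policy (sampler not shown)

    With a uniform baseline, the importance weights reduce to a softmax
    over the trajectory rewards.
    """
    rewards = sampled_feats @ theta                 # (n,) trajectory rewards
    w = np.exp(rewards - rewards.max())             # max-shift avoids overflow
    w /= w.sum()                                    # normalized importance weights
    model_feats = w @ sampled_feats                 # reweighted feature expectation
    return expert_feats - model_feats               # ascent direction on the dual

def fit_reward(expert_feats, sampled_feats, lr=0.1, iters=500):
    """Stochastic gradient ascent on the reward weights theta."""
    rng = np.random.default_rng(0)
    theta = np.zeros(expert_feats.shape[0])
    n = sampled_feats.shape[0]
    for _ in range(iters):
        # Random mini-batch of sampled trajectories gives the stochastic step.
        batch = sampled_feats[rng.choice(n, size=min(64, n), replace=False)]
        theta += lr * relent_irl_gradient(theta, expert_feats, batch)
    return theta
```

Each step pushes theta so that the softmax-reweighted feature expectation of the sampled trajectories moves toward the expert's feature counts; when the two match, the gradient vanishes. A full implementation would also incorporate the feature-matching tolerances and regularization treated in the paper, which this sketch omits.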

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Learning/Statistics & Optimisation
ID Code: 8041
Deposited By: Oliver Kroemer
Deposited On: 17 March 2011