Probabilistic Inference for Fast Learning in Control
Carl Edward Rasmussen and Marc Deisenroth
Recent Advances in Reinforcement Learning
Lecture Notes in Computer Science
We provide a novel framework for very fast model-based reinforcement
learning in continuous state and action spaces.
The framework requires probabilistic models that explicitly
characterize their levels of confidence. Within this framework, we use
flexible, non-parametric models to describe the world based on
previously collected experience.
We demonstrate learning on the cart-pole problem in a setting where we provide
only very limited prior knowledge about the task. Learning progresses rapidly,
and a good policy is found after only a handful of iterations.
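The non-parametric models with explicit confidence levels described above can be illustrated with Gaussian-process regression. The sketch below is an assumption on our part, not the paper's implementation: it fits a GP with a squared-exponential kernel to a toy one-dimensional "dynamics" function, and shows that the predictive variance grows away from the observed data, which is exactly the kind of calibrated uncertainty a model-based learner can exploit. All hyperparameter values are illustrative.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise_var=1e-2):
    """GP posterior mean and variance at test inputs Xs given data (X, y)."""
    K = rbf(X, X) + noise_var * np.eye(len(X))   # training covariance
    Ks = rbf(X, Xs)                              # train/test cross-covariance
    Kss = rbf(Xs, Xs)                            # test covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, var

# Toy "collected experience": a smooth function observed at a few states.
X = np.linspace(-3, 3, 8)[:, None]
y = np.sin(X).ravel()
Xs = np.array([[0.0], [10.0]])   # one query near the data, one far from it
mean, var = gp_posterior(X, y, Xs)
# Near the data the variance is small; far away it reverts toward the prior
# variance -- the model explicitly signals where it is ignorant.
```

In a model-based RL loop of the kind the abstract describes, such predictive variances keep the planner from trusting the model in regions of the state space it has never visited.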