PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Regret Bounds for Gaussian Process Bandit Problems
Steffen Grünewälder, Jean-Yves Audibert, Manfred Opper and John Shawe-Taylor
In: AISTATS 2010, 13-15 May, 2010, Sardinia, Italy.


Bandit algorithms are concerned with trading off exploration against exploitation: a number of options (arms) are available, but their quality can only be learned by experimenting with them. We consider the scenario in which the reward distribution over arms is modelled by a Gaussian process and there is no noise in the observed rewards. Our main result bounds the regret experienced by algorithms relative to the a posteriori optimal strategy of playing the best arm throughout, based on benign assumptions about the covariance function defining the Gaussian process. We further complement these upper bounds with corresponding lower bounds for particular covariance functions, demonstrating that in general there is at most a logarithmic looseness in our upper bounds.
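The setting described above can be illustrated with a minimal sketch: a Gaussian process prior over a finite set of arms, noiseless observations, and an upper-confidence-style arm-selection rule. This is an illustrative toy implementation only, not the algorithm analysed in the paper; the squared-exponential kernel, the confidence parameter `beta`, and all function names are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential covariance between arm locations X and Y.
    d = X[:, None] - Y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(K_all, played, rewards, jitter=1e-10):
    # Noiseless GP regression: condition the prior on exact observations.
    # A tiny jitter keeps the Gram matrix numerically invertible.
    K_oo = K_all[np.ix_(played, played)] + jitter * np.eye(len(played))
    K_ao = K_all[:, played]
    mean = K_ao @ np.linalg.solve(K_oo, rewards)
    solved = np.linalg.solve(K_oo, K_ao.T).T
    var = np.diag(K_all) - np.einsum('ij,ij->i', K_ao, solved)
    return mean, np.maximum(var, 0.0)

def gp_bandit(arms, true_f, n_rounds, beta=2.0, seed=0):
    # Play the arm maximizing posterior mean + beta * posterior std-dev,
    # and record the cumulative regret against the best fixed arm.
    rng = np.random.default_rng(seed)
    K = rbf_kernel(arms, arms)
    played = [int(rng.integers(len(arms)))]
    rewards = [true_f[played[0]]]
    best = true_f.max()
    regret = []
    for _ in range(1, n_rounds):
        mean, var = gp_posterior(K, played, np.array(rewards))
        a = int(np.argmax(mean + beta * np.sqrt(var)))
        played.append(a)
        rewards.append(true_f[a])
        regret.append(best - true_f[a])
    return np.cumsum(regret)
```

Because observations are noiseless, the posterior variance of a played arm collapses to zero, so the rule naturally shifts from exploration to exploitation as rounds accumulate.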

EPrint Type: Conference or Workshop Item (Poster)
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Learning/Statistics & Optimisation
ID Code: 5869
Deposited By: Steffen Grünewälder
Deposited On: 08 March 2010