Gaussian Process Training with Input Noise
Andrew McHutchon and Carl Edward Rasmussen
In: NIPS 2011, 12-17th December 2011, Granada, Spain.
In standard Gaussian Process regression, input locations are assumed to be noise-
free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local
linear expansion about each input point. This allows the input noise to be recast
as output noise proportional to the squared gradient of the GP posterior mean.
The input noise variances are inferred from the data as extra hyperparameters.
They are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme, which
alternates between optimising the hyperparameters and calculating the posterior
gradient. Analytic predictive moments can then be found for Gaussian distributed
test points. We compare our model to others over a range of regression
problems and show that it outperforms current methods.
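As a concrete illustration, the core idea of the abstract — recasting input noise as extra output noise proportional to the squared gradient of the posterior mean, updated in an iterative scheme — can be sketched as below. This is a minimal sketch under stated assumptions, not the paper's implementation: it assumes a squared-exponential kernel, fixes the kernel hyperparameters and noise variances instead of learning them by marginal-likelihood maximisation, and all function names are illustrative.

```python
import numpy as np

def rbf(X1, X2, ell, sf2):
    """Squared-exponential kernel (assumed for illustration)."""
    d = X1[:, None, :] - X2[None, :, :]
    return sf2 * np.exp(-0.5 * np.sum((d / ell) ** 2, axis=2))

def posterior_mean_and_grad(X, y, Xs, ell, sf2, noise_diag):
    """GP posterior mean at Xs and its gradient w.r.t. the test inputs."""
    K = rbf(X, X, ell, sf2) + np.diag(noise_diag)
    alpha = np.linalg.solve(K, y)
    Ks = rbf(Xs, X, ell, sf2)
    mean = Ks @ alpha
    # For the RBF kernel: d/dxs k(xs, x) = -k(xs, x) * (xs - x) / ell^2
    diff = Xs[:, None, :] - X[None, :, :]           # (Ns, N, D)
    dKs = -Ks[:, :, None] * diff / ell ** 2         # (Ns, N, D)
    grad = np.einsum('nmd,m->nd', dKs, alpha)       # (Ns, D)
    return mean, grad

def nigp_noise_iteration(X, y, ell, sf2, sy2, sx2, iters=5):
    """Iterative scheme from the abstract (simplified, hyperparameters fixed):
    alternate between computing the posterior-mean gradient and recasting
    the input noise as per-point output noise  sy2 + grad^T * Sigma_x * grad,
    where Sigma_x = sx2 * I is the (here fixed) input-noise variance."""
    N, _ = X.shape
    noise_diag = np.full(N, sy2)                    # start from output noise only
    for _ in range(iters):
        _, g = posterior_mean_and_grad(X, y, X, ell, sf2, noise_diag)
        noise_diag = sy2 + np.sum(g ** 2 * sx2, axis=1)
    return noise_diag
```

In the full method the input-noise variances `sx2` are extra hyperparameters learned alongside the kernel hyperparameters by maximising the marginal likelihood; here they are held fixed so the sketch isolates the gradient-based noise correction. Points where the posterior mean is steep receive a larger effective noise variance, exactly the heteroscedastic effect the linearisation predicts.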