Bayesian Gaussian Process Latent Variable Model
Michalis Titsias and Neil Lawrence
In: Thirteenth International Conference on Artificial Intelligence and Statistics, May 13-15, 2010, Sardinia, Italy.
We introduce a variational inference framework for training
the Gaussian process latent variable model and thus performing
Bayesian nonlinear dimensionality reduction. This method allows us to variationally integrate out the input variables of the Gaussian process and
compute a lower bound on the exact marginal likelihood of the
nonlinear latent variable model. The maximization of the
variational lower bound provides a Bayesian training procedure that is robust to overfitting and can automatically select the dimensionality of the
nonlinear latent space. We demonstrate our method on real-world
datasets. The focus in this paper is on dimensionality reduction
problems, but the methodology is more general. For example, our
algorithm is immediately applicable to training Gaussian process
models in the presence of missing or uncertain inputs.
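The core idea of variationally integrating out the inputs can be illustrated on a toy model. The sketch below is not the paper's GP bound (which requires kernel expectations under the variational distribution); it is the analogous construction for a one-dimensional linear-Gaussian latent variable model, where both the evidence lower bound and the exact marginal likelihood are available in closed form. The function names `elbo` and `log_marginal`, and all parameter values, are illustrative assumptions.

```python
import math

def elbo(y, w, sigma2, m, s2):
    """Variational lower bound on log p(y) for the toy model
    x ~ N(0, 1), y | x ~ N(w*x, sigma2), with q(x) = N(m, s2).
    This mirrors the generic bound E_q[log p(y|x)] - KL(q(x) || p(x))."""
    # Expected log-likelihood under q(x); uses E_q[x] = m, E_q[x^2] = m^2 + s2
    exp_ll = -0.5 * math.log(2 * math.pi * sigma2) \
             - (y**2 - 2 * y * w * m + w**2 * (m**2 + s2)) / (2 * sigma2)
    # KL divergence between the univariate Gaussians q(x) and the prior N(0, 1)
    kl = 0.5 * (m**2 + s2 - 1.0 - math.log(s2))
    return exp_ll - kl

def log_marginal(y, w, sigma2):
    """Exact marginal likelihood: log p(y) = log N(y; 0, w^2 + sigma2)."""
    v = w**2 + sigma2
    return -0.5 * math.log(2 * math.pi * v) - y**2 / (2 * v)

y, w, sigma2 = 1.3, 2.0, 0.5
# Any choice of q(x) lower-bounds the exact marginal likelihood ...
assert elbo(y, w, sigma2, m=0.2, s2=1.5) < log_marginal(y, w, sigma2)
# ... and the bound is tight when q(x) equals the exact posterior p(x | y)
post_mean = w * y / (sigma2 + w**2)
post_var = sigma2 / (sigma2 + w**2)
assert math.isclose(elbo(y, w, sigma2, post_mean, post_var),
                    log_marginal(y, w, sigma2))
```

Maximizing the bound over the parameters of q(x) therefore recovers the marginal likelihood in this conjugate toy case; in the nonlinear GP setting the gap cannot be closed exactly, and the maximized bound serves as the training objective.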