PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Bayesian Learning via Stochastic Gradient Langevin Dynamics
Max Welling and Yee Whye Teh
In: ICML 2011, 28 Jun - 02 Jul 2011, Bellevue, Washington, USA.

Abstract

In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics that monitors a “sampling threshold” and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression, and ICA with natural gradients.
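
For concreteness, the update the abstract describes can be written as

    Delta theta_t = (eps_t / 2) * ( grad log p(theta_t)
                    + (N / n) * sum_i grad log p(x_i | theta_t) ) + eta_t,
    eta_t ~ N(0, eps_t),

i.e. a stochastic gradient step on a size-n mini-batch with the likelihood term rescaled by N/n, plus injected Gaussian noise whose variance equals the polynomially decaying stepsize eps_t. The following is a minimal runnable sketch in Python/NumPy on a toy conjugate model (inferring a Gaussian mean with known variance under a Gaussian prior); the stepsize form eps_t = a (b + t)^(-gamma) follows the paper's schedule, but the toy model, helper names, and specific constants are illustrative assumptions, not taken from the paper.

    # Minimal SGLD sketch on a toy model: posterior over the mean `theta`
    # of a 1-D Gaussian with known unit variance, Gaussian prior N(0, 10).
    # Helper names and the decay constants (a, b, gamma) are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: N observations from N(true_mean, 1).
    N, true_mean = 10_000, 2.0
    x = rng.normal(true_mean, 1.0, size=N)

    def log_prior_grad(theta, prior_var=10.0):
        # Gradient of log N(theta; 0, prior_var).
        return -theta / prior_var

    def log_lik_grad(theta, batch):
        # Gradient of log N(x_i; theta, 1), summed over the mini-batch.
        return np.sum(batch - theta)

    def stepsize(t, a=1e-4, b=100.0, gamma=0.55):
        # Polynomially decaying stepsize eps_t = a (b + t)^(-gamma),
        # satisfying sum eps_t = inf and sum eps_t^2 < inf.
        return a * (b + t) ** (-gamma)

    theta, n, samples = 0.0, 100, []
    for t in range(5_000):
        eps = stepsize(t)
        batch = x[rng.choice(N, size=n, replace=False)]
        # Stochastic gradient of the log posterior: the mini-batch
        # likelihood term is rescaled by N / n to keep it unbiased.
        grad = log_prior_grad(theta) + (N / n) * log_lik_grad(theta, batch)
        # Langevin update: half-stepsize gradient step plus injected
        # Gaussian noise with variance equal to the stepsize.
        theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
        samples.append(theta)

    # Discard burn-in and summarize the collected posterior samples.
    post = np.array(samples[2_000:])
    print(f"posterior mean ~ {post.mean():.3f}, sd ~ {post.std():.4f}")

Because this toy model is conjugate, the exact posterior is Gaussian and the printed estimates can be checked in closed form: the mean should be close to 2.0 and the standard deviation close to 1/sqrt(N) = 0.01.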

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Learning/Statistics & Optimisation
ID Code: 9395
Deposited By: Yee Whye Teh
Deposited On: 16 March 2012