
Bayesian Inference for Sparse Generalized Linear Models
Matthias Seeger, Sebastian Gerwinn and Matthias Bethge
In: 18th European Conference on Machine Learning (2007).

Abstract

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.
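To make the model class in the abstract concrete, the following sketch writes down the log posterior of a GLM whose weights carry a factorizing Laplace prior, the setting in which both the log-likelihood and the log-prior are concave, so the posterior is log-concave. The synthetic data, the logistic (Bernoulli) likelihood, the prior scale tau, and all variable names are illustrative assumptions, not taken from the paper; the sketch finds a crude MAP point estimate, whereas the paper approximates the full posterior with expectation propagation.

    import numpy as np
    from scipy.optimize import minimize

    # Synthetic design matrix and binary responses (stand-ins for real spiking data).
    rng = np.random.default_rng(0)
    n, d = 200, 20
    X = rng.normal(size=(n, d))
    w_true = np.zeros(d)
    w_true[:3] = [1.5, -2.0, 1.0]                 # sparse ground-truth weights
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

    tau = 2.0                                     # assumed Laplace prior scale: p(w_j) proportional to exp(-tau * |w_j|)

    def log_sigmoid(z):
        # Numerically stable log of the logistic sigmoid.
        return -np.logaddexp(0.0, -z)

    def neg_log_posterior(w):
        # Negative log posterior (up to an additive constant) of a logistic GLM
        # with a factorizing Laplace prior; the log-likelihood and the log-prior
        # are both concave in w, so the posterior is log-concave.
        z = X @ w
        log_lik = np.sum(y * log_sigmoid(z) + (1.0 - y) * log_sigmoid(-z))
        log_prior = -tau * np.sum(np.abs(w))
        return -(log_lik + log_prior)

    # Crude MAP estimate; |w| is non-smooth at 0, so L-BFGS-B is only approximate here.
    res = minimize(neg_log_posterior, np.zeros(d), method="L-BFGS-B")
    print("MAP weights (the Laplace prior pushes most of them toward 0):")
    print(np.round(res.x, 2))

The point estimate only illustrates how the Laplace prior encodes sparsity in a log-concave posterior; the paper's EP framework instead maintains a Gaussian approximation to that posterior, which also yields predictive uncertainties.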

EPrint Type: Conference or Workshop Item (Oral)
Additional Information: Available at http://www.kyb.tuebingen.mpg.de/bs/people/seeger/
Subjects: Computational, Information-Theoretic Learning with Statistics; Learning/Statistics & Optimisation
ID Code: 3095
Deposited By: Matthias Seeger
Deposited On: 19 December 2007