PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Approximate inference for the loss-calibrated Bayesian
Simon Lacoste-Julien, Ferenc Huszar and Zoubin Ghahramani
In: AISTATS 2011, April 11-13, 2011, Fort Lauderdale, FL, USA.


We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task -- and associated losses -- for which the posterior could be used. We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand. We present a general framework rooted in Bayesian decision theory to analyze approximate inference from the perspective of losses, opening up several research directions. As a first loss-calibrated approximate inference attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
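The role of the loss in Bayesian decision-making can be illustrated with a minimal sketch (not the paper's EM-like algorithm): under an asymmetric binary loss, the Bayes-optimal decision threshold on the posterior predictive probability shifts away from 0.5. The loss values below are hypothetical, chosen only to make the asymmetry visible.

```python
import numpy as np

# Hypothetical asymmetric loss matrix: L[decision, true_label].
# A false negative (decide 0 when the label is 1) costs 10x a false positive.
L = np.array([[0.0, 10.0],   # decide 0: cost 10 if true label is 1
              [1.0,  0.0]])  # decide 1: cost 1 if true label is 0

def bayes_decision(p1):
    """Return the decision minimising posterior expected loss when P(y=1)=p1."""
    risk = L @ np.array([1.0 - p1, p1])  # expected loss of each decision
    return int(np.argmin(risk))

# Decide 1 whenever 1 - p < 10 p, i.e. the threshold is
# p* = L[1,0] / (L[1,0] + L[0,1]) = 1/11, far below the symmetric 0.5.
threshold = L[1, 0] / (L[1, 0] + L[0, 1])
print(round(threshold, 4))                          # 0.0909
print(bayes_decision(0.05), bayes_decision(0.20))   # 0 1
```

This is why an approximate posterior tuned only to generic fit can be suboptimal: near the loss-dependent threshold, small approximation errors change the decision, which motivates calibrating the approximation to the loss.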

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Computational, Information-Theoretic Learning with Statistics;
Learning/Statistics & Optimisation;
Theory & Algorithms
ID Code: 8297
Deposited By: Simon Lacoste-Julien
Deposited On: 09 September 2011