PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Approximate Expectation Maximization
Tom Heskes, Wim Wiegerinck and Onno Zoeter
In: Neural Information Processing Systems Conference, NIPS 03, 7-11 Dec 2003, Vancouver and Whistler, British Columbia, Canada.


We discuss the integration of the expectation-maximization (EM) algorithm for maximum likelihood learning of Bayesian networks with belief propagation algorithms for approximate inference. Specifically, we propose to combine the outer-loop step of convergent belief propagation algorithms with the M-step of the EM algorithm. This yields an approximate EM algorithm that is still essentially double-loop, but with the important advantage that the inner loop is guaranteed to converge. Simulations illustrate the merits of such an approach.
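The double-loop structure described above can be illustrated with a minimal sketch: an outer loop performing closed-form M-step updates, wrapped around an inner loop that iterates to convergence before each M-step. The sketch below is not the paper's algorithm; it uses a toy 1-D two-component Gaussian mixture (a tractable model), with a damped fixed-point iteration on the responsibilities standing in for the convergent approximate-inference inner loop of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two Gaussian clusters with means -2 and 2, unit variance.
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

mu = np.array([-1.0, 1.0])   # initial component means (variance fixed at 1)
pi = np.array([0.5, 0.5])    # initial mixing weights

for outer in range(50):
    # Inner loop: damped fixed-point updates of the responsibilities,
    # run to convergence. In the paper this role is played by a
    # convergent (double-loop) belief propagation algorithm.
    r = np.full((x.size, 2), 0.5)
    for inner in range(100):
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        r_new = lik / lik.sum(axis=1, keepdims=True)
        if np.abs(r_new - r).max() < 1e-8:   # inner-loop convergence check
            r = r_new
            break
        r = 0.5 * r + 0.5 * r_new            # damping
    # Outer loop: M-step, closed-form given the converged responsibilities.
    pi = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.sort(mu))  # estimated means should approach -2 and 2
```

Because the inner loop is run to convergence before every M-step, each outer iteration works with a consistent set of (approximate) expected sufficient statistics, which is the property the paper's convergence guarantee rests on.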

EPrint Type: Conference or Workshop Item (Poster)
Project Keyword: UNSPECIFIED
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 862
Deposited By: Bert Kappen
Deposited On: 02 January 2005