PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Recent Advances in Computing the NML for Discrete Bayesian Networks
Petri Myllymäki
In: First Workshop on Information Theoretic Methods in Science and Engineering, 18-20 Aug 2008, Tampere, Finland.


Bayesian networks are parametric models for multidimensional domains exhibiting complex dependencies between the dimensions (domain variables). A central problem in learning such models is how to regularize the number of parameters; in other words, how to determine which dependencies are significant and which are not. The normalized maximum likelihood (NML) distribution or code offers an information-theoretic solution to this problem. Unfortunately, computing it for arbitrary Bayesian network models appears to be computationally infeasible, but recent results have shown that it can be computed efficiently for certain restricted types of Bayesian network models. In this review paper we summarize the main results.
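A key building block behind these efficient-computation results is the NML normalizing constant (parametric complexity) of a single multinomial variable, which Kontkanen and Myllymäki showed can be computed in linear time via a simple recurrence. The sketch below is illustrative only (the function name is ours, not from the paper); it computes C(L, n) for an L-category multinomial over n observations using the recurrence C(k, n) = C(k-1, n) + (n/(k-2)) C(k-2, n):

```python
from math import comb

def multinomial_nml_normalizer(L, n):
    """Parametric complexity C(L, n) of an L-category multinomial over n
    observations, via the linear-time recurrence of Kontkanen & Myllymaki.
    Illustrative sketch; uses plain floats, so very large n may overflow."""
    if L < 1 or n < 1:
        raise ValueError("L and n must be positive")
    # Base case C(1, n) = 1: a one-category model contains a single distribution.
    c_prev2 = 1.0
    if L == 1:
        return c_prev2
    # Base case C(2, n): exact sum over binomial splits of the n observations.
    # Python evaluates 0.0 ** 0 as 1.0, matching the 0^0 := 1 convention here.
    c_prev1 = sum(
        comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
        for h in range(n + 1)
    )
    if L == 2:
        return c_prev1
    # Linear-time recurrence: C(k, n) = C(k-1, n) + n/(k-2) * C(k-2, n).
    for k in range(3, L + 1):
        c_prev2, c_prev1 = c_prev1, c_prev1 + (n / (k - 2)) * c_prev2
    return c_prev1
```

The NML code length for a data sequence is then the negative maximized log-likelihood plus log C(L, n); a quick sanity check is that C(L, 1) = L, since a single observation can land in any of the L categories, each with maximum-likelihood probability one.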

EPrint Type: Conference or Workshop Item (Invited Talk)
Subjects: Computational, Information-Theoretic Learning with Statistics; Theory & Algorithms
ID Code: 5162
Deposited By: Petri Myllymäki
Deposited On: 24 March 2009