Bayesian network structure learning using factorized NML universal models
Teemu Roos, Tomi Silander, Petri Kontkanen and Petri Myllymäki
In: 2008 Information Theory and Applications Workshop, 27 Jan - 01 Feb 2008, San Diego, USA.
Universal codes and models can be used for data compression and for model selection by the minimum description length (MDL) principle. For many interesting model classes, such as Bayesian networks, the minimax-regret optimal normalized maximum likelihood (NML) universal model is computationally very demanding. We suggest a computationally feasible alternative to NML for Bayesian networks, the factorized NML universal model, in which the normalization is done locally, for each variable separately. This can be seen as an approximate sum-product algorithm. We show that the new universal model performs extremely well in model selection, compared to the existing state of the art, even for small sample sizes.
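The local normalization described in the abstract can be made concrete with a small sketch. For each variable, and separately for each configuration of its parents, one takes the maximized multinomial log-likelihood and subtracts the logarithm of the multinomial NML normalizer; the normalizer itself can be computed with the linear-time recurrence of Kontkanen and Myllymäki. The code below is an illustrative sketch under those assumptions, not the authors' implementation; function names (`multinomial_nml_complexity`, `fnml_local_score`) and the column-based data layout are hypothetical.

```python
import math
from collections import Counter

def multinomial_nml_complexity(K, n):
    """Normalizer C_K(n) of the NML model for a K-valued multinomial
    over n samples, via the linear-time recurrence
    C_K(n) = C_{K-1}(n) + (n / (K - 2)) * C_{K-2}(n)
    (Kontkanen & Myllymaki's algorithm; sketch, not the paper's code)."""
    if n == 0:
        return 1.0
    c1 = 1.0  # C_1(n) = 1: a single outcome has no regret
    if K == 1:
        return c1
    # C_2(n) by direct summation over the count k of the first outcome;
    # Python evaluates 0.0 ** 0 as 1.0, matching the usual 0^0 = 1 convention.
    c2 = sum(math.comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
             for k in range(n + 1))
    if K == 2:
        return c2
    prev2, prev1 = c1, c2
    for k in range(3, K + 1):
        prev2, prev1 = prev1, prev1 + (n / (k - 2)) * prev2
    return prev1

def fnml_local_score(child_col, parent_cols, K):
    """Local fNML score for one variable with K values: per parent
    configuration, maximized multinomial log-likelihood minus the
    log normalizer. child_col is a list of values; parent_cols is a
    list of equally long columns (empty list = no parents)."""
    groups = {}
    for i, v in enumerate(child_col):
        key = tuple(col[i] for col in parent_cols)
        groups.setdefault(key, []).append(v)
    score = 0.0
    for vals in groups.values():
        nj = len(vals)
        counts = Counter(vals)
        # Maximized log-likelihood: sum_k n_jk * log(n_jk / n_j)
        score += sum(c * math.log(c / nj) for c in counts.values())
        # Local normalization, done per parent configuration
        score -= math.log(multinomial_nml_complexity(K, nj))
    return score
```

Because the score decomposes over variables and parent configurations, it plugs directly into standard score-based structure search, just like BDeu or BIC local scores.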