PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

A Fast Normalized Maximum Likelihood Algorithm for Multinomial Data
Petri Kontkanen and Petri Myllymäki
In: 19th International Joint Conference on Artificial Intelligence (IJCAI-05), 30 Jul - 5 Aug 2005, Edinburgh, Scotland.


The stochastic complexity of a data set is defined as the shortest possible code length for the data obtainable using some fixed set of models. This measure is of great theoretical and practical importance as a tool for tasks such as model selection and data clustering. In the case of multinomial data, computing the modern version of stochastic complexity, defined by the Normalized Maximum Likelihood (NML) criterion, requires evaluating a sum with an exponential number of terms. Furthermore, in order to apply NML in practice, one often needs to compute a whole table of these exponential sums. In our previous work, we were able to compute this table with a recursive algorithm. The purpose of this paper is to significantly improve the time complexity of that algorithm. The techniques used here are based on the discrete Fourier transform and the convolution theorem.
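As an illustrative sketch (not the paper's algorithm itself): the NML normalizing sum for a K-category multinomial, C(K, n), satisfies a composition identity, C(K1+K2, n) = Σ_h binom(n,h) (h/n)^h ((n-h)/n)^(n-h) C(K1, h) C(K2, n-h), with C(1, n) = 1 and the convention 0^0 = 1. The snippet below builds the whole table by evaluating this convolution-type sum directly, in O(n²) per composition step; it is exactly this kind of sum that the paper accelerates with the discrete Fourier transform. The function names are illustrative, not from the paper.

```python
from math import comb

def compose(Ca, Cb, N):
    """Combine two regret tables into C_{K1+K2}(n) for n = 0..N via the
    convolution-type identity (evaluated naively here; the paper's
    contribution is speeding up such sums with the DFT)."""
    out = [1.0]                              # C(K, 0) = 1 by convention
    for n in range(1, N + 1):
        s = 0.0
        for h in range(n + 1):
            # (0/n) ** 0 evaluates to 1.0 in Python, matching 0^0 := 1
            w = comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
            s += w * Ca[h] * Cb[n - h]
        out.append(s)
    return out

def multinomial_regret(K, N):
    """Table [C(K, 0), ..., C(K, N)] of NML normalizing sums for a
    K-category multinomial, built from C(1, n) = 1 by doubling K."""
    base = [1.0] * (N + 1)                   # C(1, n) = 1 for all n
    result = None
    while K:
        if K & 1:
            result = base if result is None else compose(result, base, N)
        K >>= 1
        if K:
            base = compose(base, base, N)
    return result
```

The NML distribution is then P_NML(x^n) = P(x^n | θ̂(x^n)) / C(K, n). For example, `multinomial_regret(2, 2)[2]` returns 2.5, the sum 1 + 0.5 + 1 over the three ways to split two observations between two categories.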

EPrint Type: Conference or Workshop Item (Poster)
Subjects: Computational, Information-Theoretic Learning with Statistics; Theory & Algorithms
ID Code: 1079
Deposited By: Petri Kontkanen
Deposited On: 15 September 2005