PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

On sequentially normalized maximum likelihood models
Teemu Roos and Jorma Rissanen
In: 2008 Workshop on Information Theoretic Methods in Science and Engineering, 18-20 Aug 2008, Tampere, Finland.


The important normalized maximum likelihood (NML) distribution is obtained via a normalization over all sequences of a given length. It has two shortcomings: the resulting model is usually not a random process, and in many cases the normalizing integral or sum is hard to compute. In contrast, the recently proposed sequentially normalized maximum likelihood (SNML) models always define a random process and are often much easier to compute. We present some results on SNML-type models in the Markovian and linear–Gaussian model classes. In the linear–Gaussian case, the resulting sequentially normalized least squares (SNLS) model is particularly interesting. The associated sequentially minimized squared deviations are smaller than both the usual least squares residuals and the squared prediction errors used in the so-called predictive least squares (PLS) criterion. The SNLS model is asymptotically optimal within the given class of distributions: it reaches the lower bound on the logarithmic prediction errors, given by the stochastic complexity, up to lower-order terms.
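The relation among the three squared-error sums mentioned in the abstract can be illustrated numerically. The sketch below (an illustration on simulated data, not code from the paper) fits a linear–Gaussian model sequentially: at each step t, the PLS error is the prediction error of the model fitted on the first t points, while the SNLS-style deviation measures y_t against the model refitted to include point t. The data-generating coefficients and sample size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])       # arbitrary true coefficients
y = X @ beta + rng.normal(size=n)

def ls_fit(Xt, yt):
    # ordinary least squares coefficients
    return np.linalg.lstsq(Xt, yt, rcond=None)[0]

# batch least squares: residual sum of squares over all n points
sse_ls = float(np.sum((y - X @ ls_fit(X, y)) ** 2))

# sequential fits: PLS uses the model fitted on points 1..t,
# SNLS-style deviations refit including the current point
e_pls, e_snls = [], []
for t in range(k, n):                    # need at least k points to fit
    b_prev = ls_fit(X[:t], y[:t])
    e_pls.append(y[t] - X[t] @ b_prev)   # honest prediction error
    b_curr = ls_fit(X[:t + 1], y[:t + 1])
    e_snls.append(y[t] - X[t] @ b_curr)  # sequentially minimized deviation

sse_pls = float(np.sum(np.array(e_pls) ** 2))
sse_snls = float(np.sum(np.array(e_snls) ** 2))

# the sequentially minimized deviations are the smallest of the three
print(sse_snls <= sse_ls and sse_snls <= sse_pls)  # True
```

The inequality is not an accident of the simulation: each SNLS-style deviation equals the corresponding PLS error shrunk by a leverage-dependent factor below one, and a standard recursive least squares identity bounds their squared sum by the batch residual sum of squares.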

EPrint Type: Conference or Workshop Item (Invited Talk)
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics
ID Code: 4187
Deposited By: Teemu Roos
Deposited On: 23 October 2008