PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Nonparametric Hidden Markov Models
Jurgen van Gael and Zoubin Ghahramani
In: Inference and Estimation in Probabilistic Time-Series Models (2010), Cambridge University Press, Cambridge, UK, pp. 00-00.


Hidden Markov models are a rich family of probabilistic time series models with a long and successful history of applications in natural language processing, speech recognition, computer vision, bioinformatics, and many other areas of engineering, statistics, and computer science. A defining property of hidden Markov models (HMMs) is that the time series is modeled in terms of a number of discrete hidden states. Usually, the number of such states is specified in advance by the modeler, but this limits the flexibility of HMMs. Recently, attention has turned to Bayesian methods which can automatically infer the number of states in an HMM from data. A particularly elegant and flexible approach is to assume a countable but unbounded number of hidden states; this is the nonparametric Bayesian approach to hidden Markov models first introduced by Beal et al. (2002) and called the infinite HMM (iHMM). In this chapter, we review the literature on Bayesian inference in HMMs, focusing on nonparametric Bayesian models. We show the equivalence between the Polya urn interpretation of the infinite HMM and the hierarchical Dirichlet process interpretation of the iHMM in Teh et al. (2006). We describe efficient inference algorithms, including the beam sampler which uses dynamic programming. Finally, we illustrate how to use the iHMM on a simple sequence labeling task and we discuss several extensions of the iHMM.
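To make the defining property concrete (a time series modeled through a sequence of discrete hidden states), here is a minimal sketch of the generative process of an ordinary finite HMM. It is not code from the chapter; the function name, the ancestral-sampling formulation, and the example matrices are our own illustration. The iHMM discussed above replaces the fixed number of states K with a countably unbounded set.

```python
import random

def sample_hmm(pi, A, B, T, seed=0):
    """Ancestral sampling from a finite discrete HMM.

    pi : length-K initial state distribution
    A  : K x K transition matrix (row s gives p(s' | s))
    B  : K x M emission matrix (row s gives p(obs | s))
    T  : sequence length
    Returns the hidden state sequence and the observation sequence.
    """
    rng = random.Random(seed)

    def draw(probs):
        # Sample an index from a discrete distribution by inverting the CDF.
        u, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if u < acc:
                return i
        return len(probs) - 1  # guard against floating-point round-off

    states, obs = [], []
    s = draw(pi)
    for _ in range(T):
        states.append(s)
        obs.append(draw(B[s]))  # emission depends only on the current state
        s = draw(A[s])          # Markov transition to the next hidden state
    return states, obs

# Toy two-state, two-symbol HMM (illustrative parameters only).
pi = [1.0, 0.0]
A = [[0.9, 0.1],
     [0.2, 0.8]]
B = [[0.5, 0.5],
     [0.1, 0.9]]
states, obs = sample_hmm(pi, A, B, T=10, seed=1)
```

In the nonparametric setting, the rows of A are instead drawn from a hierarchical Dirichlet process, so new states can be instantiated as the data demand.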

EPrint Type: Book Section
Subjects: Learning/Statistics & Optimisation
ID Code: 5874
Deposited By: Jurgen van Gael
Deposited On: 08 March 2010