## Abstract

Hidden Markov models are a rich family of probabilistic time series models with a long and successful history of applications in natural language processing, speech recognition, computer vision, bioinformatics, and many other areas of engineering, statistics, and computer science. A defining property of hidden Markov models (HMMs) is that the time series is modeled in terms of a number of discrete hidden states. Usually, the number of such states is specified in advance by the modeler, but this limits the flexibility of HMMs. Recently, attention has turned to Bayesian methods which can automatically infer the number of states in an HMM from data. A particularly elegant and flexible approach is to assume a countable but unbounded number of hidden states; this is the nonparametric Bayesian approach to hidden Markov models first introduced by Beal et al. (2002) and called the *infinite HMM* (iHMM). In this chapter, we review the literature on Bayesian inference in HMMs, focusing on nonparametric Bayesian models. We show the equivalence between the Polya urn interpretation of the infinite HMM and the hierarchical Dirichlet process interpretation of the iHMM in Teh et al. (2006). We describe efficient inference algorithms, including the beam sampler, which uses dynamic programming. Finally, we illustrate how to use the iHMM on a simple sequence labeling task, and we discuss several extensions of the iHMM.