Exploiting Data-Independence for Fast Belief-Propagation
Julian McAuley and Tiberio Caetano
International Conference on Machine Learning
Maximum a posteriori (MAP) inference in graphical models requires that we maximize the sum of two terms: a data-dependent term, encoding the conditional likelihood of a certain labeling given an observation, and a data-independent term, encoding some prior over labelings. Often, the data-dependent factors contain fewer latent variables than the data-independent factors -- for instance, many grid- and tree-structured models consist of only first-order conditionals despite having pairwise priors. In this paper, we note that MAP inference in any such graphical model can be made substantially faster by appropriately preprocessing its data-independent terms. Our main result is to show that message-passing in any such pairwise model has an expected-case exponent of only $1.5$ in the number of states per node, leading to significantly faster algorithms than the standard quadratic-time solution.
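To make the claim concrete, the following is a minimal Python sketch of the kind of primitive that yields the $1.5$ exponent: computing $\max_j \left[a_j + b_j\right]$ when the sorted orders of $a$ and $b$ are known. Because the pairwise prior is data-independent, its rows can be sorted offline; the data-dependent unary term is sorted once per message, and each row maximum then takes only $O(\sqrt{N})$ steps in expectation (a birthday-paradox argument), giving $O(N^{1.5})$ per message rather than $O(N^2)$. The function names (`max_sum_sorted`, `message`) and the stopping rule shown here are an illustrative reconstruction in the spirit of the paper, not its verbatim algorithm.

```python
import numpy as np

def max_sum_sorted(a, b, p, q):
    """Return max_j a[j] + b[j], given index arrays p and q that sort
    a and b in decreasing order. Walk both orders in parallel and stop
    once some index has appeared in both prefixes: any index outside
    both prefixes is dominated by that shared index in both a and b.
    For random orderings this stops after O(sqrt(N)) steps in
    expectation, by a birthday-paradox argument."""
    seen_p, seen_q = set(), set()
    best = -np.inf
    for k in range(len(a)):
        i, j = p[k], q[k]
        best = max(best, a[i] + b[i], a[j] + b[j])
        seen_p.add(i)
        seen_q.add(j)
        if i in seen_q or j in seen_p:
            return best
    return best

def message(prior, unary, row_order):
    """Max-product message (in log-space): m[i] = max_j prior[i, j] + unary[j].
    row_order[i] sorts row i of the data-independent prior, computed
    offline; the unary term is sorted once per message. Expected cost
    is O(N log N + N * sqrt(N)) = O(N^1.5), versus O(N^2) naively."""
    q = np.argsort(-unary)
    return np.array([max_sum_sorted(prior[i], unary, row_order[i], q)
                     for i in range(len(unary))])

# Hypothetical usage: compare against the naive quadratic-time message.
N = 200
rng = np.random.default_rng(0)
prior = rng.standard_normal((N, N))     # data-independent pairwise term
unary = rng.standard_normal(N)          # data-dependent unary term
row_order = np.argsort(-prior, axis=1)  # offline preprocessing
msg = message(prior, unary, row_order)
naive = (prior + unary).max(axis=1)
```

The preprocessing (`row_order`) is amortized across all observations, which is exactly why the data-independence of the prior matters.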