A convergence theorem for variational EM-like algorithms: application to image segmentation
Variational Expectation Maximization (VEM) algorithms are a class of procedures obtained by introducing variational approximation principles into standard Expectation Maximization (EM) algorithms. They have been popular in cases where the E-step of EM is intractable, which occurs frequently outside simple or standard settings. Unfortunately, for inference in hidden Markov random fields, and therefore in many applications of interest in image analysis, they do not address the additional problem of an intractable M-step. In this work, we propose a new class of algorithms that can be viewed as stochastic perturbations of VEM algorithms, together with theoretical tools to study their convergence properties. We focus more specifically on one such perturbation, which we name Monte Carlo VEM (MCVEM). The resulting algorithm has the advantage of being tractable in practice, and we are able to prove its convergence: the MCVEM paths almost surely have the same limit set as the VEM paths. This establishes the theoretical justification for the algorithm. In addition, experiments on synthetic and real-world images show that the algorithm's performance is very close to, and sometimes better than, that of other existing variational EM-like algorithms.