Category-level object segmentation by combining bag-of-words models and Markov Random Fields
This paper presents an approach to segment unseen objects of known categories. At the heart of the approach lies a probabilistic model of images which captures the local appearance of objects through a bag-of-words representation. Bag-of-words models have been very successful for image categorization; however, as they model objects as loose collections of small image patches, they cannot accurately predict object boundaries. On the other hand, Markov Random Fields (MRFs), which are often used in low-level vision for general-purpose image segmentation, do incorporate the spatial layout of images. Yet, as they are usually based on very local image evidence, they fail to capture the larger-scale structures needed to recognize object categories under large appearance variations. The main contribution of this article is to combine the advantages of both approaches into a single probabilistic model. First, a mechanism based on a bag-of-words representation produces object recognition and localization at a rough spatial resolution. Second, an MRF component enforces precise object boundaries, guided by local image cues (color, texture, and edges) and by long-distance dependencies. Gibbs sampling is used to infer the model parameters and the object segmentation. The proposed method successfully segments object categories, despite highly varying appearances, cluttered backgrounds and large viewpoint changes. Through a series of experiments, we emphasize the strengths as well as the limitations of our model. First, we evaluate the results of several strategies for building the visual vocabulary. Second, we show how it is possible to combine strong labeling (segmented images) with weak labeling (images annotated with bounding boxes), in order to limit the amount of training data needed to learn the model. Third, we study the influence of the initialization on the model estimation.
Last, we present extensive experiments on four different image databases, including the challenging Pascal VOC 2007 dataset, on which we obtain state-of-the-art results.
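The combination described above can be illustrated with a minimal sketch: per-pixel log-likelihoods (standing in for the bag-of-words object evidence) serve as unary potentials, a Potts-style pairwise term enforces label smoothness between neighbouring pixels, and Gibbs sampling resamples each pixel's label from its conditional distribution. All names and parameters here (`gibbs_segment`, `beta`, the 4-neighbourhood, binary labels) are illustrative assumptions, not the paper's actual formulation, which also includes color, texture, edge cues and long-distance dependencies.

```python
import numpy as np

def gibbs_segment(unary, beta=1.0, n_sweeps=20, seed=0):
    """Binary MRF segmentation via Gibbs sampling (illustrative sketch).

    unary : (H, W, 2) array of log-likelihoods for labels {0, 1},
            playing the role of the bag-of-words object evidence.
    beta  : Potts smoothness weight rewarding neighbouring pixels
            that share a label.
    """
    rng = np.random.default_rng(seed)
    H, W, _ = unary.shape
    labels = unary.argmax(axis=2)  # initialise from the unary evidence alone
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                # Labels of the 4-connected neighbours inside the image
                neigh = [labels[x, y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < H and 0 <= y < W]
                # Conditional log-probability of each candidate label:
                # unary evidence plus Potts agreement with the neighbours
                logp = np.array([unary[i, j, k] +
                                 beta * sum(n == k for n in neigh)
                                 for k in (0, 1)], dtype=float)
                p = np.exp(logp - logp.max())
                p /= p.sum()
                labels[i, j] = rng.choice(2, p=p)
    return labels
```

With strong unary evidence the sampler essentially reproduces the bag-of-words prediction, while the `beta` term smooths noisy, isolated labels; in the paper this pairwise term is additionally modulated by image cues so that label changes align with object boundaries.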