Supervised Learning of Gaussian Mixture Models for Visual Vocabulary Generation
Basura Fernando, Elke Rupp, Damien Muselet and Marc Sebban
The creation of semantically relevant clusters is vital in bag-of-visual-words models, which are known to be very successful for image classification tasks.
Generally, unsupervised clustering algorithms such as K-means are employed to create such clusters, from which visual dictionaries are deduced. K-means performs a hard assignment by associating each image descriptor with the cluster with the nearest mean; in this way, the within-cluster sum of squared distances is minimized. A limitation of this approach in the context of image classification is that it usually does not use any supervision, which limits the discriminative power of the resulting visual words (typically the centroids of the clusters). More recently, supervised dictionary creation methods based on both supervised information and data fitting have been proposed, leading to more discriminative visual words. However, none of them consider the uncertainty present at both the image descriptor and cluster levels. In this paper, we propose a supervised learning algorithm based on a Gaussian mixture model which not only generalizes the K-means algorithm by allowing soft assignments, but also exploits supervised information to improve the discriminative power of the clusters.
Technically, our algorithm optimizes, via an EM-based approach, a convex combination of two criteria: the first is unsupervised and based on the likelihood of the training data; the second is supervised and takes into account the purity of the clusters. We show on two well-known datasets that our method creates more relevant clusters than state-of-the-art dictionary creation methods.
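The abstract does not detail the algorithm, but the idea of blending a likelihood term with a purity term inside an EM loop can be sketched as follows. This is a minimal, hypothetical illustration only: the convex-combination coupling via `alpha`, the spherical-Gaussian components, and the purity weighting (each point's responsibilities boosted toward clusters dominated by its own class) are our assumptions, not the authors' actual formulation. Setting `alpha = 0` recovers plain unsupervised GMM fitting.

```python
import numpy as np

def em_supervised_gmm(X, y, k, alpha=0.5, iters=50, seed=0):
    """Sketch of an EM-style loop for a GMM whose objective blends data
    likelihood with cluster purity (hypothetical formulation, not the
    paper's exact method). alpha=0 gives ordinary unsupervised EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]   # init means from data points
    var = np.full(k, X.var())                 # spherical variances
    pi = np.full(k, 1.0 / k)                  # mixing weights
    classes = np.unique(y)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] ∝ pi_j * N(x_i | mu_j, var_j I)
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)            # (n, k)
        logp = -0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var) + np.log(pi)
        logp -= logp.max(1, keepdims=True)                        # stability
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # Purity term: soft class mass per cluster, column-normalized so
        # dom[c, j] = fraction of cluster j's mass belonging to class c.
        mass = np.stack([r[y == c].sum(0) for c in classes])      # (C, k)
        dom = mass / np.maximum(mass.sum(0), 1e-12)
        # Supervision: convex blend of the unsupervised responsibilities
        # with a boost toward class-pure clusters (assumed coupling).
        class_idx = np.searchsorted(classes, y)
        r = r * (1 - alpha + alpha * dom[class_idx])
        r /= r.sum(1, keepdims=True)
        # M-step: standard weighted GMM updates.
        Nk = r.sum(0) + 1e-12
        mu = (r.T @ X) / Nk[:, None]
        var = np.array([(r[:, j] * ((X - mu[j]) ** 2).sum(1)).sum()
                        for j in range(k)]) / (d * Nk) + 1e-6
        pi = Nk / n
    return mu, var, pi, r
```

On well-separated labeled data, the purity boost steers each mixture component toward a single class, which is the qualitative behavior the abstract describes for its supervised visual words.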