Semantic annotation of image groups with Self-Organizing Maps
Automatic image annotation has recently attracted considerable attention as a means of facilitating semantic indexing and text-based retrieval of visual content. In this paper, we propose using multiple Self-Organizing Maps to model various semantic concepts and to annotate new input images automatically. The effect of the semantic gap is compensated for by annotating multiple images concurrently, which enables a more accurate estimation of the semantic concepts' distributions. The presented method is applied to annotating images from a freely available database consisting of images from different semantic categories.
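The core idea described above, training one Self-Organizing Map per semantic concept and then scoring a whole group of images jointly against each map, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: `TinySOM`, `annotate_group`, and the grid and training parameters are all assumptions made for the example.

```python
import numpy as np

class TinySOM:
    """A minimal Self-Organizing Map (illustrative sketch only)."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.codebook = rng.normal(size=(rows * cols, dim))
        # Grid coordinates of each map unit, used by the neighborhood function.
        self.coords = np.array(
            [(r, c) for r in range(rows) for c in range(cols)], dtype=float
        )

    def train(self, data, epochs=20, lr0=0.5, sigma0=2.0):
        for epoch in range(epochs):
            # Learning rate and neighborhood radius both shrink over time.
            lr = lr0 * (1 - epoch / epochs)
            sigma = sigma0 * (1 - epoch / epochs) + 0.5
            for x in data:
                # Best-matching unit: codebook vector closest to the input.
                bmu = np.argmin(((self.codebook - x) ** 2).sum(axis=1))
                d2 = ((self.coords - self.coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma**2))[:, None]
                # Pull the BMU and its grid neighbors toward the input.
                self.codebook += lr * h * (x - self.codebook)

    def quantization_error(self, x):
        """Distance from x to its best-matching codebook vector."""
        return np.sqrt(((self.codebook - x) ** 2).sum(axis=1)).min()

def annotate_group(image_features, concept_soms):
    """Label a group of images with the concept whose SOM fits the
    whole group best, i.e. has the lowest mean quantization error."""
    scores = {
        concept: np.mean([som.quantization_error(x) for x in image_features])
        for concept, som in concept_soms.items()
    }
    return min(scores, key=scores.get)

if __name__ == "__main__":
    # Toy 2-D "features" for two hypothetical concepts.
    rng = np.random.default_rng(1)
    beach = rng.normal(loc=5.0, size=(60, 2))
    forest = rng.normal(loc=-5.0, size=(60, 2))
    soms = {"beach": TinySOM(4, 4, 2, seed=1), "forest": TinySOM(4, 4, 2, seed=2)}
    soms["beach"].train(beach)
    soms["forest"].train(forest)
    group = rng.normal(loc=5.0, size=(10, 2))
    print(annotate_group(group, soms))
```

Averaging the quantization error over the whole group, rather than scoring each image in isolation, is what lets the shared concept dominate over the per-image noise that the semantic gap introduces.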