Learning visual attributes
Vittorio Ferrari and Andrew Zisserman
In: NIPS 2007, 3-6 Dec 2007, Vancouver, Canada.
We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as 'red', 'striped', or 'spotted'. The model views attributes as patterns of image segments that repeatedly share some characteristic properties, which can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors that is characteristic of stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio.
As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.
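The discriminative idea of scoring by a likelihood ratio between an attribute model and a background model can be illustrated on toy data. The sketch below is not the paper's segment-based generative model; it only shows the generic mechanism, assuming 1-D Gaussian features, class models fitted to positive and negative sets, and recognition by thresholding the log-likelihood ratio.

```python
import math
import random

def gaussian_logpdf(x, mu, sigma=1.0):
    """Log density of a 1-D Gaussian (unit variance by default)."""
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (x - mu) ** 2 / (2 * sigma * sigma)

def mean(xs):
    return sum(xs) / len(xs)

random.seed(0)
# Toy stand-ins for segment features: positives drawn from the "attribute"
# distribution, negatives from a generic background distribution.
pos = [random.gauss(2.0, 1.0) for _ in range(300)]
neg = [random.gauss(0.0, 1.0) for _ in range(300)]

# Fit a simple model to each class (maximum-likelihood means).
mu_fg, mu_bg = mean(pos), mean(neg)

def log_ratio(x):
    # log p(x | attribute) - log p(x | background): positive values
    # indicate the feature is better explained by the attribute model.
    return gaussian_logpdf(x, mu_fg) - gaussian_logpdf(x, mu_bg)

# Recognition on held-out samples by thresholding the log-likelihood ratio.
test_pos = [random.gauss(2.0, 1.0) for _ in range(100)]
test_neg = [random.gauss(0.0, 1.0) for _ in range(100)]
correct = sum(log_ratio(x) > 0 for x in test_pos) + sum(log_ratio(x) <= 0 for x in test_neg)
acc = correct / 200
print(f"toy recognition accuracy: {acc:.2f}")
```

In the paper itself, the ratio is taken between the attribute model and a background model over image segments, which is what allows learning from unsegmented, weakly supervised images; the toy threshold-at-zero decision above plays the role of the recognition step.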
EPrint Type: Conference or Workshop Item (Poster)
Deposited By: Mudigonda Pawan Kumar
Deposited On: 27 December 2007