Smooth Object Retrieval using a Bag of Boundaries
R. Arandjelović and Andrew Zisserman
In: ICCV 2011, 6-13 November 2011, Barcelona.
We describe a scalable approach to 3D smooth object retrieval which searches for and localizes all the occurrences of a user-outlined object in a dataset of images in real time. The approach is illustrated on sculptures.
A smooth object is represented by its material appearance (sufficient for foreground/background segmentation) and imaged shape (using a set of semi-local boundary descriptors). The descriptors are tolerant to scale changes, segmentation failures, and limited viewpoint changes. Furthermore, we show that the descriptors may be vector quantized (into a bag-of-boundaries), giving a representation that is suited to the standard visual word architectures for immediate retrieval of specific objects.
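The vector quantization step can be sketched in miniature: each boundary descriptor is assigned to its nearest codebook vector ("boundary word"), and an image is summarized as a histogram of word counts. This is a minimal illustration of the general bag-of-words scheme, assuming a precomputed codebook; the toy 2-D descriptors and codebook below are illustrative stand-ins, not the paper's actual semi-local boundary descriptors.

```python
# Minimal sketch of vector-quantizing descriptors into a bag-of-boundaries
# histogram. The codebook and descriptors here are hypothetical toy data.

def nearest_word(descriptor, codebook):
    """Index of the codebook vector closest to the descriptor (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(descriptor, codebook[i]))

def bag_of_boundaries(descriptors, codebook):
    """Histogram of visual-word occurrences over all boundary descriptors."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    return hist

# Toy example: 2-D descriptors quantized against a 3-word codebook.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
descriptors = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.05, 0.0)]
print(bag_of_boundaries(descriptors, codebook))  # → [2, 1, 1]
```

Once an image is reduced to such a histogram, standard inverted-index visual-word retrieval machinery applies directly, which is what makes the representation suitable for immediate large-scale search.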
We introduce a new dataset of 6K images containing sculptures by Moore and Rodin, annotated with ground truth for the occurrence of twenty 3D sculptures. It is demonstrated that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion, and also that instances of the same shape can be retrieved even though they may be made of different materials.