Hand detection using multiple proposals
Arpit Mittal, Andrew Zisserman and Philip Torr
In: BMVC 2011, 29 August - 2 September 2011, Dundee.
We describe a two-stage method for detecting hands and their orientation in unconstrained images. The first stage uses three complementary detectors to propose hand bounding boxes. Each bounding box is then scored by the three detectors independently, and a second-stage classifier, learnt from these scores, computes a final confidence for each proposal.
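The second-stage fusion described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the weights, bias, and logistic squashing are made-up stand-ins for whatever classifier the authors trained on the three detector scores.

```python
import math

def second_stage_score(shape_s, context_s, skin_s,
                       w=(1.2, 0.8, 0.6), b=-0.5):
    """Fuse the three detector scores (shape, context, skin) for one
    proposal with a linear model; weights here are illustrative only."""
    z = w[0] * shape_s + w[1] * context_s + w[2] * skin_s + b
    return 1.0 / (1.0 + math.exp(-z))  # squash to a confidence in (0, 1)

# Each proposal carries its bounding box and its three detector scores.
proposals = [
    {"box": (10, 20, 60, 80), "scores": (2.1, 0.4, 1.0)},
    {"box": (15, 25, 55, 75), "scores": (0.3, 0.2, 0.1)},
]
for p in proposals:
    p["conf"] = second_stage_score(*p["scores"])
```

The point of the second stage is that a proposal weak under one cue (e.g. shape) can still score highly if the other cues agree.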
We make the following contributions: (i) we add context-based and skin-based proposals to a sliding-window shape-based detector to increase recall; (ii) we develop a new method of non-maximum suppression based on super-pixels; and (iii) we introduce a fully annotated hand dataset for training and testing.
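One way to realise super-pixel-based non-maximum suppression is to measure overlap between detections by the super-pixels they cover rather than by box IoU. The sketch below is an assumed re-implementation of that idea, not the authors' code; the greedy loop, threshold, and data layout are illustrative choices.

```python
def superpixel_nms(detections, overlap_thresh=0.5):
    """Greedy NMS where overlap between two detections is the Jaccard
    index of the super-pixel sets they cover (illustrative sketch)."""
    kept = []
    for det in sorted(detections, key=lambda d: d["conf"], reverse=True):
        sp = det["superpixels"]
        # Keep a detection only if it shares few super-pixels with
        # every higher-scored detection already kept.
        if all(len(sp & k["superpixels"]) / len(sp | k["superpixels"])
               < overlap_thresh for k in kept):
            kept.append(det)
    return kept

dets = [
    {"conf": 0.9, "superpixels": {1, 2, 3, 4}},
    {"conf": 0.7, "superpixels": {2, 3, 4, 5}},  # same hand region
    {"conf": 0.6, "superpixels": {8, 9}},        # disjoint region
]
print([d["conf"] for d in superpixel_nms(dets)])  # → [0.9, 0.6]
```

Using super-pixels rather than raw boxes lets suppression follow object boundaries, so detections of nearby but distinct hands are less likely to suppress one another.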
We show that the hand detector exceeds the state of the art on two public datasets, including the PASCAL VOC 2010 human layout challenge.