A Local Basis Representation for Estimating Human Pose from Cluttered Images
Ankur Agarwal and William Triggs
In: Asian Conference on Computer Vision 2006, 13-16 Jan 2006, Hyderabad, India.
Recovering the pose of a person from single images is a challenging problem. This paper describes a bottom-up approach that uses local image features to estimate human upper-body pose from single images with cluttered backgrounds. The method covers the image window with a dense grid of local gradient orientation histograms, then applies non-negative matrix factorization to learn a set of bases corresponding to local features on the human body, enabling selective encoding of human-like features in the presence of background clutter. Pose is then recovered by direct regression. This approach allows us to key on gradient patterns such as shoulder contours and bent elbows that are characteristic of humans and carry important pose information, unlike current regression-based methods that either use weak limb detectors or require prior segmentation. The system is trained on a database of images with labelled poses. We show that it estimates pose with performance comparable to current example-based methods, but unlike them it works in the presence of natural backgrounds, without any prior segmentation.
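The encode-then-regress pipeline in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the descriptor dimensions, basis count, pose dimensionality, and the use of a ridge solve for the regression step are all illustrative assumptions. It shows the two core operations the abstract names, non-negative matrix factorization of dense histogram descriptors (here via Lee–Seung multiplicative updates) followed by a direct linear regression from the NMF coefficients to pose parameters.

```python
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9):
    """Factor a non-negative matrix V (d x n) as V ~ W @ H with
    W, H >= 0, using Lee-Seung multiplicative updates.
    Columns of W play the role of learned local bases; columns of H
    are the non-negative encodings of each image window."""
    rng = np.random.default_rng(0)
    d, n = V.shape
    W = rng.random((d, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy stand-in for dense gradient-orientation-histogram descriptors:
# each column is one image window's concatenated histogram grid
# (64-dim descriptors and 40 training windows are arbitrary choices).
rng = np.random.default_rng(1)
V = rng.random((64, 40))
W, H = nmf(V, k=10)               # W: local bases, H: per-window codes

# Direct regression from NMF coefficients to pose parameters.
# A regularised least-squares (ridge) solve is an assumption here;
# the 6-D upper-body pose labels are synthetic placeholders.
poses = rng.random((40, 6))
lam = 1e-3
A = np.linalg.solve(H @ H.T + lam * np.eye(10), H @ poses).T
pred = A @ H                      # (6, 40) predicted pose vectors
```

Because the NMF updates are multiplicative, both factors stay non-negative throughout, which is what lets the bases act as additive, part-like local features rather than signed components.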
EPrint Type: Conference or Workshop Item (Oral)
Deposited By: Ankur Agarwal
Deposited On: 28 November 2005