PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Learning layered motion segmentations of video
Mudigonda Pawan Kumar, Philip Torr and Andrew Zisserman
In: ICCV 2005, 17-20 Oct 2005, Beijing, China.

Abstract

We present an unsupervised approach for learning a generative layered representation of a scene from a video for motion segmentation. The learnt model is a composition of layers, which consist of one or more segments. Included in the model are the effects of image projection, lighting, and motion blur. The two main contributions of our method are: (i) a novel algorithm for obtaining the initial estimate of the model using efficient loopy belief propagation; (ii) using $\alpha\beta$-swap and $\alpha$-expansion algorithms, which guarantee a strong local minimum, for refining the initial estimate. Results are presented on several classes of objects with different types of camera motion. We compare our method with the state of the art and demonstrate significant improvements.
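The abstract refers to two inference components: loopy belief propagation for obtaining the initial estimate, and graph-cut based alpha-beta-swap / alpha-expansion moves for refinement. The sketch below is only an illustration of the first idea, not the authors' implementation: min-sum loopy belief propagation on a 4-connected grid MRF with a Potts smoothness term. The per-pixel unary costs, the single Potts weight "smooth", and the fixed iteration count are hypothetical choices made for the example.

# Minimal sketch (assumptions as stated above) of min-sum loopy BP
# on a 4-connected grid with a Potts pairwise cost.
import numpy as np

UP, DOWN, LEFT, RIGHT = 0, 1, 2, 3  # direction the incoming message arrives from


def loopy_bp_potts(unary, smooth=1.0, n_iters=30):
    """Approximate MAP labelling by min-sum loopy BP.

    unary : (H, W, L) array of per-pixel, per-label costs.
    Returns an (H, W) array of label estimates.
    """
    H, W, L = unary.shape
    msg = np.zeros((4, H, W, L))  # msg[d, y, x, l]: message into (y, x) from direction d

    def potts_message(h):
        # Min-sum message for a Potts pairwise term, normalised so its minimum is 0.
        m = np.minimum(h, h.min(axis=-1, keepdims=True) + smooth)
        return m - m.min(axis=-1, keepdims=True)

    for _ in range(n_iters):
        new = np.zeros_like(msg)

        # Messages flowing down: sender (y, x) -> receiver (y+1, x).
        h = unary + msg[UP] + msg[LEFT] + msg[RIGHT]
        new[UP, 1:] = potts_message(h)[:-1]

        # Messages flowing up: sender (y, x) -> receiver (y-1, x).
        h = unary + msg[DOWN] + msg[LEFT] + msg[RIGHT]
        new[DOWN, :-1] = potts_message(h)[1:]

        # Messages flowing right: sender (y, x) -> receiver (y, x+1).
        h = unary + msg[LEFT] + msg[UP] + msg[DOWN]
        new[LEFT, :, 1:] = potts_message(h)[:, :-1]

        # Messages flowing left: sender (y, x) -> receiver (y, x-1).
        h = unary + msg[RIGHT] + msg[UP] + msg[DOWN]
        new[RIGHT, :, :-1] = potts_message(h)[:, 1:]

        msg = new

    belief = unary + msg.sum(axis=0)
    return belief.argmin(axis=-1)


if __name__ == "__main__":
    # Toy usage: recover a clean 2-label segmentation from noisy per-pixel costs.
    rng = np.random.default_rng(0)
    truth = np.zeros((40, 40), dtype=int)
    truth[10:30, 10:30] = 1
    noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)
    unary = 1.0 - np.stack([(noisy == 0) * 1.0, (noisy == 1) * 1.0], axis=-1)
    labels = loopy_bp_potts(unary, smooth=0.8, n_iters=30)
    print("pixel disagreement with ground truth:", (labels != truth).mean())

The refinement stage mentioned in the abstract is different: alpha-beta-swap and alpha-expansion are move-making algorithms in which each move is computed by a graph cut, which is what provides the strong local-minimum guarantee.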

EPrint Type: Conference or Workshop Item (Oral)
Project Keyword: UNSPECIFIED
Subjects: Machine Vision
ID Code: 1077
Deposited By: Mudigonda Pawan Kumar
Deposited On: 08 September 2005