Combining Appearance and Motion for Human Action Classification in Videos
Paramveer S. Dhillon, Sebastian Nowozin and Christoph Lampert
Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
We study activity classification in videos and present a novel approach for recognizing human action categories by combining information from the appearance and the motion of human body parts.
Our approach uses a tracking step based on particle filtering and a local non-parametric clustering step. Motion information is provided by the trajectories of the cluster modes of local sets of particles, while statistics of the particles in each cluster, accumulated over a number of frames, provide the appearance information. We then use a "Bag of Words" model to build one histogram per video sequence from this set of robust appearance and motion descriptors. These histograms capture characteristic information that lets us discriminate among various human actions and thus classify them correctly.
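The two core ingredients of the pipeline above can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal illustration, assuming a Gaussian-kernel mean-shift iteration as the non-parametric mode finder for a particle cloud, and a fixed, precomputed visual vocabulary for the bag-of-words histogram (function names `mean_shift_mode` and `bow_histogram` are hypothetical):

```python
import numpy as np

def mean_shift_mode(points, bandwidth=1.0, iters=20):
    """Estimate the mode (densest location) of a 2-D particle cloud
    with a Gaussian-kernel mean-shift iteration. The trajectory of
    this mode across frames would serve as a motion descriptor."""
    mode = points.mean(axis=0)
    for _ in range(iters):
        d2 = ((points - mode) ** 2).sum(axis=1)       # squared distances to current mode
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # Gaussian kernel weights
        mode = (w[:, None] * points).sum(axis=0) / w.sum()
    return mode

def bow_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word and return
    an L1-normalized histogram over the vocabulary: one such
    histogram per video sequence feeds the classifier."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                          # nearest-word index per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy usage: particles scattered around (5, 5); a 2-word vocabulary.
rng = np.random.default_rng(0)
particles = rng.normal([5.0, 5.0], 0.3, size=(100, 2))
mode = mean_shift_mode(particles, bandwidth=1.0)       # close to (5, 5)

vocab = np.array([[0.0, 0.0], [10.0, 10.0]])
desc = np.array([[0.1, 0.1], [9.9, 9.8], [0.2, -0.1]])
hist = bow_histogram(desc, vocab)                      # sums to 1
```

In the full method the descriptors would come from the tracked particle clusters themselves, and the per-video histograms would be fed to a standard classifier.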
We tested our approach on the standard KTH and Weizmann human action datasets and obtained results comparable to the state of the art. Additionally, our approach distinguishes activities that involve motion of the complete body from those in which only certain body parts move; in other words, it discriminates well between activities with "gross motion", such as running and jogging, and those with "local motion", such as waving and boxing.
EPrint Type: Monograph (Technical Report)
Deposited By: Christoph Lampert
Deposited On: 24 March 2009