Method for visualisation and analysis of hand and head movements in sign language video
This paper presents a method for the visualisation and motion analysis of hand and head movements in videos containing sign language and gestures. The method detects the regions of the person's bare skin in the video with an adaptive colour model, characterises the shapes of the hands and the head with a point distribution model, and tracks their motion separately using the Kanade-Lucas-Tomasi algorithm and active shape models. The quantitative results are visualised in the ELAN annotation software. The paper demonstrates the method's relevance to the annotation and analysis of sign language.
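The core of Kanade-Lucas-Tomasi tracking is a least-squares estimate of the displacement between two frames from image gradients. The snippet below is a minimal illustrative sketch of a single Lucas-Kanade step on a synthetic image, not the paper's implementation; the window size, blob shape, and shift values are arbitrary choices for demonstration.

```python
import numpy as np

def lucas_kanade_step(prev, curr):
    # One Lucas-Kanade iteration over the whole window:
    # solve the 2x2 normal equations
    #   [sum Ix*Ix  sum Ix*Iy] [u]   [-sum Ix*It]
    #   [sum Ix*Iy  sum Iy*Iy] [v] = [-sum Iy*It]
    Ix = np.gradient(prev, axis=1)   # spatial gradient in x
    Iy = np.gradient(prev, axis=0)   # spatial gradient in y
    It = curr - prev                 # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)     # estimated displacement (u, v)

# Synthetic example: a Gaussian blob translated by (0.5, 0.3) pixels.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
prev = blob(32.0, 32.0)
curr = blob(32.5, 32.3)
u, v = lucas_kanade_step(prev, curr)
print(u, v)  # displacement estimate, close to (0.5, 0.3)
```

In practice the full KLT algorithm applies this step in small windows around automatically selected corner-like features and iterates to convergence, typically within an image pyramid to handle larger motions.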