Employing signed TV broadcasts for automated learning of British Sign Language
Patrick Buehler, Mark Everingham and Andrew Zisserman
In: Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, 22-23 May 2010, Valletta, Malta.
We present several contributions towards automatic recognition of BSL signs from continuous signing video sequences: (i) automatic detection and tracking of the hands using a generative model of the image; (ii) automatic learning of signs from TV broadcasts of single signers, using only the supervisory information available from subtitles; (iii) discriminative signer-independent sign recognition using automatically extracted training data from a single signer. Our source material consists of many hours of video with continuous signing and aligned subtitles recorded from BBC digital television. Visually, this material is very challenging for detecting and tracking the signer for a number of reasons, including self-occlusions, self-shadowing, motion blur, and in particular the changing background; it is also a challenging learning situation, since the supervision provided by the subtitles is both weak and noisy.
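The weak supervision described above can be illustrated with a minimal sketch: subtitle intervals whose text mentions a target word become candidate windows in which the corresponding sign may (or may not) occur. The data structures, function name, and padding parameter here are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of weak supervision from subtitles: a subtitle mentioning
# the target word only loosely indicates that the sign occurs somewhere
# in (or near) that interval -- hence "weak and noisy" supervision.
# All names and parameters below are assumptions for illustration.

def candidate_windows(subtitles, target_word, pad=0.5):
    """Return (start, end) video intervals whose subtitle text contains
    target_word. These serve as noisy positive windows for learning."""
    target = target_word.lower()
    windows = []
    for start, end, text in subtitles:
        if target in text.lower().split():
            # Pad the interval: signing is only loosely time-aligned
            # with the subtitle display times.
            windows.append((max(0.0, start - pad), end + pad))
    return windows

# Toy example: three subtitle entries as (start_sec, end_sec, text).
subs = [
    (0.0, 2.0, "The weather tomorrow"),
    (2.0, 4.5, "Rain is expected in the north"),
    (4.5, 6.0, "but the south stays dry"),
]
print(candidate_windows(subs, "rain"))  # [(1.5, 5.0)]
```

A learner must then locate the sign within each candidate window, since the word may appear in the subtitle without the sign being performed, and vice versa.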