Audio-Visual Feature Extraction for Semi-Automatic Annotation of Meetings
In this paper we present the building blocks of our semi-automatic annotation tool, which supports multi-modal and multi-level annotation of meetings. The main focus is on the design and functionality of the modules for recognizing meeting actions. The key features, the identity and position of the speakers, are provided by two modalities: audio and video. Three audio algorithms (Voice Activity Detection, Speaker Identification, and Direction of Arrival) and three video algorithms (Detection, Tracking, and Identification) form the low-level feature extraction components. The low-level features are merged automatically, and the recognized actions are proposed to the user through visualization. The annotation labels are related to, but not limited to, events during meetings. The user can then confirm or, if necessary, modify each suggestion before the actions are stored in a database.