Using diversity of errors for selecting members of a committee classifier
Diversity of classifiers is generally accepted as necessary for combining them in a committee. Quantifying that diversity, however, is difficult, as no formal definition of it exists. Numerous measures have been proposed in the literature, but their performance is often known to be suboptimal. Here several common methods are compared with a novel approach that focuses on the diversity of the errors made by the member classifiers. Experiments on combining classifiers for handwritten character recognition are presented. The results show that the diversity-of-errors approach is beneficial, and that the novel exponential error count measure consistently finds an effective set of member classifiers.