On Adaptive Confidences for Critic-Driven Classifier Combining
When combining classifiers to improve classification accuracy, a precise estimate of each member classifier's reliability can be very beneficial. One approach to estimating how confident we can be that a member classifier's result is correct is to use specialized critics to evaluate the classifiers' performance. We introduce an adaptive, critic-based confidence evaluation scheme in which each critic not only learns from the behavior of its respective classifier, but also strives to remain robust to changes in that classifier. This is accomplished by creating distribution models from the classifier's stored output decisions and weighting them so as to stay robust to changes in the classifier's behavior. Experiments on handwritten character classification are presented, and their promising results support the proposed approach.
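To make the abstract's idea concrete, the following is a minimal sketch, not the paper's actual method: a hypothetical `Critic` stores its classifier's past output scores together with their correctness, models the two groups with Gaussians (an assumption; the paper only says "distribution models"), and down-weights older samples with an exponential decay factor so the critic adapts when the classifier's behavior changes. A `combine` helper then performs a confidence-weighted vote. All names and modeling choices here are illustrative.

```python
import math
from collections import deque

class Critic:
    """Hypothetical critic for one member classifier.

    Models the distribution of the classifier's output scores separately
    for past correct and incorrect decisions, and converts a new score
    into a confidence P(correct | score) via Bayes' rule. Older stored
    decisions are exponentially down-weighted so the model adapts to
    changes in the classifier's behavior (illustrative assumption)."""

    def __init__(self, decay=0.99, max_history=500):
        self.decay = decay
        self.history = deque(maxlen=max_history)  # (score, was_correct)

    def record(self, score, was_correct):
        self.history.append((score, was_correct))

    def _weighted_stats(self, want_correct):
        # Weighted mean/variance of stored scores for one class,
        # newest samples weighted most (West's online algorithm).
        w_sum = mean = m2 = 0.0
        for age, (score, ok) in enumerate(reversed(self.history)):
            if ok != want_correct:
                continue
            w = self.decay ** age
            w_sum += w
            delta = score - mean
            mean += w * delta / w_sum
            m2 += w * delta * (score - mean)
        if w_sum == 0.0:
            return None  # no evidence for this class yet
        return w_sum, mean, max(m2 / w_sum, 1e-6)

    def confidence(self, score):
        """Estimate P(correct | score) with Gaussian class models."""
        pos = self._weighted_stats(True)
        neg = self._weighted_stats(False)
        if pos is None or neg is None:
            return 0.5  # uninformed prior before any evidence

        def gauss(x, mu, var):
            return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

        w_pos, mu_p, var_p = pos
        w_neg, mu_n, var_n = neg
        prior_p = w_pos / (w_pos + w_neg)
        lp = prior_p * gauss(score, mu_p, var_p)
        ln = (1.0 - prior_p) * gauss(score, mu_n, var_n)
        return lp / (lp + ln) if lp + ln > 0 else 0.5

def combine(decisions):
    """Confidence-weighted vote over (label, confidence) pairs."""
    totals = {}
    for label, conf in decisions:
        totals[label] = totals.get(label, 0.0) + conf
    return max(totals, key=totals.get)
```

In use, each classifier's critic would be updated whenever ground truth becomes available, and at combination time each classifier's vote is weighted by its critic's confidence; the decay factor controls how quickly the critic forgets outdated behavior.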