PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Learning noisy linear classifiers via adaptive and selective sampling
Giovanni Cavallanti, Nicolò Cesa-Bianchi and Claudio Gentile
Machine Learning, Volume 83, pp. 71-102, 2011.

Abstract

We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low-noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate N^{−(1+a)(2+a)/(2(3+a))}, where N denotes the number of queried labels and a > 0 is the exponent in the low-noise condition. For all a > 0.73 this convergence rate is asymptotically faster than the rate N^{−(1+a)/(2+a)} achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, as a grows to infinity (the hard-margin condition), the gap between the semi-supervised and fully supervised rates becomes exponential.
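
Where the 0.73 threshold comes from (a worked comparison added for intuition; it follows directly from the two rates quoted above): the semi-supervised exponent beats the supervised one exactly when

    (1+a)(2+a)/(2(3+a)) > (1+a)/(2+a)
    ⇔ (2+a)^2 > 2(3+a)        [divide both sides by (1+a) > 0 and cross-multiply]
    ⇔ a^2 + 2a − 2 > 0
    ⇔ a > √3 − 1 ≈ 0.732.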

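Since the abstract names only the algorithmic principle, the following Python sketch illustrates generic margin-based selective sampling: query a label only when the current linear predictor's margin on the instance is small. The fixed threshold tau and the perceptron-style update are illustrative assumptions, not the paper's actual query rule or update.

    import numpy as np

    def margin_based_selective_sampler(stream, dim, tau=0.1):
        """Generic margin-based selective sampling (illustrative sketch;
        not the paper's exact algorithm)."""
        w = np.zeros(dim)            # current linear hypothesis
        queried = 0                  # number of labels requested so far
        for x, get_label in stream:  # get_label() reveals y in {-1, +1} on demand
            margin = float(w @ x)
            if abs(margin) <= tau:   # small margin: predictor is uncertain
                y = get_label()      # query the label (the only label access)
                queried += 1
                if y * margin <= 0:  # mistake on a queried example:
                    w = w + y * x    # perceptron-style update (assumption)
            # large-margin instances are classified without querying
        return w, queried

On a stream of (x, get_label) pairs this trades labels for accuracy: shrinking tau reduces the number of queried labels at the cost of fewer corrective updates.
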
EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 9199
Deposited By: Nicolò Cesa-Bianchi
Deposited On: 21 February 2012