PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Linear classification and selective sampling under low noise conditions.
Giovanni Cavallanti, Nicolò Cesa-Bianchi and Claudio Gentile
Proc. of the 22nd Conference on Neural Information Processing Systems (NIPS 2008), 2008.

Abstract

We provide a new analysis of an efficient margin-based algorithm for selective sampling in classification problems. Using the so-called Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the selective sampling versions of the basic algorithm. Our analysis reveals that, excluding logarithmic factors, the average risk of the selective sampler converges to the Bayes risk at rate N^(−(1+α)(2+α)/(2(3+α))), where N denotes the number of queried labels and α > 0 is the exponent in the low noise condition. For all α > √3 − 1 ≈ 0.73 this convergence rate is asymptotically faster than the rate N^(−(1+α)/(2+α)) achieved by the fully supervised version of the same classifier, which queries all labels, and for α → ∞ the two rates exhibit an exponential gap. Experiments on textual data reveal that simple variants of the proposed selective sampler perform much better than popular and similarly efficient competitors.
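As a quick sanity check on the two rates stated in the abstract, the sketch below compares the rate exponents as a function of α (a larger exponent means faster convergence). The helper names are illustrative, not from the paper; the crossover value √3 − 1 follows from equating the two exponents.

```python
import math

def selective_exponent(alpha: float) -> float:
    # Exponent in N^(-(1+a)(2+a)/(2(3+a))) for the selective sampler.
    return (1 + alpha) * (2 + alpha) / (2 * (3 + alpha))

def supervised_exponent(alpha: float) -> float:
    # Exponent in N^(-(1+a)/(2+a)) for the fully supervised classifier.
    return (1 + alpha) / (2 + alpha)

# The exponents coincide exactly at alpha = sqrt(3) - 1 ~ 0.732:
# setting them equal gives (2+a)^2 = 2(3+a), i.e. a^2 + 2a - 2 = 0.
crossover = math.sqrt(3) - 1

for alpha in (0.5, crossover, 1.0, 5.0):
    print(f"alpha={alpha:.3f}  selective={selective_exponent(alpha):.4f}  "
          f"supervised={supervised_exponent(alpha):.4f}")
```

For α = 0.5 the supervised exponent (0.6) beats the selective one (~0.536), while for α = 1 the selective exponent (0.75) already exceeds the supervised one (~0.667), matching the abstract's claim.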

EPrint Type: Article
Subjects: Computational, Information-Theoretic Learning with Statistics;
Theory & Algorithms
ID Code: 4743
Deposited By: Claudio Gentile
Deposited On: 24 March 2009