PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Online learning of noisy data with kernels
Nicolò Cesa-Bianchi, Shai Shalev-Shwartz and Ohad Shamir
IEEE Transactions on Information Theory Volume 57, Number 12, pp. 7907-7931, 2011.

Abstract

We study online learning of linear and kernel-based predictors when individual examples are corrupted by random noise, and both the examples and the noise type can be chosen adversarially and change over time. We begin with the setting where some auxiliary information on the noise distribution is provided, and we wish to learn predictors with respect to the squared loss. Depending on the auxiliary information, we show how one can learn linear and kernel-based predictors using just one or two noisy copies of each example. We then turn to a general setting where virtually nothing is known about the noise distribution, and one wishes to learn with respect to general losses using linear and kernel-based predictors. We show how this can be achieved using a random, essentially constant number of noisy copies of each example. Allowing multiple copies cannot be avoided: indeed, we show that learning becomes impossible when only one noisy copy of each instance can be accessed. To obtain our results we introduce several novel techniques, some of which might be of independent interest.
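To make the "two noisy copies" idea concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: for the squared loss with a linear predictor w, the true gradient at (x, y) is 2(<w,x> - y)x. If two copies x1, x2 of the instance are corrupted by independent zero-mean noise, then 2(<w,x1> - y)x2 is an unbiased estimate of that gradient, since independence gives E[<w,x1> x2] = <w,x> x. All names and the Gaussian noise model here are assumptions made for illustration.

```python
import random


def noisy_copy(x, sigma, rng):
    # One noisy view of the instance x: independent zero-mean
    # Gaussian noise added to each coordinate (illustrative noise model).
    return [xi + rng.gauss(0.0, sigma) for xi in x]


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def two_copy_gradient(w, x1, x2, y):
    # Unbiased estimate of the squared-loss gradient 2*(<w,x> - y)*x,
    # built from two INDEPENDENT noisy copies x1, x2 of the same instance x.
    # Independence of the two noise draws gives E[<w,x1> * x2] = <w,x> * x,
    # so the estimate is unbiased even though x itself is never observed.
    scale = 2.0 * (dot(w, x1) - y)
    return [scale * xi for xi in x2]
```

Averaging this estimator over many pairs of noisy copies recovers the clean gradient; with zero noise it coincides with the exact gradient, which is what makes it usable inside a standard online gradient-descent update.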

EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 9197
Deposited By: Nicolò Cesa-Bianchi
Deposited On: 21 February 2012