PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

On Agnostic Boosting and Parity Learning
Adam Kalai, Yishay Mansour and Elad Verbin
In: STOC 2008, May 2008, Victoria, Canada.

Abstract

The motivating problem is agnostically learning parity functions, i.e., parity with arbitrary or adversarial noise. Specifically, given random labeled examples from an arbitrary distribution, we would like to produce a hypothesis whose accuracy nearly matches the accuracy of the best parity function. Our algorithm runs in time 2^{O(n/log n)}, which matches the best known for the easier cases of learning parities with random classification noise (Blum et al., 2003) and for agnostically learning parities over the uniform distribution on inputs (Feldman et al., 2006).

Our approach is as follows. We give an agnostic boosting theorem that is capable of nearly achieving optimal accuracy, improving upon earlier studies (starting with Ben-David et al., 2001). To achieve this, we circumvent previous lower bounds by altering the boosting model. We then show that the (random noise) parity learning algorithm of Blum et al. (2000) fits our new model of agnostic weak learner. Our agnostic boosting framework is completely general and may be applied to other agnostic learning problems. Hence, it also sheds light on the actual difficulty of agnostic learning by showing that full agnostic boosting is indeed possible.
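To make the benchmark in the abstract concrete, the following is a minimal Python sketch (not the authors' algorithm) of what "the accuracy of the best parity function" means: it brute-forces all parities chi_S(x) = XOR of the bits of x indexed by S over a small labeled sample. The sample generator, the noise rate, and the helper names are illustrative assumptions; the brute-force search takes 2^n time and only defines the agnostic target, whereas the paper's algorithm achieves a comparable guarantee in time 2^{O(n/log n)}.

    import itertools
    import random

    def parity(S, x):
        # chi_S(x): XOR (sum mod 2) of the bits of x indexed by S.
        return sum(x[i] for i in S) % 2

    def best_parity_accuracy(examples, n):
        # Brute-force benchmark: accuracy of the best parity on the sample.
        # Agnostic learning asks for a hypothesis nearly matching this value.
        best = 0.0
        for r in range(n + 1):
            for S in itertools.combinations(range(n), r):
                acc = sum(parity(S, x) == y for x, y in examples) / len(examples)
                best = max(best, acc)
        return best

    # Tiny demo (assumed setup): labels come from the parity on bits {0, 2},
    # with 10% of labels flipped arbitrarily ("adversarial" noise).
    random.seed(0)
    n = 4
    examples = []
    for _ in range(200):
        x = [random.randint(0, 1) for _ in range(n)]
        y = parity((0, 2), x)
        if random.random() < 0.1:
            y ^= 1
        examples.append((x, y))
    print(best_parity_accuracy(examples, n))  # roughly 0.9 on this sample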

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 4470
Deposited By: Yishay Mansour
Deposited On: 13 March 2009