PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Stochastic Convex Optimization
Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro and Karthik Sridharan
In: COLT 2009 (2009).


For supervised classification problems, it is well known that learnability is equivalent to uniform convergence of the empirical risks, and thus to learnability by empirical minimization. Inspired by recent regret bounds for online convex optimization, we study stochastic convex optimization and uncover a surprisingly different situation in this more general setting: although the stochastic convex optimization problem is learnable (e.g. using online-to-batch conversions), no uniform convergence holds in the general case, and empirical minimization might fail. Rather than being a difference between online methods and a global minimization approach, we show that the key ingredient is strong convexity and regularization. Our results demonstrate that the celebrated theorem of Alon et al. on the equivalence of learnability and uniform convergence does not extend to Vapnik's General Setting of Learning, that in the General Setting considering only empirical minimization is not enough, and that despite Vapnik's result on the equivalence of strict consistency and uniform convergence, uniform convergence is only a sufficient, but not necessary, condition for meaningful non-trivial learnability.
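As a minimal illustration of the regularization idea discussed above (this is a hedged sketch, not the paper's counterexample construction): adding an L2 term lam * ||w||^2 to an empirical convex objective makes it strongly convex, so its minimizer is stable and generalizes. The example below uses squared loss with a synthetic linear model; all names (w_star, lam, etc.) are illustrative choices, not from the paper.

```python
import numpy as np

# Sketch: regularized empirical minimization for a stochastic convex
# objective F(w) = E_z[f(w; z)], here with squared loss
# f(w; (x, y)) = (<w, x> - y)^2 plus an L2 regularizer lam * ||w||^2.
# The regularizer makes the empirical objective strongly convex.

rng = np.random.default_rng(0)
d, n = 5, 200                      # dimension and sample size (illustrative)
w_star = rng.normal(size=d)        # ground-truth parameter
X = rng.normal(size=(n, d))        # i.i.d. samples
y = X @ w_star + 0.1 * rng.normal(size=n)

lam = 0.1
# Closed-form minimizer of (1/n) * sum_i (<x_i, w> - y_i)^2 + lam * ||w||^2:
# w_hat = (X^T X / n + lam * I)^{-1} (X^T y / n)
w_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

print("estimation error:", np.linalg.norm(w_hat - w_star))
print("training MSE:", np.mean((X @ w_hat - y) ** 2))
```

Without the `lam * np.eye(d)` term this reduces to plain empirical minimization, which the abstract shows can fail in general stochastic convex problems; with it, the strongly convex objective yields a well-behaved solution.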

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 5408
Deposited By: Ohad Shamir
Deposited On: 24 June 2009