
Stochastic Convex Optimization
Shai Shalev-Shwartz, Ohad Shamir, Karthik Sridharan and Nathan Srebro
In: COLT 2009, 18-20 June 2009, Montreal.

Abstract

For supervised classification problems, it is well known that learnability is equivalent to uniform convergence of the empirical risks, and thus to learnability by empirical minimization. Inspired by recent regret bounds for online convex optimization, we study stochastic convex optimization, and uncover a surprisingly different situation in this more general setting: although the stochastic convex optimization problem is learnable (e.g. using online-to-batch conversions), no uniform convergence holds in the general case, and empirical minimization might fail. Rather than being a difference between online methods and a global minimization approach, we show that the key ingredient is strong convexity and regularization. Our results demonstrate that the celebrated theorem of Alon et al. on the equivalence of learnability and uniform convergence does not extend to Vapnik's General Setting of Learning, that in the General Setting considering only empirical minimization is not enough, and that despite Vapnik's result on the equivalence of strict consistency and uniform convergence, uniform convergence is only a sufficient, but not necessary, condition for meaningful non-trivial learnability.
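As a brief sketch of the setting discussed in the abstract (the notation here is ours, chosen for illustration, and not taken from the paper): a stochastic convex optimization problem asks to minimize an expected objective over a convex domain,

\[ \min_{w \in \mathcal{W}} F(w), \qquad F(w) = \mathbb{E}_{z \sim \mathcal{D}}\big[ f(w; z) \big], \]

where each $f(\cdot\,; z)$ is convex over the convex set $\mathcal{W}$, and the learner sees only an i.i.d. sample $z_1, \dots, z_n$ from the unknown distribution $\mathcal{D}$. Empirical minimization returns

\[ \hat{w} \in \arg\min_{w \in \mathcal{W}} \frac{1}{n} \sum_{i=1}^{n} f(w; z_i), \]

while a regularized variant of the kind the abstract points to instead minimizes the empirical average plus a strongly convex penalty, e.g. $\frac{1}{n} \sum_{i=1}^{n} f(w; z_i) + \frac{\lambda}{2} \|w\|^2$; per the abstract's thesis, it is this strong convexity that makes the returned point generalize even though uniform convergence can fail and plain empirical minimization might not succeed.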

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Theory & Algorithms
ID Code: 5419
Deposited By: Shai Shalev-Shwartz
Deposited On: 02 July 2009