Learnability, Stability and Uniform Convergence
Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro and Karthik Sridharan
Journal of Machine Learning Research
The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. In this paper, we consider the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases. We show that in this setting, there are non-trivial learning problems where uniform convergence does not hold and empirical risk minimization fails, and yet these problems are learnable using alternative mechanisms. Instead of uniform convergence, we identify stability as the key necessary and sufficient condition for learnability. Moreover, we show that the conditions for learnability in the general setting are significantly more complex than in supervised classification and regression.
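
As a point of reference, the following is a minimal sketch of the notions contrasted above, in standard notation that is assumed here rather than taken from the abstract: a hypothesis class $\mathcal{H}$, an unknown distribution $\mathcal{D}$ over instances $z$, a bounded loss $f(h; z)$, and an i.i.d. sample $S = (z_1, \ldots, z_m)$, with population and empirical risks
\[
F(h) = \mathbb{E}_{z \sim \mathcal{D}}\big[f(h; z)\big],
\qquad
F_S(h) = \frac{1}{m} \sum_{i=1}^{m} f(h; z_i).
\]
Uniform convergence asks that the empirical risk track the population risk simultaneously over the entire class,
\[
\mathbb{E}_S\Big[\sup_{h \in \mathcal{H}} \big|F_S(h) - F(h)\big|\Big] \xrightarrow[m \to \infty]{} 0,
\]
whereas learnability only asks that some rule $A$ (not necessarily empirical risk minimization) return a near-optimal hypothesis,
\[
\mathbb{E}_S\big[F(A(S))\big] \le \inf_{h \in \mathcal{H}} F(h) + \epsilon_m,
\qquad \epsilon_m \xrightarrow[m \to \infty]{} 0.
\]
A representative stability notion of the kind at issue (a standard on-average replace-one variant, stated here only as an illustration) requires that swapping a single example $z_i$ for an independent copy $z_i'$, yielding the sample $S^{(i)}$, perturb the loss negligibly on average:
\[
\frac{1}{m} \sum_{i=1}^{m} \mathbb{E}\big[f(A(S^{(i)}); z_i) - f(A(S); z_i)\big] \xrightarrow[m \to \infty]{} 0.
\]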