## Abstract

In this technical report we investigate the relationship between generalization error bounds based on VC dimension and Rademacher penalties. We show that a version of the standard VC bound can be recovered from the Rademacher bound, thus providing a direct proof that Rademacher bounds are always at least as good as VC bounds (modulo a small constant factor). The proof highlights in a transparent way the properties of the learning sample that the Rademacher bound takes advantage of but the VC bound overlooks. This clarifies why and when Rademacher penalization yields better results than VC dimension bounds do. As a byproduct we get a new simple proof of the fact that the conditional expectation of the Rademacher penalty can be upper bounded by a function of empirical shatter coefficients. Our empirical experiments show that the Rademacher bound can beat VC bounds even when the distribution generating the learning sample is as bad as can be.
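To make the central quantity concrete: the Rademacher penalty measures how well a hypothesis class can correlate with random sign labels on the observed sample. The sketch below is a minimal illustration, not the procedure from this report; it Monte Carlo estimates the empirical Rademacher penalty for an assumed toy class of one-dimensional threshold classifiers (the function name and the choice of class are ours, purely for illustration).

```python
import numpy as np

def empirical_rademacher_thresholds(x, n_draws=200, rng=None):
    """Monte Carlo estimate of the empirical Rademacher penalty
    sup_h (1/n) sum_i sigma_i h(x_i), where h ranges over the
    (illustrative) class of threshold classifiers h_t(x) = sign(x - t)
    and their sign flips, and sigma_i are i.i.d. random +-1 labels."""
    rng = np.random.default_rng(rng)
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)
        # h_t labels the k smallest points -1 and the rest +1 (k = 0..n),
        # so its correlation with sigma is sum(sigma) - 2 * prefix_sum(k).
        prefix = np.concatenate(([0.0], np.cumsum(sigma)))
        corr = sigma.sum() - 2.0 * prefix
        # abs() also covers the sign-flipped classifiers.
        total += np.max(np.abs(corr)) / n
    return total / n_draws
```

A sample that the class can fit well under random labels (here: a single point, which thresholds shatter trivially) yields penalty 1, while larger samples drive the penalty down; this data dependence is exactly what lets the Rademacher bound exploit properties of the learning sample that a fixed VC-dimension bound ignores.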