Improved risk tail bounds for on-line algorithms
Tight bounds are derived on the risk of models in the ensemble generated by incremental training of an arbitrary learning algorithm. The result is based on proof techniques that are remarkably different from the standard risk analysis based on uniform convergence arguments, and improves on previous bounds published by the same authors.
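The setting described above — an ensemble formed from the snapshots of an incrementally trained model, each with its own risk — can be illustrated with a minimal sketch. Everything here (the toy perceptron-style learner, the data distribution, and the variable names `ensemble` and `risks`) is a hypothetical stand-in, not the paper's construction:

```python
import random

random.seed(0)

def make_example():
    # Toy distribution: x uniform in [-1, 1], label is the sign of x.
    x = random.uniform(-1, 1)
    y = 1 if x >= 0 else -1
    return x, y

class OnlineLearner:
    """Perceptron-style learner with one weight: h(x) = sign(w * x)."""
    def __init__(self):
        self.w = 0.0

    def predict(self, x):
        return 1 if self.w * x >= 0 else -1

    def update(self, x, y):
        if self.predict(x) != y:   # update only on a mistake
            self.w += y * x

# Incremental training: snapshot the hypothesis after every example,
# producing the ensemble h_1, ..., h_T whose per-member risk the
# bounds concern.
learner = OnlineLearner()
ensemble = []
for _ in range(50):
    x, y = make_example()
    learner.update(x, y)
    ensemble.append(learner.w)

# Empirical risk (0-1 loss) of each snapshot on a held-out sample.
test = [make_example() for _ in range(200)]
risks = [sum((1 if w * x >= 0 else -1) != y for x, y in test) / len(test)
         for w in ensemble]
print(len(ensemble), min(risks))
```

Early snapshots may have high risk while later ones converge; the bounds in question quantify how the true risk of such ensemble members concentrates around their observed performance.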