Adaptive Learning via the Balancing Principle
The choice of the regularization parameter is an important problem in supervised learning, since the performance of most algorithms depends crucially on one or more such parameters. A central issue is the relationship between prior information on the problem and the parameter choice, which is related to the problem of adaptive estimation. In this paper we present a strategy, the balancing principle, for choosing the regularization parameter without knowledge of the regularity of the target function. This choice adaptively achieves the best error rate. Our main result applies to regularization algorithms in reproducing kernel Hilbert spaces with the square loss, though we also study how a similar principle can be used in other situations. Our results immediately yield adaptive parameter choices for various recently studied kernel methods.
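To make the idea concrete, the following is a minimal, hypothetical sketch of a Lepski-type balancing rule applied to kernel ridge regression with the square loss. The kernel width, the parameter grid, the constant C = 4, and the sample-error proxy 1/(sqrt(n) * lambda) are illustrative assumptions, not the constants or bounds of the paper; a common variant of the rule, used here, stops at the first violated comparison.

```python
import numpy as np

def balancing_index(fits, var_bound, C=4.0):
    """Lepski-type balancing rule (sketch).

    fits: estimators f_{lam_i} evaluated on a common grid, ordered by
          increasing regularization parameter lam_i.
    var_bound: var_bound[i] is a proxy for the sample error of f_{lam_i}
          (decreasing in lam_i).
    Returns the index of the largest lam_i whose fit agrees with every
    fit at smaller parameters up to the noise level; a common variant,
    used here, stops at the first violation.
    """
    chosen = 0
    for i in range(len(fits)):
        if all(np.linalg.norm(fits[i] - fits[j]) <= C * var_bound[j]
               for j in range(i)):
            chosen = i
        else:
            break
    return chosen

rng = np.random.default_rng(0)

# Synthetic 1-d regression data (illustrative only).
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

# Gaussian kernel matrix (width 0.1 is an arbitrary choice).
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))

# Kernel ridge regression fits on a geometric parameter grid,
# ordered from small lambda (low bias, high variance) upward.
lams = np.geomspace(1e-6, 1e-1, 20)
fits = [K @ np.linalg.solve(K + n * lam * np.eye(n), y) for lam in lams]

# Hypothetical sample-error proxy, decreasing in lambda, mimicking
# bounds of the form  C / (sqrt(n) * lambda).
var_bound = 1.0 / (np.sqrt(n) * lams)

idx = balancing_index(fits, var_bound)
print("selected lambda:", lams[idx])
```

The point of the rule is that it never evaluates the unknown approximation (bias) term: it only compares computable estimators against each other at a scale set by the variance proxy, which is what allows adaptivity to the unknown regularity of the target function.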