Spectral Algorithms for Supervised Learning
We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same principle that numerically stabilizes a matrix inversion problem is also crucial for avoiding over-fitting. The various methods share a common derivation but differ in their computational and theoretical properties. We describe examples of such algorithms, analyze their classification performance on several datasets, and discuss their applicability to real-world problems.
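To make the connection between stabilized matrix inversion and regularized learning concrete, the following is a minimal sketch of spectral filtering for kernel regression. The Gaussian kernel, the Tikhonov and truncated-SVD filters, and all function names here are illustrative assumptions, not the paper's implementation: the common pattern is to eigendecompose the kernel matrix and replace the exact (unstable) inverse eigenvalues 1/λ with a filtered version g(λ).

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances, then the Gaussian kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def spectral_fit(K, y, filter_fn):
    # Eigendecompose the symmetric PSD kernel matrix and apply a
    # spectral filter g(lambda) approximating 1/lambda to its eigenvalues.
    w, V = np.linalg.eigh(K)
    g = filter_fn(w)            # filtered inverse eigenvalues
    return V @ (g * (V.T @ y))  # coefficient vector alpha

# Two classical filters (illustrative choices):
# Tikhonov (ridge): g(l) = 1 / (l + n * lam) -- shifts eigenvalues away from 0.
tikhonov = lambda lam, n: (lambda w: 1.0 / (w + n * lam))
# Truncated SVD: keep 1/l above a threshold, zero out the rest.
tsvd = lambda t: (lambda w: np.where(w > t, 1.0 / np.maximum(w, t), 0.0))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)

K = gaussian_kernel(X, X, sigma=0.5)
alpha = spectral_fit(K, y, tikhonov(1e-3, len(y)))
y_hat = K @ alpha
print("training MSE:", np.mean((y_hat - y) ** 2))
```

Both filters fit the template above; swapping `tikhonov` for `tsvd` changes only how aggressively small eigenvalues (the directions most corrupted by noise) are suppressed, which is where the methods' computational and statistical trade-offs arise.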