Simple and SimplerSVM
S. V. N. Vishwanathan, Alex Smola, and Nicol Schraudolph
In: NIPS Workshop on Large Scale Kernel Machines, 5-11 December 2005, Vancouver, Canada.
We present a fast iterative support vector training algorithm for the
quadratic hard margin formulation. Our algorithm works by incrementally
changing a candidate support vector set using a locally greedy approach,
until the supporting hyperplane is found within a finite number of iterations.
It is derived by a simple (yet computationally crucial) modification of the
incremental SVM training algorithm of Cauwenberghs and Poggio, which allows
us to perform update operations very efficiently, in particular when the
kernel matrix is rank-degenerate. We give constant-time methods for
initializing the algorithm, together with experimental evidence for its speed
compared to methods such as Sequential Minimal Optimization and the Nearest
Point Algorithm. We also indicate how to extend our algorithm to the linear
soft-margin loss formulation, and present results on a variety of real-life
datasets to validate our claims.
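For reference, the quadratic hard-margin formulation the abstract refers to is the standard SVM dual; restricted to a candidate set S of support vectors (with the inequality constraints treated as inactive) it reduces to a small bordered linear system. The notation below (Q, S, b) is ours for illustration, not taken from the paper:

\[
\min_{\alpha}\ \tfrac{1}{2}\,\alpha^{\top} Q\,\alpha - \mathbf{1}^{\top}\alpha
\quad \text{s.t.}\quad \alpha \geq 0,\ \ y^{\top}\alpha = 0,
\qquad Q_{ij} = y_i\, y_j\, k(x_i, x_j),
\]
\[
\begin{pmatrix} Q_{SS} & y_S \\ y_S^{\top} & 0 \end{pmatrix}
\begin{pmatrix} \alpha_S \\ b \end{pmatrix}
=
\begin{pmatrix} \mathbf{1} \\ 0 \end{pmatrix}.
\]

Growing the candidate set by one point appends a single row and column to this bordered matrix, so its factorization (or inverse) can be updated in O(|S|^2) time instead of being recomputed from scratch; this is our reading of where the Cauwenberghs-Poggio style updates gain their efficiency, not a statement lifted from the paper.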
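As a rough illustration of the greedy candidate-set idea described in the abstract, here is a minimal sketch of such an iteration: solve the restricted system on the current candidates, drop candidates whose multiplier turns negative, and add the worst margin violator until all KKT conditions hold. It is an illustrative active-set scheme under our own assumptions (RBF kernel, separable data, naive initialization), not the authors' algorithm or its efficient incremental updates.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_hard_margin_svm(X, y, kernel=rbf_kernel, tol=1e-6, max_iter=1000):
    """Greedy candidate-set sketch for the hard-margin dual
        min 0.5 a^T Q a - 1^T a   s.t.  a >= 0,  y^T a = 0,
    with Q_ij = y_i y_j k(x_i, x_j) and labels y in {-1, +1}."""
    n = len(y)
    K = kernel(X, X)
    Q = (y[:, None] * y[None, :]) * K

    # Start from one point of each class (the paper uses a faster,
    # constant-time initialization; this is only a stand-in).
    S = [int(np.argmax(y == 1)), int(np.argmax(y == -1))]

    for _ in range(max_iter):
        # Solve the KKT system restricted to S:
        #   [Q_SS  y_S] [a_S]   [1]
        #   [y_S^T  0 ] [ b ] = [0]
        # dropping candidates whose multiplier turns negative.
        while True:
            idx = np.array(S)
            m = len(idx)
            A = np.zeros((m + 1, m + 1))
            A[:m, :m] = Q[np.ix_(idx, idx)]
            A[:m, m] = y[idx]
            A[m, :m] = y[idx]
            rhs = np.concatenate([np.ones(m), [0.0]])
            sol = np.linalg.lstsq(A, rhs, rcond=None)[0]
            a_S, b = sol[:m], sol[m]
            if (a_S >= -tol).all() or m <= 2:
                break
            S.pop(int(np.argmin(a_S)))   # remove the most negative multiplier

        # Margins for all points under the current candidate solution.
        f = K[:, idx] @ (a_S * y[idx]) + b
        violation = 1.0 - y * f
        worst = int(np.argmax(violation))
        if violation[worst] <= tol:      # all points satisfy y_i f(x_i) >= 1
            break
        if worst not in S:
            S.append(worst)

    alpha = np.zeros(n)
    alpha[idx] = np.maximum(a_S, 0.0)
    return alpha, b

Note that this sketch re-solves the restricted system from scratch each time; the efficiency the abstract claims comes precisely from replacing that step with incremental updates of the factorization as points enter or leave the candidate set.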