PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Prediction with the SVM using test point margins
Sureyya Ozogur-Akyuz, Zakria Hussain and John Shawe-Taylor
Annals of Information Systems, Special Issue on Optimization Methods in Machine Learning, 2009.


Support vector machines (SVMs) carry out binary classification by constructing a maximal margin hyperplane between the two classes of observed (training) examples and then classifying test points according to the half-spaces in which they reside (irrespective of the distances between the test examples and the hyperplane). Cross-validation involves finding the one SVM model, together with its optimal parameters, that minimizes the training error and generalizes well to future data. In contrast, in this paper we collect all of the models found in the model selection phase and make predictions according to the model whose hyperplane achieves the maximum separation from a test point. This corresponds directly to the L_{\infty} norm for choosing SVM models at the testing stage. Furthermore, we also investigate more general techniques corresponding to different L_p norms and show how these methods allow us to avoid the complex and time-consuming paradigm of cross-validation. Experimental results demonstrate this advantage, showing significant decreases in computational time as well as competitive generalization error.
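The L_{\infty} selection rule described above can be sketched as follows. This is a minimal illustration assuming scikit-learn's SVC and a toy dataset (both are assumptions for illustration; the paper does not prescribe a particular library or parameter grid): every model from the parameter grid is kept, and each test point is classified by the model whose hyperplane lies farthest from it.

```python
# Sketch of the L_infinity test-point-margin rule: keep all models from
# the model selection phase and, per test point, predict with the model
# achieving the maximum absolute margin (distance to its hyperplane).
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Toy two-class data: two Gaussian blobs centred at +2 and -2.
X = np.vstack([rng.randn(40, 2) + 2, rng.randn(40, 2) - 2])
y = np.hstack([np.ones(40), -np.ones(40)])
X_test = np.vstack([rng.randn(10, 2) + 2, rng.randn(10, 2) - 2])

# Train one SVM per candidate parameter value (the "model selection" set);
# no cross-validation is used to single out one winner.
models = [SVC(kernel="linear", C=C).fit(X, y) for C in [0.01, 0.1, 1, 10]]

# Signed distance of each test point to each model's hyperplane:
# shape (n_models, n_test).
margins = np.stack([m.decision_function(X_test) for m in models])

# L_infinity rule: for each test point, follow the model with the
# largest absolute margin; the sign of that margin gives the label.
best = np.argmax(np.abs(margins), axis=0)
y_pred = np.sign(margins[best, np.arange(len(X_test))])
print(y_pred)
```

Other L_p norms mentioned in the abstract would instead aggregate the per-model margins (e.g. a weighted combination) rather than taking the single maximizer.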

EPrint Type: Article
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 5251
Deposited By: Zakria Hussain
Deposited On: 24 March 2009