PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Prediction with the SVM using test point margins
Sureyya Ozogur, Zakria Hussain and John Shawe-Taylor
In: Annals of Information Systems, Special Issue on Data Mining, Integrated Series in Information Systems. Springer, 2008. ISSN 1934-3221.


Support vector machines (SVMs) carry out binary classification by constructing a maximal margin hyperplane between the two classes of observed (training) examples and then classifying test points according to the half-spaces in which they reside (irrespective of the distances that may exist between the test examples and the hyperplane). Cross-validation seeks the single SVM model, together with its optimal parameters, that minimizes the training error and generalizes well to future data. In contrast, in this paper we collect all of the models found in the model selection phase and make predictions according to the model whose hyperplane achieves the maximum separation from a test point. This directly corresponds to the L∞ norm for choosing SVM models at the testing stage. Furthermore, we also investigate other more general techniques corresponding to different Lp norms and show how these methods allow us to avoid the complex and time-consuming paradigm of cross-validation. Experimental results demonstrate this advantage, showing significant decreases in computational time as well as competitive generalization error.
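The L∞ selection rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each candidate model from the model selection phase is summarized by a linear hyperplane (w, b), and for each test point it predicts with the model whose hyperplane lies farthest from that point (the geometric margin |w·x + b| / ||w||).

```python
import numpy as np

def linf_predict(models, X):
    """Predict each test point's label using the candidate model whose
    hyperplane achieves the maximum separation from that point.

    models: list of (w, b) pairs, each a trained linear decision
            function f(x) = w.x + b (hypothetical stand-ins for the
            SVM models collected during model selection).
    X:      array of shape (n_points, n_features).
    Returns an array of +1/-1 labels, one per test point.
    """
    X = np.asarray(X, dtype=float)
    # Geometric margin of every model at every test point:
    # |w.x + b| / ||w||, shape (n_models, n_points).
    margins = np.stack(
        [np.abs(X @ w + b) / np.linalg.norm(w) for w, b in models]
    )
    # Signed predictions of every model, same shape.
    preds = np.stack([np.sign(X @ w + b) for w, b in models])
    # L-infinity rule: per test point, take the model with the
    # largest margin and use its predicted label.
    best = np.argmax(margins, axis=0)
    return preds[best, np.arange(X.shape[0])]

# Two toy hyperplanes: one splits on the first coordinate, one on the second.
models = [(np.array([1.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 0.0)]
X = np.array([[2.0, -0.5],   # far from model 0's plane -> model 0 decides: +1
              [-0.1, -3.0]]) # far from model 1's plane -> model 1 decides: -1
print(linf_predict(models, X))
```

An Lp variant would replace the hard argmax with a margin-weighted combination of the models' votes; the abstract leaves the exact weighting to the paper, so it is not reproduced here.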

EPrint Type: Book Section
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Learning/Statistics & Optimisation
ID Code: 4781
Deposited By: Sureyya Ozogur
Deposited On: 24 March 2009