## Abstract

We propose large margin classifiers that are sometimes better than Support Vector Machines (SVMs) for high-dimensional classification problems with limited numbers of training samples. The basic idea is to approximate each class with a convex model of some form based on its training samples. For any pair of models of this form, there is a corresponding linear classifier that maximizes the margin between the models, and this can be found efficiently by solving a convex program that finds the closest pair of points in the two sets. If the classes are modeled with the convex hulls of their samples the result is the standard SVM, but many other convex models are possible. As examples we investigate maximum margin classifiers based on affine hull and bounding hyperdisk models. These methods can also be kernelized by working in orthonormal coordinates on the subspace of feature space spanned by the training samples. We compare the resulting margin-between-convex-model methods to SVM and to the corresponding nearest-convex-model classifiers on several data sets, showing that they are often competitive with these well-established approaches.
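As a minimal illustration of the closest-pair-of-points view described above, the sketch below recovers the hard-margin SVM case by finding the closest points between the convex hulls of two sample sets and bisecting them with a hyperplane. The function name and the use of SciPy's general-purpose SLSQP solver (rather than a dedicated QP solver) are illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def max_margin_between_hulls(X1, X2):
    """Closest pair of points between conv(X1) and conv(X2); the
    hyperplane bisecting that pair is the maximum-margin separator
    (the hard-margin SVM when the convex models are the hulls)."""
    n1, n2 = len(X1), len(X2)

    def objective(z):
        # squared distance between the two convex combinations
        d = z[:n1] @ X1 - z[n1:] @ X2
        return d @ d

    # each weight vector must lie on the probability simplex
    cons = [
        {"type": "eq", "fun": lambda z: z[:n1].sum() - 1.0},
        {"type": "eq", "fun": lambda z: z[n1:].sum() - 1.0},
    ]
    z0 = np.concatenate([np.full(n1, 1.0 / n1), np.full(n2, 1.0 / n2)])
    res = minimize(objective, z0, bounds=[(0.0, 1.0)] * (n1 + n2),
                   constraints=cons, method="SLSQP")
    p1, p2 = res.x[:n1] @ X1, res.x[n1:] @ X2
    w = p1 - p2                    # normal along the closest pair
    b = -w @ (p1 + p2) / 2.0       # offset placing the plane midway
    return w, b

# toy linearly separable 2-D classes (hypothetical data)
X_pos = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0]])
X_neg = np.array([[-2.0, -1.0], [-1.5, -2.0], [-3.0, -2.5]])
w, b = max_margin_between_hulls(X_pos, X_neg)
print(np.sign(X_pos @ w + b), np.sign(X_neg @ w + b))
```

Swapping the simplex constraints for a different convex feasible set (e.g. an affine hull, with weights summing to one but unbounded, or a bounding hyperdisk) would yield the other margin-between-convex-model classifiers the abstract mentions.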