PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Non-sparse Multiple Kernel Learning
Marius Kloft, Ulf Brefeld, Pavel Laskov and Sören Sonnenburg
In: NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, December 2008, Whistler, Canada.


Approaches to multiple kernel learning (MKL) employ L1-norm constraints on the mixing coefficients to promote sparse kernel combinations. When features encode orthogonal characterizations of a problem, sparseness may discard useful information and thus degrade generalization performance. We study non-sparse multiple kernel learning by imposing an L2-norm constraint on the mixing coefficients. Empirically, L2-norm MKL proves robust against noisy and redundant feature sets and, at large scale, significantly improves the promoter detection rate compared to L1-norm and canonical MKL.
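The contrast between the two constraints can be sketched as follows. This is a hypothetical illustration, not the authors' algorithm: the per-kernel relevance scores and all function names are made up, and a real MKL solver would learn the weights jointly with the classifier. The point is only that maximizing a linear objective over the L1 ball puts all mass on one kernel, while the L2 ball yields non-sparse weights proportional to each kernel's score.

```python
import numpy as np

def mix_l1(scores):
    # Under ||theta||_1 <= 1, theta >= 0, a linear objective is maximized
    # at a vertex of the simplex: all weight goes to the best kernel.
    theta = np.zeros_like(scores, dtype=float)
    theta[np.argmax(scores)] = 1.0
    return theta

def mix_l2(scores):
    # Under ||theta||_2 <= 1, theta >= 0, the maximizer is the score
    # vector rescaled to unit L2 norm: every kernel keeps some weight.
    s = np.maximum(np.asarray(scores, dtype=float), 0.0)
    return s / np.linalg.norm(s)

def combine(kernels, theta):
    # Mixed kernel K = sum_m theta_m * K_m.
    return sum(t * K for t, K in zip(theta, kernels))

rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 5))          # two feature groups,
X2 = rng.standard_normal((20, 3))          # one linear kernel each
kernels = [X1 @ X1.T, X2 @ X2.T]
scores = np.array([0.6, 0.4])              # made-up relevance scores

theta_l1 = mix_l1(scores)                  # sparse: [1., 0.]
theta_l2 = mix_l2(scores)                  # non-sparse, unit L2 norm
K = combine(kernels, theta_l2)             # mixed kernel matrix
```

Here the L1 solution discards the second kernel entirely, whereas the L2 solution keeps both, which is the behavior the abstract argues for when feature sets encode complementary information.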

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Learning/Statistics & Optimisation
ID Code: 4977
Deposited By: Ulf Brefeld
Deposited On: 24 March 2009