More efficiency in multiple kernel learning
Alain Rakotomamonjy, Stéphane Canu and Yves Grandvalet
INSA Rouen, Rouen, France.
Recently, an efficient and general multiple kernel learning (MKL) algorithm has been proposed. This approach has opened new perspectives, since it makes MKL tractable for large-scale problems. However, this iterative algorithm needs several iterations before converging to a reasonable solution. In this work, we address the MKL problem through an adaptive 2-norm regularization formulation. Weights on each kernel matrix are included in the standard SVM empirical risk minimization problem, and their sparsity is enforced by means of an appropriate ℓ1 constraint (a sketch of this formulation is given below). We propose an algorithm for solving this problem and provide a new insight into MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent. Experimental results show that the resulting algorithm converges rapidly and that its efficiency compares favorably with that of the state-of-the-art multiple kernel learning algorithm.
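As an illustration only, the following is a sketch of the kind of adaptive 2-norm formulation described above, written in standard MKL notation that we assume here rather than take from the paper: per-kernel functions $f_m$ in RKHS $\mathcal{H}_m$, kernel weights $d_m$, slack variables $\xi_i$, and trade-off parameter $C$.

% Hedged sketch of an adaptive 2-norm MKL problem with an l1 (simplex)
% constraint on the kernel weights d_m; all symbols are assumed notation.
\begin{align*}
\min_{\{f_m\},\, b,\, \xi,\, d} \quad &
  \frac{1}{2} \sum_{m=1}^{M} \frac{1}{d_m}\, \| f_m \|_{\mathcal{H}_m}^{2}
  \;+\; C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad &
  y_i \Big( \sum_{m=1}^{M} f_m(x_i) + b \Big) \;\ge\; 1 - \xi_i,
  \qquad \xi_i \ge 0, \\
& \sum_{m=1}^{M} d_m = 1, \qquad d_m \ge 0 .
\end{align*}

Under this sketch, minimizing over the weights $d_m$ for fixed $\{f_m\}$ gives $d_m \propto \|f_m\|_{\mathcal{H}_m}$, so the regularizer reduces to $\big(\sum_m \|f_m\|_{\mathcal{H}_m}\big)^2$, a block 1-norm penalty, which is consistent with the equivalence stated in the abstract.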