SVM Multi-Task Learning and Non-convex Sparsity Measure
Recently, there has been considerable interest in the multi-task learning (MTL) problem under the constraint that tasks share a common set of features. Such a problem can be addressed through a regularization framework in which the regularizer induces a joint-sparsity pattern across task decision functions. We follow this principled framework but focus on lp − l2 (with p ≤ 1) mixed norms as sparsity-inducing penalties. After showing that the l1 − l2 MTL problem is a generalization of Multiple Kernel Learning (MKL), we adapt the efficient tools available for solving MKL to the sparse MTL problem. Then, for the more general case p < 1, a DC (difference-of-convex) program provides an iterative scheme that solves, at each iteration, a weighted l1 − l2 sparse MTL problem.
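The iterative scheme described above can be sketched in a minimal form. The sketch below is an assumption-laden illustration, not the paper's algorithm: it replaces the SVM hinge loss with a squared loss shared across tasks so that each weighted l1 − l2 subproblem can be solved by proximal gradient descent, and it uses a simple smoothing constant `eps` when re-linearizing the concave part of the lp − l2 penalty (all names and parameter choices here are hypothetical).

```python
import numpy as np

def group_soft_threshold(W, thresholds):
    """Feature-wise soft-thresholding: proximal operator of the
    weighted l1-l2 mixed norm (rows of W are grouped across tasks)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresholds[:, None] / np.maximum(norms, 1e-12))
    return scale * W

def reweighted_mtl(X, Y, lam=0.5, p=0.5, n_outer=10, n_inner=200, eps=1e-8):
    """DC scheme for the lp-l2 (p < 1) sparse MTL penalty: each outer
    step solves a weighted l1-l2 subproblem (here with squared loss,
    via proximal gradient), with weights obtained by linearizing the
    concave penalty around the previous iterate."""
    n, d = X.shape
    T = Y.shape[1]                       # number of tasks (shared design X)
    W = np.zeros((d, T))
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    weights = np.ones(d)                 # first pass: plain l1-l2 problem
    for _ in range(n_outer):
        for _ in range(n_inner):
            grad = X.T @ (X @ W - Y)
            W = group_soft_threshold(W - step * grad, step * lam * weights)
        # re-linearize the concave part: d/dt (t^p) = p * t^(p-1)
        weights = p * (np.linalg.norm(W, axis=1) + eps) ** (p - 1)
    return W
```

Note how the reweighting drives the scheme: rows whose l2 norm shrank in the previous subproblem receive large weights (since p − 1 < 0) and are pushed harder toward zero, which is what makes the p < 1 penalty a sharper sparsity measure than the convex l1 − l2 case.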