PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Multi-Task Feature Learning
Andreas Argyriou, Theodoros Evgeniou and Massimiliano Pontil
In: Neural Information Processing Systems, 4-7 Dec 2006, Vancouver, Canada.

Abstract

We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common to all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select, rather than learn, a few common features across the tasks.
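
The alternating scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes a squared loss, linear task functions, and the trace-normalized matrix-square-root update for the shared matrix D familiar from the published convex formulation; the names (Xs, ys, gamma, eps, n_iters) and the eps-perturbation keeping D invertible are choices made here for the sketch.

    import numpy as np

    def multitask_feature_learning(Xs, ys, gamma=1.0, eps=1e-6, n_iters=50):
        """Alternating-minimization sketch for shared-feature multi-task regression.

        Xs, ys : lists of per-task design matrices (n_t x d) and target vectors (n_t,)
        gamma  : regularization strength
        eps    : small perturbation keeping the shared matrix D invertible (assumption)
        """
        d = Xs[0].shape[1]
        T = len(Xs)
        D = np.eye(d) / d                    # start from the isotropic shared metric
        W = np.zeros((d, T))
        for _ in range(n_iters):
            D_inv = np.linalg.inv(D)
            # Supervised step: per-task ridge-style solve in the metric given by D.
            for t, (X, y) in enumerate(zip(Xs, ys)):
                W[:, t] = np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)
            # Unsupervised step: update the shared matrix D from the task weights.
            C = W @ W.T + eps * np.eye(d)
            vals, vecs = np.linalg.eigh(C)
            sqrtC = (vecs * np.sqrt(vals)) @ vecs.T   # matrix square root of C
            D = sqrtC / np.trace(sqrtC)               # trace-normalized update
        return W, D

Read this way, the eigenvectors of D with non-negligible eigenvalues play the role of the learned features shared across tasks; restricting D to be diagonal would correspond to the special case of selecting, rather than learning, a few common features.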

EPrint Type: Conference or Workshop Item (Poster)
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics; Learning/Statistics & Optimisation
ID Code: 2467
Deposited By: Andreas Argyriou
Deposited On: 22 November 2006