PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

An Efficient Projection for L1,Infinity Regularization
Ariadna Quattoni, Xavier Carreras, Michael Collins and Trevor Darrell
In: ICML 2009, 6-7 June 2009, Montreal, Canada.

Abstract

In recent years the l1,∞ norm has been proposed for joint regularization. In essence, this type of regularization extends the l1 framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of l1,∞-regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the l1,∞ ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that l1,∞ leads to better performance than both l2 and l1 regularization, and that it is effective in discovering jointly sparse solutions.
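For intuition, here is a minimal sketch of the projected-gradient pattern the abstract describes; it is not the paper's own implementation. In the single-task case the l1,∞ ball reduces to the ordinary l1 ball, so the sketch uses the well-known O(n log n) sorting-based l1 projection (Duchi et al., 2008) inside a projected gradient loop. The least-squares objective, step size, and function names are illustrative assumptions; the paper's contribution is the analogous O(n log n) projection onto the l1,∞ ball, where the norm of a parameter matrix W is Σ_j max_i |W_ij|.

    import numpy as np

    def project_l1_ball(v, c=1.0):
        # Euclidean projection of v onto {x : ||x||_1 <= c} via sorting,
        # O(n log n) time and O(n) memory (Duchi et al., 2008). This is the
        # single-task analogue of the l1,inf projection derived in the paper.
        if np.abs(v).sum() <= c:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]                 # magnitudes, descending
        css = np.cumsum(u)
        idx = np.arange(1, v.size + 1)
        rho = np.nonzero(u * idx > css - c)[0][-1]   # last coordinate kept active
        theta = (css[rho] - c) / (rho + 1.0)         # soft-threshold level
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def projected_gradient(X, y, c=1.0, step=0.01, iters=500):
        # Projected gradient for least squares subject to ||w||_1 <= c:
        # gradient step on the smooth loss, then projection back onto the ball.
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            grad = X.T @ (X @ w - y)
            w = project_l1_ball(w - step * grad, c)
        return w

In the paper's multi-task setting the parameters form a matrix, the constraint couples tasks through the per-feature maxima, and the projection is computed by a similar sort-and-threshold scheme; the vector sketch above is only meant to convey the structure of the method.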

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Learning/Statistics & Optimisation; Theory & Algorithms
ID Code: 5653
Deposited By: Xavier Carreras
Deposited On: 08 March 2010