PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Stochastic Methods for ℓ1-regularized Loss Minimization
Shai Shalev-Shwartz and Ambuj Tewari
Journal of Machine Learning Research, 2011.

Abstract

We describe and analyze two stochastic methods for ℓ1-regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration, while the second method updates the entire weight vector but uses only a single training example at each iteration. In both methods, the choice of feature or example is made uniformly at random. Our theoretical runtime analysis suggests that the stochastic methods should outperform state-of-the-art deterministic approaches, including their deterministic counterparts, when the size of the problem is large. We demonstrate the advantage of stochastic methods by experimenting with synthetic and natural data sets.
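To make the first method concrete, the sketch below shows a stochastic coordinate descent step for the Lasso: at each iteration a single feature is picked uniformly at random and its weight is updated by a soft-thresholded partial-gradient step. This is a minimal illustration only, not the paper's exact algorithm; the function names, the per-coordinate step sizes derived from column norms, and the synthetic-data usage are assumptions made here for demonstration.

import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: proximal map of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_coordinate_descent_lasso(X, y, lam, n_iters=10000, seed=0):
    # Minimize (1/(2m)) * ||X w - y||^2 + lam * ||w||_1 by updating one
    # uniformly random coordinate of w per iteration (illustrative sketch).
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    residual = X @ w - y                  # kept up to date incrementally
    col_norms = (X ** 2).sum(axis=0) / m  # per-coordinate curvature constants
    for _ in range(n_iters):
        j = rng.integers(d)               # feature chosen uniformly at random
        if col_norms[j] == 0.0:
            continue
        grad_j = X[:, j] @ residual / m   # partial derivative of the smooth part
        w_j_new = soft_threshold(w[j] - grad_j / col_norms[j], lam / col_norms[j])
        residual += (w_j_new - w[j]) * X[:, j]
        w[j] = w_j_new
    return w

# Tiny usage example on synthetic sparse-regression data (hypothetical setup).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 50))
    w_true = np.zeros(50)
    w_true[:5] = 1.0
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    w_hat = stochastic_coordinate_descent_lasso(X, y, lam=0.1)
    print(np.round(w_hat[:8], 3))

Because each iteration touches only one column of X and the residual is updated in place, the per-iteration cost is O(m) rather than O(md), which is the kind of saving the abstract's runtime argument relies on.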

EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 8908
Deposited By: Shai Shalev-Shwartz
Deposited On: 21 February 2012