Stochastic Methods for $\ell_1$ Regularized Loss Minimization
Shai Shalev-Shwartz and Ambuj Tewari
In: ICML 2009, 15-17 Jun 2009, Montreal.
We describe and analyze two stochastic methods for $\ell_1$
regularized loss minimization problems, such as the Lasso. The
first method updates the weight of a single feature at each
iteration while the second method updates the entire weight vector
but uses only a single training example at each iteration. In both
methods, the feature or example is chosen uniformly at random. Our
theoretical runtime analysis suggests that the stochastic methods
should outperform state-of-the-art deterministic approaches,
including their deterministic counterparts, when the size of the
problem is large. We demonstrate the advantage of stochastic methods
by experimenting with synthetic and natural data sets.
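To make the first method concrete, here is a minimal sketch of randomized coordinate descent for the Lasso, $\min_w \frac{1}{2m}\|Xw - y\|_2^2 + \lambda\|w\|_1$: at each iteration one feature $j$ is drawn uniformly at random and only $w_j$ is updated, via a soft-thresholding step. This is an illustration of the idea rather than the paper's exact pseudocode, and all names (`scd_lasso`, `beta`, etc.) are our own.

```python
# Illustrative sketch (not the paper's exact algorithm): stochastic
# coordinate descent for the Lasso. One feature is picked uniformly at
# random per iteration and updated by soft-thresholding.
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t (the prox operator of t * |.|)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def scd_lasso(X, y, lam, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, d = X.shape
    # Per-coordinate curvature constants beta_j = ||X_j||^2 / m.
    beta = (X ** 2).sum(axis=0) / m
    w = np.zeros(d)
    residual = -y.astype(float).copy()  # maintains Xw - y incrementally
    for _ in range(iters):
        j = rng.integers(d)             # feature chosen uniformly at random
        if beta[j] == 0.0:
            continue
        g = X[:, j] @ residual / m      # partial derivative of the loss
        new_wj = soft_threshold(w[j] - g / beta[j], lam / beta[j])
        residual += (new_wj - w[j]) * X[:, j]
        w[j] = new_wj
    return w
```

Each iteration touches a single column of the data, which is what makes the per-iteration cost independent of the number of features and underlies the favorable runtime comparison for large problems.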