PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

A quasi-Newton approach to nonsmooth convex optimization
Jin Yu, S. V. N. Vishwanathan, Simon Günter and Nicol N. Schraudolph
In: 25th International Conference on Machine Learning (ICML 2008), 5-9 July 2008, Helsinki, Finland.

Abstract

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting sub(L)BFGS algorithm to L2-regularized risk minimization with the binary hinge loss, and its direction-finding component to L1-regularized risk minimization with the logistic loss. In both settings our generic algorithms perform comparably to, or better than, their counterparts in specialized state-of-the-art solvers.
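
To make the problem setting concrete, the sketch below illustrates the kind of nonsmooth objective treated in the paper's first set of experiments: L2-regularized risk minimization with the binary hinge loss, together with one element of its subdifferential. This is not the authors' sub(L)BFGS code; the function and variable names (w, X, y, lam) are illustrative.

import numpy as np

def l2_hinge_objective(w, X, y, lam):
    """L2-regularized empirical risk with the binary hinge loss:
    J(w) = (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>).
    J is convex but nonsmooth wherever some margin y_i * <w, x_i> equals 1."""
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * lam * np.dot(w, w) + hinge.mean()

def l2_hinge_subgradient(w, X, y, lam):
    """One element of the subdifferential of J at w: examples with margin < 1
    contribute -y_i * x_i / n; at margin == 1 any convex combination of 0 and
    -y_i * x_i / n is valid, and this sketch arbitrarily picks 0 there."""
    margins = y * (X @ w)
    active = margins < 1.0                      # strictly violated margins
    g_hinge = -(X[active].T @ y[active]) / len(y)
    return lam * w + g_hinge

# Tiny usage example on random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = np.sign(rng.standard_normal(20))
w = np.zeros(5)
print(l2_hinge_objective(w, X, y, lam=0.1))
print(l2_hinge_subgradient(w, X, y, lam=0.1))

A standard (sub)gradient-based method applied to such an objective must cope with the fact that the subdifferential is set-valued at the kinks; handling this rigorously within the BFGS framework is what the abstract's three generalizations address.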

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Learning/Statistics & Optimisation
ID Code: 4440
Deposited By: Jin Yu
Deposited On: 24 March 2009