Learning Efficiently with Approximate Inference via Dual Losses
Ofer Meshi, David Sontag, Tommi Jaakkola and Amir Globerson
In: 27th International Conference on Machine Learning (ICML) 2010, 21-24 June 2010, Haifa, Israel.
Many structured prediction tasks involve complex models in which exact inference is computationally intractable but can be well approximated using a linear programming (LP) relaxation. Previous approaches to learning structured predictors (e.g., cutting-plane, subgradient methods, perceptron) repeatedly make predictions on some of the data points. These approaches are computationally demanding because each prediction requires solving a linear program to optimality.
We present a scalable algorithm for learning structured predictors.
The main idea is instead to solve the dual of the structured prediction loss. We formulate the learning task as a joint convex minimization over the weights and the dual variables corresponding to each data point. As a result, we can begin to optimize the weights even before any individual prediction problem has been solved to optimality. We show how the dual variables can be optimized efficiently using coordinate descent.
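The interleaving described above can be sketched on a toy model: a two-node pairwise MRF per example, decomposed into two "slave" subproblems whose disagreement is controlled by a dual vector `lam`. The decomposition, the featurization, and all names here are illustrative assumptions rather than the paper's actual formulation; the block update on `lam` averages the slaves' max-marginals, a standard closed-form coordinate-descent step for this kind of dual, and the weight step uses whatever duals are current instead of a fully solved prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 3, 2  # labels per node, feature dimension (toy sizes)

# Per-example model: nodes y1, y2 with a fixed pairwise potential and
# unary potentials linear in the weights w (a minimal stand-in).
theta12 = rng.normal(size=(L, L))

def unaries(w, x):
    # theta_i(y) = w[i, y] . x  (hypothetical featurization)
    return np.einsum('iyd,d->iy', w, x)

def slaves(th1, th2, lam):
    # Dual decomposition: slave A = node 1 plus the edge (over y1, y2);
    # slave B = node 2 alone. lam(y2) reweights the shared variable.
    A = th1[:, None] + theta12 - lam[None, :]
    return A, A.max(axis=0), th2 + lam  # table, max-marginals, slave B

def dual_bound(th1, th2, lam):
    # Upper bound on max_{y1,y2} of the full score, for ANY lam.
    _, mA, mB = slaves(th1, th2, lam)
    return mA.max() + mB.max()

def coord_step(th1, th2, lam):
    # Exact block-coordinate minimization over lam: average the slaves'
    # max-marginals. Monotonically decreases the bound.
    _, mA, mB = slaves(th1, th2, lam)
    return lam + 0.5 * (mA - mB)

def learning_step(w, x, y, lam, lr=0.05):
    # One interleaved update: a coordinate-descent pass on this example's
    # dual variables, then a subgradient step on w using the current,
    # possibly suboptimal duals. No prediction is solved to optimality.
    th = unaries(w, x)
    th += (np.arange(L)[None, :] != np.array(y)[:, None])  # Hamming loss
    lam = coord_step(th[0], th[1], lam)
    A, mA, mB = slaves(th[0], th[1], lam)
    y1_hat = int(A.max(axis=1).argmax())  # slave A's argmax over y1
    y2_hat = int(mB.argmax())             # slave B's argmax over y2
    g = np.zeros_like(w)
    g[0, y1_hat] += x; g[0, y[0]] -= x  # d(bound)/dw via slave A
    g[1, y2_hat] += x; g[1, y[1]] -= x  # d(bound)/dw via slave B
    return w - lr * g, lam
```

In this tiny two-slave decomposition a single block update already closes the gap to the exact max, so the bound becomes tight after one `coord_step`; in larger graphs many passes would be needed, which is exactly where interleaving dual updates with weight updates pays off.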
Our algorithm is competitive with state-of-the-art methods such as stochastic subgradient and cutting-plane methods.