## Abstract

We derive a general Convex Linearly Constrained Program (CLCP) parameterized by a matrix $\G$, constructed from the information given by the input-output pairs. The CLCP then chooses a set of regularization and loss functions in order to impose constraints for the learning task. We show that several algorithms, including the SVM, LPBoost, and Ridge Regression, can be solved within the same optimization framework when the appropriate choices of $\G$, regularization, and loss function are made. Owing to this unification, we show that if $\G$ is constructed from more complex input-output paired information, then we can solve more difficult problems, such as structured output learning, with the same complexity as a regression/classification problem. We discuss various forms of $\G$ and then show, on real-world enzyme prediction tasks requiring structured outputs, that our method performs as well as the state-of-the-art.