Learning the Coordinate Gradients
In this paper we study the problem of learning the gradient function, with applications to variable selection and the determination of variable covariation. Firstly, we propose a novel unifying framework for coordinate gradient learning from the perspective of multi-task learning; various variable selection algorithms can be regarded as special instances of this framework. Secondly, we formulate the dual problems of gradient learning with general loss functions. This enables the direct application of standard optimization toolboxes to gradient learning: for instance, gradient learning with the SVM loss can be solved by quadratic programming (QP) routines. Thirdly, we propose a novel gradient learning algorithm which can be cast as a kernel matrix learning problem; its relation to sparse regularization is highlighted. A semi-infinite linear programming (SILP) approach and an iterative optimization approach are proposed to efficiently solve this problem. Finally, we validate the proposed approaches on both synthetic and real datasets.
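To make the core idea concrete, the following is a minimal sketch of coordinate gradient learning with a least-squares loss, not the SVM-loss, multi-task, or SILP formulations developed in the paper. It fits a single gradient vector g by penalizing first-order Taylor residuals y_j - y_i - g·(x_j - x_i) between nearby sample pairs, with a Gaussian pair weight and Tikhonov regularization; the bandwidth, regularization parameter, and the synthetic data are illustrative assumptions. Coordinates of g with large magnitude then indicate relevant variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 4
X = rng.normal(size=(n, d))
# Only coordinates 0 and 1 are relevant (assumed toy target).
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

s2 = 1.0    # Gaussian weight bandwidth (illustrative choice)
lam = 1e-3  # Tikhonov regularization parameter (illustrative choice)

# Minimize sum_{i,j} w_ij (y_j - y_i - g.(x_j - x_i))^2 + lam ||g||^2,
# which has the closed-form normal equations accumulated below.
A = lam * np.eye(d)
b = np.zeros(d)
for i in range(n):
    diffs = X - X[i]                                  # x_j - x_i for all j
    w = np.exp(-np.sum(diffs**2, axis=1) / (2 * s2))  # pair weights w_ij
    dy = y - y[i]                                     # y_j - y_i
    A += (w[:, None] * diffs).T @ diffs               # sum_j w_j d_j d_j^T
    b += diffs.T @ (w * dy)                           # sum_j w_j dy_j d_j

g = np.linalg.solve(A, b)
print(g)  # large entries at coordinates 0 and 1, near zero elsewhere
```

Because the sketch restricts g to a constant vector, it only recovers a global linear trend; the framework in the paper instead learns g as a vector-valued function in a reproducing kernel Hilbert space, which captures locally varying gradients.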