Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection (JMLR)
Koji Tsuda, Gunnar Rätsch and Manfred Warmuth
Journal of Machine Learning Research
We address the problem of learning a symmetric positive definite matrix. The central issue
is to design parameter updates that preserve positive definiteness. Our updates are motivated
by the von Neumann divergence. Rather than treating the most general case, we
focus on two key applications that exemplify our methods: on-line learning with a simple
square loss, and finding a symmetric positive definite matrix subject to linear constraints.
The updates generalize the Exponentiated Gradient (EG) update and AdaBoost, respectively:
the parameter is now a symmetric positive definite matrix of trace one instead of a
probability vector (which in this context is a diagonal positive definite matrix with trace
one). The generalized updates use matrix logarithms and exponentials to preserve positive
definiteness. Most importantly, we show how the analyses of the original EG update and
AdaBoost generalize to the non-diagonal case. We apply the new versions of both, called
the Matrix Exponentiated Gradient (MEG) update and DefiniteBoost, to learn a kernel
matrix from distance measurements.
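
As a rough illustration of the mechanism described above (not the authors' code), the following NumPy sketch performs a single MEG-style step for an on-line square loss of the form (y - tr(W x x^T))^2: the gradient step is taken in the matrix-log domain, the matrix exponential restores positive definiteness, and dividing by the trace keeps tr(W) = 1. The loss form, the learning rate `eta`, and the helper names are illustrative assumptions.

```python
import numpy as np

def _sym_logm(A):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def _sym_expm(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def meg_step(W, x, y, eta=0.1):
    """One MEG-style step for the square loss (y - tr(W x x^T))^2.
    Updating log(W) and exponentiating keeps W symmetric positive definite;
    dividing by the trace keeps tr(W) = 1.  `eta` is an illustrative choice."""
    y_hat = x @ W @ x                          # prediction tr(W x x^T)
    grad = 2.0 * (y_hat - y) * np.outer(x, x)  # gradient of the square loss w.r.t. W
    M = _sym_expm(_sym_logm(W) - eta * grad)
    return M / np.trace(M)

# Example: one on-line step starting from the uniform trace-one matrix.
d = 4
W = np.eye(d) / d
x = np.ones(d) / np.sqrt(d)
W = meg_step(W, x, y=0.5)
```

In this sketch the role of the probability vector in the original EG update is played by the eigenvalues of the trace-one matrix W, which is why the update reduces to EG when all matrices involved are diagonal.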