PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Beyond logarithmic bounds in online learning
Francesco Orabona, Nicolò Cesa-Bianchi and Claudio Gentile
In: Fifteenth International Conference on Artificial Intelligence and Statistics, April 21-23, 2012, La Palma, Canary Islands.


We prove logarithmic regret bounds that depend on the loss L_T^* of the competitor rather than on the number T of time steps. In the general online convex optimization setting, our bounds hold for any smooth and exp-concave loss (such as the square loss or the logistic loss). This bridges the gap between the O(\ln T) regret exhibited by exp-concave losses and the O(\sqrt{L_T^*}) regret exhibited by smooth losses. We also show that these bounds are tight for specific losses, and thus cannot be improved in general. For online regression with the square loss, our analysis yields a sparse randomized variant of the online Newton step whose expected number of updates scales with the algorithm's loss. For online classification, we prove the first logarithmic mistake bounds that do not rely on prior knowledge of a bound on the competitor's norm.
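For context, the online Newton step mentioned above is the standard second-order online convex optimization algorithm; the following is a minimal illustrative sketch of that standard algorithm applied to square-loss regression, not the paper's sparse randomized variant. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

def online_newton_step(stream, d, gamma=0.5, eps=1.0):
    """Minimal sketch of the standard online Newton step for
    square-loss online regression (unprojected, illustrative only;
    not the paper's sparse randomized variant).

    stream: iterable of (x, y) pairs with x in R^d, y a scalar.
    gamma, eps: illustrative step-size and regularization parameters.
    Returns the final weight vector and the cumulative square loss.
    """
    w = np.zeros(d)
    A = eps * np.eye(d)          # regularized second-order matrix
    total_loss = 0.0
    for x, y in stream:
        pred = w @ x
        total_loss += 0.5 * (pred - y) ** 2      # square loss
        g = (pred - y) * x                        # its gradient in w
        A += np.outer(g, g)                       # accumulate g g^T
        w = w - (1.0 / gamma) * np.linalg.solve(A, g)  # Newton-style step
    return w, total_loss
```

The sparse variant analyzed in the paper randomizes whether each update is performed, so that the expected number of updates scales with the algorithm's own loss; the sketch above performs every update deterministically.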

EPrint Type: Conference or Workshop Item (Paper)
Subjects: Computational, Information-Theoretic Learning with Statistics; Theory & Algorithms
ID Code: 8974
Deposited By: Claudio Gentile
Deposited On: 21 February 2012