PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

How to better use expert advice
Rani Yaroshinsky, Ran El-Yaniv and Steven S. Seiden
Machine Learning Volume 55, Number 3, pp. 271-309, 2004.

Abstract

This paper is concerned with online learning from expert advice. Extensive work on this problem has generated numerous "expert advice algorithms" whose total loss is provably bounded above in terms of the loss incurred by the best expert in hindsight. Such algorithms have been devised for various problem variants corresponding to different loss functions. For some loss functions, such as the square, Hellinger and entropy losses, optimal algorithms are known. However, for two of the most widely used loss functions, namely the 0/1 and absolute losses, there are still gaps between the known lower and upper bounds. In this paper we present two new expert advice algorithms and prove for them the best known 0/1 and absolute loss bounds. Given an expert advice algorithm $\textsc{alg}$, the goal is to form an upper bound on the regret $L_{\textsc{alg}} - L^*$ of $\textsc{alg}$, where $L_{\textsc{alg}}$ is the loss of $\textsc{alg}$ and $L^*$ is the loss of the best expert in hindsight. Typically, regret bounds of the "canonical form" $C\cdot\sqrt{L^* \ln N}$ are sought, where $N$ is the number of experts and $C$ is a constant. So far, the best known constant for the absolute loss function is $C=2.83$, achieved by the recent $\textsc{iawm}$ algorithm of Gentile (2000). For the 0/1 loss function no bounds of this canonical form are known, and the best known regret bound is $L_{\textsc{alg}} - L^* \leq L^* + C_1\ln N + C_2 \sqrt{L^* \ln N + \frac{e}{4}\ln^2 N}$, where $C_1 = e-2$ and $C_2 = 2\sqrt{e}$. This bound is achieved by a "P-norm" algorithm of Gentile and Littlestone (1999). Our first algorithm is a randomized extension of the "guess and double" algorithm of Cesa-Bianchi et al. (1997). While the guess and double algorithm achieves a canonical regret bound with $C=3.32$, the expected regret of our randomized algorithm is canonically bounded with $C=2.49$ for the absolute loss function. The algorithm makes a single random choice at the start of the game. Like the deterministic guess and double algorithm, our algorithm has the deficiency that it occasionally restarts itself and therefore "forgets" what it has learned. Our second algorithm does not forget and enjoys the best known asymptotic performance guarantees for both the absolute and 0/1 loss functions. Specifically, for the absolute loss our algorithm is canonically bounded with $C$ approaching $\sqrt{2}$, and for the 0/1 loss with $C$ approaching $3/\sqrt{2} \approx 2.12$. In the 0/1 loss case the algorithm is randomized and the bound is on the expected regret.
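
To make the protocol and the regret quantity $L_{\textsc{alg}} - L^*$ concrete, here is a minimal Python sketch of the classical exponentially weighted average forecaster under the absolute loss. It is a standard baseline, not one of the paper's algorithms; the function name, the horizon-based learning-rate tuning and the toy data are illustrative assumptions.

import math
import random

def exp_weighted_forecaster(expert_predictions, outcomes):
    """Hypothetical helper illustrating the expert advice protocol.

    expert_predictions: T rounds, each a list of N predictions in [0, 1].
    outcomes: T outcomes in [0, 1]; the loss is the absolute loss |p - y|.
    Returns (algorithm loss, best expert loss L*, regret L_alg - L*).
    """
    T, N = len(outcomes), len(expert_predictions[0])
    eta = math.sqrt(8.0 * math.log(N) / max(T, 1))  # standard tuning that assumes T is known
    weights = [1.0] * N
    alg_loss = 0.0
    expert_losses = [0.0] * N
    for t in range(T):
        total = sum(weights)
        # Predict with the weighted average of the experts' predictions.
        prediction = sum(w * p for w, p in zip(weights, expert_predictions[t])) / total
        alg_loss += abs(prediction - outcomes[t])
        # Multiplicative update: experts that suffer more loss lose more weight.
        for i, p in enumerate(expert_predictions[t]):
            loss_i = abs(p - outcomes[t])
            expert_losses[i] += loss_i
            weights[i] *= math.exp(-eta * loss_i)
    best = min(expert_losses)
    return alg_loss, best, alg_loss - best

# Toy run: N = 3 experts over T = 200 rounds of random binary outcomes.
random.seed(0)
preds = [[random.random() for _ in range(3)] for _ in range(200)]
outs = [random.randint(0, 1) for _ in range(200)]
print(exp_weighted_forecaster(preds, outs))

With losses in $[0,1]$, the well-known guarantee for this tuning is a regret of order $\sqrt{(T/2)\ln N}$; the bounds quoted in the abstract are expressed instead in terms of the best expert's loss $L^*$, which is sharper when $L^*$ is much smaller than $T$.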

EPrint Type: Article
Subjects: Theory & Algorithms
ID Code: 928
Deposited By: Ran El-Yaniv
Deposited On: 07 January 2005