
Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition
Ron Begleiter and Ran El-Yaniv
Journal of Machine Learning Research, Volume 7, pp. 379-411, 2006.

Abstract

We present worst-case bounds on the learning rate of a known prediction method based on hierarchical applications of binary Context Tree Weighting (CTW) predictors. A heuristic application of this approach that relies on Huffman's alphabet decomposition is known to achieve state-of-the-art performance in prediction and lossless compression benchmarks. We show that our new bound for this heuristic is tighter than the best known performance guarantees for prediction and lossless compression algorithms in various settings. This result substantiates the efficiency of this hierarchical method and provides a compelling explanation for its practical success. In addition, we present the results of a few experiments that examine other possibilities for improving the multi-alphabet prediction performance of CTW-based algorithms.
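The Huffman alphabet decomposition mentioned above reduces multi-alphabet prediction to a tree of binary decisions: symbols are arranged in a Huffman tree built from their frequencies, and each internal node can host a binary (CTW) predictor, so a symbol's probability is the product of binary predictions along its root-to-leaf path. The following is a minimal standard-library Python sketch of the decomposition step only; the binary CTW predictors themselves, and the frequency values used in the example, are illustrative assumptions, not taken from the paper.

```python
import heapq
import itertools

def huffman_codes(freqs):
    """Build a Huffman tree over the alphabet and return each symbol's
    bit-path. In a hierarchical binary-predictor scheme, each internal
    node of this tree would host one binary predictor, and a symbol's
    probability is the product of predictions along its path."""
    counter = itertools.count()  # tie-breaker so the heap never compares trees
    heap = [(f, next(counter), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(counter), (left, right)))
    codes = {}
    def walk(node, path):
        if isinstance(node, tuple):       # internal node: recurse both ways
            walk(node[0], path + "0")
            walk(node[1], path + "1")
        else:                             # leaf: record the symbol's bit-path
            codes[node] = path or "0"
    walk(heap[0][2], "")
    return codes

# Hypothetical skewed 4-symbol alphabet: frequent symbols get shorter
# paths, so fewer binary predictors are consulted for them.
codes = huffman_codes({"a": 50, "b": 25, "c": 15, "d": 10})
```

Because the decomposition is prefix-free, the per-node binary predictions compose into a proper distribution over the full alphabet, which is what makes the hierarchical reduction well defined.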

EPrint Type: Article
Subjects: Computational, Information-Theoretic Learning with Statistics; Theory & Algorithms
ID Code: 1499
Deposited By: Ran El-Yaniv
Deposited On: 28 November 2005