Learning by mirror averaging.
Anatoli Juditsky, Philippe Rigollet and Alexandre Tsybakov
Annals of Statistics
Given a finite collection of estimators or classifiers, we study
the problem of model selection type aggregation, i.e., we construct
a new estimator or classifier, called aggregate, which is nearly as
good as the best among them with respect to a given risk criterion.
We define our aggregate by a simple recursive procedure that solves
an auxiliary stochastic linear programming problem related to the
original nonlinear one; this procedure is a special case of the
mirror averaging algorithm. We show that the aggregate satisfies sharp
oracle inequalities under some general assumptions. The results are
applied to several problems including regression, classification and