Most methods for decision-theoretic online learning are based on the Hedge algorithm, which takes a parameter called the learning rate. In most previous analyses the learning rate was carefully tuned to obtain optimal worst-case performance, leading to suboptimal performance on easy instances, for example when there exists an action that is significantly better than all others. We propose a new way of setting the learning rate, which adapts to the difficulty of the learning problem: in the worst case our procedure still guarantees optimal performance, but on easy instances it achieves much smaller regret. In particular, our adaptive method achieves constant regret in a probabilistic setting, when there exists an action that on average obtains strictly smaller loss than all other actions. We also provide a simulation study comparing our approach to existing methods.
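For context, here is a minimal sketch of the standard Hedge (exponential weights) algorithm with a fixed learning rate, the baseline the abstract refers to. This is not the paper's adaptive method: the parameter `eta` is the hand-tuned learning rate discussed above, the loss matrix and function names are illustrative, and the adaptive procedure would instead set the learning rate from the data.

```python
import numpy as np

def hedge(losses, eta):
    """Run Hedge with fixed learning rate eta on a (T, K) loss matrix.

    Returns the algorithm's cumulative (expected) loss and the
    cumulative loss of the best fixed action in hindsight; their
    difference is the regret.
    """
    T, K = losses.shape
    log_w = np.zeros(K)  # log-weights; uniform start
    alg_loss = 0.0
    for t in range(T):
        # Weights proportional to exp(-eta * cumulative loss of each action),
        # computed stably by subtracting the max log-weight before exponentiating.
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        alg_loss += p @ losses[t]   # expected loss under the current weights
        log_w -= eta * losses[t]    # exponential-weights update
    best_loss = losses.sum(axis=0).min()
    return alg_loss, best_loss
```

With losses in [0, 1], the classical worst-case tuning sets eta on the order of sqrt(ln(K) / T), which requires knowing the horizon T in advance and yields regret growing like sqrt(T ln K) even on easy instances; the abstract's point is that a data-dependent choice of eta can keep the worst-case guarantee while achieving constant regret when one action is consistently best.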