We analyze the regret, measured in terms of log loss, of the maximum likelihood (ML) sequential prediction strategy. This ``follow-the-leader'' strategy also defines one of the main versions of Minimum Description Length model selection. In prior work we proved, for single-parameter exponential family models, that (a) in the misspecified case, the redundancy of follow-the-leader is \emph{not} $\frac{1}{2}\log n+O(1)$, as it is for other universal prediction strategies; as such, the strategy also yields suboptimal individual-sequence regret and inferior model selection performance; and (b) in general it is not possible to achieve the optimal redundancy when predictions are constrained to the distributions in the considered model. Here we describe a simple ``flattening'' of the sequential ML and related predictors that does achieve the optimal worst-case \emph{individual sequence} regret of $(k/2)\log n+O(1)$ for $k$-parameter exponential family models with bounded outcome spaces; for unbounded spaces, we provide almost-sure results. Simulations show a major improvement in the resulting model selection criterion.
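The contrast described above can be sketched for the Bernoulli model, the simplest single-parameter exponential family. This is only an illustrative analogue, not the paper's flattening construction: the flattened variant is stood in for by pseudo-count smoothing (the Krichevsky--Trofimov predictor), a standard device known to achieve $\frac{1}{2}\log n+O(1)$ worst-case regret, whereas plain sequential ML (follow-the-leader) can incur infinite log loss.

```python
import math

def log_loss(p, x):
    # Log loss (in nats) of predicting P(x=1) = p on the binary outcome x.
    q = p if x == 1 else 1.0 - p
    return math.inf if q <= 0.0 else -math.log(q)

def cumulative_loss(xs, pseudo):
    # Sequential Bernoulli prediction: before outcome t, predict with the
    # smoothed ML estimate (ones + pseudo) / (t + 2*pseudo).
    # pseudo = 0   -> plain follow-the-leader (sequential ML);
    # pseudo = 0.5 -> Krichevsky--Trofimov predictor (illustrative smoothing,
    #                 not the paper's flattening itself).
    ones, total, loss = 0, 0, 0.0
    for x in xs:
        denom = total + 2.0 * pseudo
        p = 0.5 if denom == 0 else (ones + pseudo) / denom
        loss += log_loss(p, x)
        ones += x
        total += 1
    return loss

def best_fixed_loss(xs):
    # Log loss of the best Bernoulli parameter chosen in hindsight.
    n, k = len(xs), sum(xs)
    p = k / n
    return -(k * math.log(p) if k else 0.0) \
           - ((n - k) * math.log(1 - p) if n - k else 0.0)

xs = [0, 1, 0, 1, 1, 0, 1, 1]
# Plain ML predicts p = 0 after the first outcome and then meets a 1,
# so its regret is infinite; the smoothed predictor stays O(log n).
ml_regret = cumulative_loss(xs, 0.0) - best_fixed_loss(xs)
kt_regret = cumulative_loss(xs, 0.5) - best_fixed_loss(xs)
```

On this sequence `ml_regret` is infinite while `kt_regret` remains below the $\frac{1}{2}\log n + O(1)$ envelope, mirroring the gap between follow-the-leader and its modified version.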