Model Selection: Beyond the Bayesian/Frequentist Divide
Isabelle Guyon, Amir Saffari, Gideon Dror and Gavin Cawley
The principle of parsimony, also known as “Ockham’s razor”, has inspired many theories of model selection. Yet such theories, all making arguments in favor of parsimony, are based on very different premises and have developed distinct methodologies to derive algorithms. We have organized challenges and edited a special issue of JMLR and several conference proceedings around the theme of model selection. In this editorial, we revisit the problem of avoiding overfitting in light of the latest results. We note the remarkable convergence, in some approaches, of theories as different as Bayesian theory, Minimum Description Length, bias/variance tradeoff, Structural Risk Minimization, and regularization. We also present new and interesting examples of the complementarity of theories, leading to hybrid algorithms that are neither frequentist nor Bayesian, or perhaps both frequentist and Bayesian!