Computable Bayesian Compression for Uniformly Discretizable Statistical Models
Supplementing Vovk and V’yugin’s ‘if’ statement, we show that Bayesian compression provides the best enumerable compression for parameter-typical data if and only if the parameter is Martin-Löf random with respect to the prior. The result is derived for uniformly discretizable statistical models, introduced here. They feature the crucial property that, given a discretized parameter, we can compute how much data is needed to learn its value with little uncertainty. Exponential families and certain nonparametric models are shown to be uniformly discretizable.
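For orientation, a minimal sketch of the Bayesian mixture code that "Bayesian compression" refers to; the notation below is illustrative and not necessarily the paper's, assuming a parametric family $\{P_\theta\}$ with prior $\pi$ over the parameter:
\[
  P_\pi(x^{1:n}) \;=\; \int P_\theta(x^{1:n})\, \pi(\mathrm{d}\theta),
  \qquad
  L_\pi(x^{1:n}) \;=\; -\log_2 P_\pi(x^{1:n}).
\]
% L_\pi is the Bayesian code length; the abstract's claim compares it with
% the code length induced by the best (universal) enumerable semimeasure on
% data typical for P_\theta, and states that the two are comparable exactly
% when \theta is Martin-Löf random with respect to \pi.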