Improving Cox survival analysis with a neural-Bayesian approach
In this article we show that traditional Cox survival analysis can be improved upon when supplemented with sensible priors and analysed within a neural-Bayesian framework. We demonstrate that the Bayesian method gives more reliable predictions, in particular for relatively small data sets. The obtained posterior (the probability distribution of the network parameters given the data), which is intractable in itself, can be made accessible by several approximations. We review approximations by hybrid Markov chain Monte Carlo sampling, a variational method, and the Laplace approximation. We argue that although each Bayesian approach circumvents the shortcomings of the original Cox analysis, and therefore yields better predictive results, in practice the use of variational methods or the Laplace approximation is preferable. Since Cox survival analysis is infamous for its poor results with (too) many inputs, we use the Bayesian posterior to estimate p-values on the inputs and to formulate an algorithm for backward elimination. We show that after removal of irrelevant inputs the Bayesian methods still achieve significantly better results than classical Cox analysis.
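As a reminder of the classical baseline being improved upon, the Cox model estimates its coefficients by maximizing the partial likelihood over the observed event times. The following is a minimal sketch of that objective (the function name and toy interface are illustrative, not from the paper):

```python
import numpy as np

def cox_partial_loglik(beta, X, times, events):
    """Cox partial log-likelihood.

    beta:   (p,) coefficient vector
    X:      (n, p) covariate matrix
    times:  (n,) observed survival or censoring times
    events: (n,) 1 if the event was observed, 0 if censored
    """
    eta = X @ beta  # linear predictor for each subject
    ll = 0.0
    for i in np.where(events == 1)[0]:
        # Risk set: subjects still under observation at event time t_i.
        risk = times >= times[i]
        ll += eta[i] - np.log(np.exp(eta[risk]).sum())
    return ll
```

At `beta = 0` every subject contributes equally, so the log-likelihood reduces to minus the sum of the logarithms of the risk-set sizes, which makes the function easy to sanity-check on toy data.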