Bayes factor

The Bayes factor is the ratio of the evidence (i.e., the marginal likelihoods) of two competing statistical models, and is used to quantify the support for one model over the other.[1] The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog of the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, the two quantities coincide only under simple hypotheses (e.g., two specific parameter values).[2] Also, in contrast with null hypothesis significance testing, Bayes factors support the evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.[3]
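Written out explicitly (the notation below is standard, not drawn from a particular source cited here): for observed data D and two models M_0 and M_1 with parameter vectors theta_0 and theta_1 and priors pi, the Bayes factor in favor of M_1 is

    BF_{10} = \frac{p(D \mid M_1)}{p(D \mid M_0)}
            = \frac{\int p(D \mid \theta_1, M_1)\,\pi(\theta_1 \mid M_1)\,d\theta_1}
                   {\int p(D \mid \theta_0, M_0)\,\pi(\theta_0 \mid M_0)\,d\theta_0},

where each integral is the marginal likelihood (evidence) of the corresponding model. Under simple hypotheses each model has no free parameters, the integrals collapse to single likelihood values, and the ratio reduces to the classical likelihood ratio.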

Although conceptually simple, computing the Bayes factor can be challenging, depending on the complexity of the model and the hypotheses.[4] Since closed-form expressions for the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested.[5] For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality-constrained) hypothesis against an unrestricted alternative.[6][7] Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is the Bayesian information criterion (BIC);[8] in large data sets the Bayes factor approaches the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper, since the Bayes factor is undefined if either of the two integrals in its ratio is not finite.
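As a concrete illustration of the Savage–Dickey density ratio, the sketch below computes the Bayes factor for a binomial point null. The model, prior, and data are hypothetical choices made for this example, not taken from the sources cited above: H0 fixes the success probability at theta = 0.5, while H1 places a Beta(1, 1) prior on it, so the posterior is conjugate and both the density ratio and the marginal likelihoods are available in closed form.

    # A minimal sketch of the Savage–Dickey density ratio, assuming a
    # binomial likelihood, H0: theta = 0.5, and H1: theta ~ Beta(1, 1).
    # All numbers are hypothetical illustrations.
    import numpy as np
    from scipy.stats import beta, binom
    from scipy.special import betaln, gammaln

    n, k = 100, 61          # hypothetical data: k successes in n trials
    theta0 = 0.5            # value fixed by the precise null hypothesis
    a, b = 1.0, 1.0         # Beta prior under the unrestricted alternative

    # The Beta prior is conjugate, so the posterior under H1 is
    # Beta(a + k, b + n - k).
    prior = beta(a, b)
    posterior = beta(a + k, b + n - k)

    # Savage–Dickey: BF_01 is the posterior density at theta0 divided by
    # the prior density at theta0 (both evaluated under H1).
    bf01 = posterior.pdf(theta0) / prior.pdf(theta0)
    print(f"BF_01 = {bf01:.4f}  (BF_10 = {1 / bf01:.4f})")

    # Cross-check against the ratio of marginal likelihoods computed
    # directly: p(D|H0) is a binomial pmf, and p(D|H1) integrates the
    # binomial likelihood against the Beta prior (beta-binomial form).
    log_m0 = binom.logpmf(k, n, theta0)
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    log_m1 = log_choose + betaln(a + k, b + n - k) - betaln(a, b)
    print(f"direct BF_01 = {np.exp(log_m0 - log_m1):.4f}")

The two computations agree because the precise null is nested in the alternative. When the posterior density at the restricted value has no closed form, it is typically estimated from MCMC samples, which is the setting addressed by the approximation literature cited above.[5][7]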

  1. ^ Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N. (2016). "The philosophy of Bayes factors and the quantification of statistical evidence". Journal of Mathematical Psychology. 72: 6–18. doi:10.1016/j.jmp.2015.11.001.
  2. ^ Lesaffre, Emmanuel; Lawson, Andrew B. (2012). "Bayesian hypothesis testing". Bayesian Biostatistics. Somerset: John Wiley & Sons. pp. 72–78. doi:10.1002/9781119942412.ch3. ISBN 978-0-470-01823-1.
  3. ^ Ly, Alexander; et al. (2020). "The Bayesian Methodology of Sir Harold Jeffreys as a Practical Alternative to the P Value Hypothesis Test". Computational Brain & Behavior. 3 (2): 153–161. doi:10.1007/s42113-019-00070-x. hdl:2066/226717.
  4. ^ Llorente, Fernando; et al. (2023). "Marginal likelihood computation for model selection and hypothesis testing: an extensive review". SIAM Review. 65 (1): 3–58. arXiv:2005.08334. doi:10.1137/20M1310849. S2CID 210156537.
  5. ^ Congdon, Peter (2014). "Estimating model probabilities or marginal likelihoods in practice". Applied Bayesian Modelling (2nd ed.). Wiley. pp. 38–40. ISBN 978-1-119-95151-3.
  6. ^ Koop, Gary (2003). "Model Comparison: The Savage–Dickey Density Ratio". Bayesian Econometrics. Somerset: John Wiley & Sons. pp. 69–71. ISBN 0-470-84567-8.
  7. ^ Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul (2010). "Bayesian hypothesis testing for psychologists: A tutorial on the Savage–Dickey method" (PDF). Cognitive Psychology. 60 (3): 158–189. doi:10.1016/j.cogpsych.2009.12.001. PMID 20064637. S2CID 206867662.
  8. ^ Ibrahim, Joseph G.; Chen, Ming-Hui; Sinha, Debajyoti (2001). "Model Comparison". Bayesian Survival Analysis. Springer Series in Statistics. New York: Springer. pp. 246–254. doi:10.1007/978-1-4757-3447-8_6. ISBN 0-387-95277-2.
