
Sampling from the posterior with Markov-chain Monte Carlo

John K. Kruschke’s book Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan (2nd ed.) (Amazon, official site) gives a quick, practical introduction to Bayesian analysis. Compared to BDA3, it contains fewer proofs but also less jargon, more informal explanations, and more introductions to the basics. As such, I would recommend it to someone who hasn’t had much exposure to statistics yet, or who is not a mathematician or a programmer.

The book includes thorough and nicely visualized descriptions of several Markov-chain Monte Carlo (MCMC) methods for sampling from a posterior distribution; in this post, I’ll try to summarize the most basic one.

Goal of sampling

Given the prior $p(\theta)$ and the likelihood $p(D \mid \theta)$, we want samples from the posterior $p(\theta \mid D)$. In the following sections I’ll use the fact that the unnormalized posterior is equal to the prior multiplied by the likelihood: $p(\theta \mid D) \propto p(D \mid \theta)\, p(\theta)$. Here, I’ll talk only about continuous probability spaces; discrete spaces can be sampled similarly.
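Written out, Bayes’ rule is

$$p(\theta \mid D) \;=\; \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, \mathrm{d}\theta'},$$

and the integral in the denominator is typically intractable – which is exactly why being able to sample using only the unnormalized product $p(D \mid \theta)\, p(\theta)$ is so useful.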

Metropolis algorithm

Just like the other MCMC methods, the Metropolis algorithm starts with a seed value for $\theta$ – let’s call it $\theta_0$. (I assume in practice $\theta_0$ is sampled from the prior.) Then, given the current value $\theta_t$, repeat the following two steps for a prespecified number of iterations, or until a desired effective sample size is reached.

  1. Sample a proposal $\theta^{*}$ from a proposal distribution centered at the current value $\theta_t$, which could be a Gaussian: $\theta^{*} \sim \mathcal{N}(\theta_t, \sigma^2)$.

  2. Accept or reject the proposal (a runnable sketch follows this list):

  • If $p(\theta^{*} \mid D) \geq p(\theta_t \mid D)$ – i.e. if $p(D \mid \theta^{*})\, p(\theta^{*}) \geq p(D \mid \theta_t)\, p(\theta_t)$ – then accept the proposed parameter value: $\theta_{t+1} = \theta^{*}$.
  • Otherwise, accept the proposed parameter with probability equal to the ratio of the unnormalized posterior at the proposed value and at the current value, $p(D \mid \theta^{*})\, p(\theta^{*}) \,/\, \bigl(p(D \mid \theta_t)\, p(\theta_t)\bigr)$; on rejection, keep the current value: $\theta_{t+1} = \theta_t$.
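
To make the two steps concrete, here is a minimal Python sketch of the Metropolis loop. It isn’t taken from the book – it assumes a 1-dimensional parameter, a Gaussian proposal, and a user-supplied function returning the log of the unnormalized posterior (working in log space avoids numerical underflow):

```python
import numpy as np

def metropolis(log_unnorm_post, theta0, n_iter=10_000, proposal_sd=1.0, seed=0):
    """Metropolis sampler for a 1-D parameter.

    log_unnorm_post(theta) must return log(prior(theta) * likelihood(theta)),
    up to an additive constant.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(n_iter)
    theta = theta0
    log_p = log_unnorm_post(theta)
    for t in range(n_iter):
        # Step 1: propose a value from a Gaussian centered at the current one.
        proposal = theta + proposal_sd * rng.standard_normal()
        log_p_prop = log_unnorm_post(proposal)
        # Step 2: accept with probability min(1, posterior ratio);
        # comparing a log-uniform draw against the log-ratio implements this.
        if np.log(rng.random()) < log_p_prop - log_p:
            theta, log_p = proposal, log_p_prop
        samples[t] = theta  # on rejection, the current value is repeated
    return samples

# Hypothetical example: a coin with a Beta(2, 2) prior, observing 6 heads
# in 9 flips. The unnormalized posterior is theta^7 * (1 - theta)^4.
def log_post(theta):
    if not 0.0 < theta < 1.0:
        return -np.inf  # prior probability 0 outside (0, 1)
    return 7 * np.log(theta) + 4 * np.log(1 - theta)

draws = metropolis(log_post, theta0=0.5, proposal_sd=0.2)
```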

It can be proven that after a so-called “burn-in” period, the distribution of the chain converges to the posterior: $p(\theta_t) \to p(\theta \mid D)$ as $t \to \infty$. Therefore, if you run the procedure long enough, you’ll end up with many samples from the posterior. Note that the effective sample size will be much lower than the number of iterations, because neighboring samples are strongly correlated, so we have to drop most of the values so obtained.
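Continuing the sketch above (the burn-in length and thinning factor here are arbitrary illustrative choices, not recommendations from the book):

```python
# Drop the burn-in prefix, then keep every 10th sample to reduce
# the autocorrelation between neighboring draws.
burn_in, thin = 1_000, 10
kept = draws[burn_in::thin]
print(f"kept {kept.size} of {draws.size} samples, posterior mean ≈ {kept.mean():.3f}")
```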

The beauty of this algorithm is that during this whole procedure, we only need to be able to compute the unnormalized posterior $p(D \mid \theta)\, p(\theta)$ – so the algorithm can easily be used for sampling given only the prior and the likelihood, even when the model is specified only up to a multiplicative constant (as in an undirected graphical model).

This algorithm doesn’t easily escape a “probability island” – i.e. a region of positive probability surrounded by a wide region of probability 0. (Although if the proposal distribution is wide enough, the algorithm is theoretically able to make that jump eventually – which in practice may mean “approximately never”.)

One downside of this basic algorithm is that the proposal distribution needs to be fine-tuned for the individual application: differences in effective sample size can span orders of magnitude, even for a simple distribution (e.g. a 1-dimensional unimodal distribution with finite support).
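One way to see this sensitivity, reusing the hypothetical `metropolis` and `log_post` from the sketch above, is to compare acceptance rates across proposal widths – a rate near 0 means the chain barely moves, while a rate near 1 means the steps are too small to explore:

```python
# Acceptance rate as a cheap tuning diagnostic: a rejected proposal
# repeats the previous value, so count how often the chain moved.
for sd in (0.01, 0.2, 5.0):
    chain = metropolis(log_post, theta0=0.5, proposal_sd=sd)
    moved = np.mean(chain[1:] != chain[:-1])
    print(f"proposal_sd={sd}: acceptance rate ≈ {moved:.2f}")
```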

Another downside is that in multiple dimensions this random walk is quite inefficient, and even more dependent on a correct choice of the proposal covariance matrix – but apart from the obvious reason that “high-dimensional spaces are big”, I couldn’t tell why.

The well-known Metropolis–Hastings algorithm, Gibbs sampling and Hamiltonian Monte Carlo are different twists on this core idea, and they are also described in the book.

Allegedly, credit for this method is due more to Marshall and Arianna Rosenbluth – if there is agreement on that, we could rename it to Rosenbluthsian Monte Carlo.

For more information…

If you want to learn about sampling or Bayesian data analysis, consider reading the book – it’s a great read, judging from what I’ve read so far.

Stay tuned for more of Bayes, or Curry, or Euler, or McCarthy.