Coming back to the Sunday Reading Notes, this week I discuss the paper ‘*A Geometric Interpretation of the Metropolis-Hastings Algorithm*’ by Louis J. Billera and Persi Diaconis from *Statistical Science*. This paper was suggested to me by Joe Blitzstein.

In Section 4 of ‘Informed proposals for local MCMC in discrete spaces’ by Giacomo Zanella (see my SRN Part I and II), Zanella mentions that the Metropolis-Hastings acceptance probability function (APF) is not the only APF that makes the resulting kernel $\pi$-reversible: any APF works as long as detailed balance is satisfied. This came as a ‘surprise’ to me at first, since I had never seen another APF in practice. But very quickly I realized that this fact was mentioned in both Stat 213 and Stat 220 at Harvard, and that I had read about it in Section 5.3 – ‘*Why Does the Metropolis Algorithm Work?*’ of ‘*Monte Carlo Strategies in Scientific Computing*’ by Jun S. Liu. Unfortunately, I did not pay enough attention. Joe suggested this article to me after I posted on Facebook about being upset with not knowing such a basic fact.

In this Billera and Diaconis paper, the authors focus on the finite state space case and consider the MH kernel as the projection of the set of stochastic matrices (row sums are all 1 and all entries are non-negative, denoted by $\mathcal{S}(\mathcal{X})$) onto the set of $\pi$-reversible Markov chains (stochastic matrices that satisfy detailed balance $\pi(x)K(x,y) = \pi(y)K(y,x)$, denoted by $\mathcal{R}(\pi)$), after introducing a metric on the stochastic matrices: $d(K, K') = \sum_{x} \pi(x) \sum_{y \neq x} |K(x,y) - K'(x,y)|$.
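This setup is easy to play with numerically. Below is a minimal sketch of the metric $d$ and a detailed-balance check; the 3-state target `pi` and proposal `K` are made up for illustration.

```python
import numpy as np

# Hypothetical 3-state example: target distribution pi and proposal kernel K.
pi = np.array([0.5, 0.3, 0.2])
K = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])

def metric(K1, K2, pi):
    """d(K, K') = sum_x pi(x) * sum_{y != x} |K(x,y) - K'(x,y)|."""
    diff = np.abs(K1 - K2)
    np.fill_diagonal(diff, 0.0)  # only off-diagonal entries enter the metric
    return float(pi @ diff.sum(axis=1))

def is_reversible(P, pi, tol=1e-12):
    """Detailed balance: the flow matrix pi(x)P(x,y) must be symmetric."""
    F = pi[:, None] * P
    return np.allclose(F, F.T, atol=tol)
```

Here `K` itself is not $\pi$-reversible (its flow matrix is asymmetric), which is exactly the situation the Metropolis map is designed to fix.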

The key result in this paper is Theorem 1. The authors prove that the Metropolis map $M$ minimizes the distance from the proposal kernel $K$ to $\mathcal{R}(\pi)$. Moreover, $M(K)$ is the unique closest element in $\mathcal{R}(\pi)$ that is coordinate-wise smaller than $K$ on its off-diagonal entries. So $M(K)$ is in a sense **the closest reversible kernel to the original kernel** $K$.
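As a sanity check on Theorem 1, one can compare $d(K, M(K))$ against the distance from $K$ to randomly generated $\pi$-reversible chains; the Metropolis kernel should always be (weakly) closer. A sketch with a made-up 3-state example, where random reversible chains are built from symmetric ‘flow’ matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.3, 0.2])            # made-up target
K = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])           # made-up proposal kernel

def dist(A, B):
    """d(A, B) = sum_x pi(x) sum_{y != x} |A(x,y) - B(x,y)|."""
    D = np.abs(A - B)
    np.fill_diagonal(D, 0.0)
    return float(pi @ D.sum(axis=1))

# Metropolis map: accept with probability min(1, pi(y)K(y,x) / (pi(x)K(x,y))).
M = np.array([[K[x, y] * min(1.0, pi[y] * K[y, x] / (pi[x] * K[x, y]))
               if x != y else 0.0 for y in range(3)] for x in range(3)])
np.fill_diagonal(M, 1.0 - M.sum(axis=1))  # rejected mass stays put

# Random pi-reversible chains: symmetric flows F give pi(x)R(x,y) = F(x,y).
for _ in range(1000):
    F = rng.random((3, 3))
    F = (F + F.T) / 2.0
    np.fill_diagonal(F, 0.0)
    F *= 0.9 * np.min(pi / F.sum(axis=1))  # keep each row's flow below pi(x)
    R = F / pi[:, None]
    np.fill_diagonal(R, 1.0 - R.sum(axis=1))
    assert dist(K, R) >= dist(K, M) - 1e-12  # M(K) is (weakly) closest
```

The random construction only samples a slice of $\mathcal{R}(\pi)$, of course; the theorem covers the whole set.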

I think this geometric interpretation offers great intuition about how the MH algorithm works: we start with a proposal kernel $K$ and change it to another kernel $M(K)$ with stationary distribution $\pi$. And the change must occur as follows:

from $x$, choose $y$ from $K(x, \cdot)$ and decide to accept $y$ or stay at $x$; this last choice may be stochastic with acceptance probability $a(x,y)$. This gives the new chain with transition probabilities $M(x,y) = K(x,y)\,a(x,y)$ for $x \neq y$. The diagonal entries are changed so that each row sums to 1.
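The recipe above translates directly into code. Here is a sketch of the Metropolis map on a finite state space (the target `pi` and proposal `K` below are made up for illustration):

```python
import numpy as np

def metropolis_map(K, pi):
    """Metropolis map M(K): off-diagonal entries
    M(x,y) = K(x,y) * min(1, pi(y)K(y,x) / (pi(x)K(x,y))),
    with the diagonal adjusted so each row sums to 1."""
    n = len(pi)
    M = np.zeros_like(K, dtype=float)
    for x in range(n):
        for y in range(n):
            if x != y and K[x, y] > 0:
                a = min(1.0, pi[y] * K[y, x] / (pi[x] * K[x, y]))
                M[x, y] = K[x, y] * a
        M[x, x] = 1.0 - M[x].sum()  # rejected proposals stay at x
    return M

pi = np.array([0.5, 0.3, 0.2])
K = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])
M = metropolis_map(K, pi)
# The flow matrix pi(x)M(x,y) is symmetric, i.e. M is pi-reversible.
```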

Indeed the above procedure describes how the MH algorithm works. If we insist on $\pi$-reversibility, we must have $\pi(x)K(x,y)\,a(x,y) = \pi(y)K(y,x)\,a(y,x)$, where $0 \le a(x,y) \le 1$. So the MH choice of APF, $a(x,y) = \min\left(1, \frac{\pi(y)K(y,x)}{\pi(x)K(x,y)}\right)$, is the one that maximizes the chance of moving from $x$ to $y$. The resulting MH kernel has the largest spectral gap (1 – second largest eigenvalue) and, by Peskun’s theorem, must have the minimum asymptotic variance for estimating additive functionals.
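To see this comparison concretely, one can build two reversible kernels from the same proposal, one with the MH acceptance $\min(1,t)$ and one with Barker’s rule $t/(1+t)$ (my example of an alternative APF, not one from the paper), and compare spectral gaps:

```python
import numpy as np

def build_kernel(K, pi, apf):
    """Reversible kernel from proposal K, target pi, and an APF applied
    to the ratio t = pi(y)K(y,x) / (pi(x)K(x,y))."""
    n = len(pi)
    P = np.zeros_like(K, dtype=float)
    for x in range(n):
        for y in range(n):
            if x != y and K[x, y] > 0:
                t = pi[y] * K[y, x] / (pi[x] * K[x, y])
                P[x, y] = K[x, y] * apf(t)
        P[x, x] = 1.0 - P[x].sum()
    return P

def spectral_gap(P, pi):
    """1 - (second largest eigenvalue), computed from the symmetrization
    D^{1/2} P D^{-1/2}, which has a real spectrum when P is pi-reversible."""
    d = np.sqrt(pi)
    S = (d[:, None] * P) / d[None, :]
    eig = np.sort(np.linalg.eigvalsh((S + S.T) / 2.0))
    return 1.0 - eig[-2]

pi = np.array([0.5, 0.3, 0.2])   # made-up target and proposal
K = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.1]])

mh = build_kernel(K, pi, lambda t: min(1.0, t))        # Metropolis-Hastings
barker = build_kernel(K, pi, lambda t: t / (1.0 + t))  # Barker's APF
```

Since $\min(1,t) \ge t/(1+t)$ for all $t \ge 0$, the MH kernel dominates Barker’s off-diagonal, and its spectral gap is at least as large.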

In Remark 3.2, the authors point out that if we consider only APFs that are functions of the ratio $t = \frac{\pi(y)K(y,x)}{\pi(x)K(x,y)}$, then the function $F$ must satisfy $F(t) = t\,F(1/t)$, which is the characterization of balancing functions in Zanella’s ‘informed proposals’ paper.
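The identity $F(t) = t\,F(1/t)$ is easy to verify for familiar choices (the labels below are my own; note that $\sqrt{t}$ can exceed 1, so it is a balancing function in Zanella’s sense rather than a valid acceptance probability on its own):

```python
import numpy as np

# Check the balancing-function identity F(t) = t * F(1/t) at several ratios.
apfs = {
    "Metropolis-Hastings": lambda t: min(1.0, t),
    "Barker":              lambda t: t / (1.0 + t),
    "square root":         lambda t: np.sqrt(t),
}
for name, F in apfs.items():
    for t in [0.1, 0.5, 1.0, 2.0, 10.0]:
        assert abs(F(t) - t * F(1.0 / t)) < 1e-12, name
```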

This paper allowed me to study the Metropolis-Hastings algorithm from another angle and to review facts I had neglected in my coursework.

References:

- Billera, L. J., & Diaconis, P. (2001). A geometric interpretation of the Metropolis-Hastings algorithm. *Statistical Science*, 335-339.
- Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. *arXiv preprint arXiv:1711.07424*.
- Liu, J. S. (2008). *Monte Carlo strategies in scientific computing*. Springer Science & Business Media.