Explicit error bounds for Markov chain Monte Carlo
Volume 485 / 2012
Abstract
We prove explicit, i.e. non-asymptotic, error bounds for Markov chain Monte Carlo methods. The problem is to compute the expectation of a function $f$ with respect to a measure $\pi$. Different convergence properties of Markov chains imply different error bounds. For uniformly ergodic and reversible Markov chains we prove a lower and an upper error bound with respect to $\Vert f\Vert_2$. If there exists an $L_2$-spectral gap, which is a weaker convergence property than uniform ergodicity, then we show an upper error bound with respect to $\Vert f\Vert_p$ for $p>2$. Usually a burn-in period is an efficient way to tune the algorithm. We provide and justify a recipe for choosing the burn-in period. The error bounds are applied to the problem of integration with respect to a possibly unnormalized density. More precisely, we consider integration with respect to log-concave densities and integration over convex bodies. Using the Metropolis algorithm based on a ball walk and the hit-and-run algorithm, we show that both problems are polynomially tractable.
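To fix ideas, a minimal sketch of the standard MCMC estimator with burn-in follows; the symbols $n_0$ (burn-in length), $n$ (number of samples used) and $(X_j)_{j\ge 1}$ (a Markov chain with stationary distribution $\pi$) are assumed notation for this illustration and need not match the notation used in the body of the text:
\[
S_{n,n_0}(f) \;=\; \frac{1}{n}\sum_{j=1}^{n} f\bigl(X_{j+n_0}\bigr) \;\approx\; \int f \, d\pi .
\]
Explicit error bounds of the kind described above typically control the mean square error $\mathbb{E}\,\bigl|S_{n,n_0}(f)-\int f\,d\pi\bigr|^{2}$ in terms of $n$, $n_0$ and the convergence properties of the chain.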