Expander walk sampling
From Wikipedia, the free encyclopedia
The expander walk sampling theorem, the earliest version of which is due to Ajtai, Komlós and Szemerédi and the more general version typically attributed to Gillman, states that sampling vertices along a random walk on an expander graph is almost as good as sampling them independently.
Statement
Let G = (V,E) be an expander graph with normalized second-largest eigenvalue λ, and let n denote the number of vertices in G. Let f : V → [0,1] be a function on the vertices of G. Let μ = E[f] denote the true mean of f, i.e. \mu = \frac{1}{n} \sum_{v \in V} f(v). Then, if we let Y_0, Y_1, \ldots, Y_k denote the vertices encountered in a k-step random walk on G starting at a uniformly random vertex Y_0, we have the following for all γ > 0:

\Pr\left[\frac{1}{k} \sum_{i=0}^k f(Y_i) - \mu > \gamma\right] \leq e^{-\Omega(\gamma^2 (1-\lambda) k)}.

Here the Ω hides an absolute constant. An identical bound holds in the other direction:

\Pr\left[\frac{1}{k} \sum_{i=0}^k f(Y_i) - \mu < -\gamma\right] \leq e^{-\Omega(\gamma^2 (1-\lambda) k)}.
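The concentration the theorem describes can be seen empirically. The sketch below (an illustrative choice, not part of the theorem) uses the complete graph on n vertices, which is a very strong expander with normalized second eigenvalue λ = 1/(n−1), and checks that the walk average of f lands near the true mean μ:

```python
import random

def walk_average(neighbors, f, k, rng):
    """Average f over the k+1 vertices of a k-step random walk
    started at a uniformly random vertex."""
    n = len(neighbors)
    v = rng.randrange(n)          # uniform start Y_0
    total = f(v)
    for _ in range(k):
        v = rng.choice(neighbors[v])   # one uniform step along an edge
        total += f(v)
    return total / (k + 1)

# Illustrative expander: the complete graph K_n (lambda = 1/(n-1)).
n = 64
neighbors = [[u for u in range(n) if u != v] for v in range(n)]
f = lambda v: v % 2           # indicator of the odd vertices, so mu = 1/2
rng = random.Random(0)        # seeded for reproducibility
est = walk_average(neighbors, f, k=2000, rng=rng)
# est should be close to 0.5, with deviation probability decaying
# exponentially in k as the theorem predicts.
```

For a weaker expander (larger λ) the same code applies, but the bound predicts slower concentration, since the exponent scales with (1 − λ).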
Uses
This theorem is useful for randomness reduction in the study of derandomization: sampling along an expander walk is an example of a randomness-efficient sampler. Note that drawing k independent samples from the n vertices costs k log n random bits, whereas a k-step walk on a constant-degree expander costs only log n + O(k) bits: log n bits to choose the starting vertex and O(1) bits per step. Suitable infinite families of constant-degree expanders exist and are efficiently constructible, e.g. the Ramanujan graphs of Lubotzky–Phillips–Sarnak.
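The bit accounting above can be made concrete. In this sketch the vertex count, degree, and walk length are illustrative choices, and bit costs are rounded up to whole bits:

```python
import math

def bits_independent(k, n):
    """Random bits to draw k independent uniform samples from n vertices:
    ceil(log2 n) bits per sample."""
    return k * math.ceil(math.log2(n))

def bits_expander_walk(k, n, d):
    """Random bits for a k-step walk on a d-regular expander:
    ceil(log2 n) bits to pick the start, ceil(log2 d) bits per step."""
    return math.ceil(math.log2(n)) + k * math.ceil(math.log2(d))

# Hypothetical sizes: 2^20 vertices, a degree-8 expander, 100 samples.
n, d, k = 2**20, 8, 100
print(bits_independent(k, n))       # 100 * 20   = 2000 bits
print(bits_expander_walk(k, n, d))  # 20 + 100*3 = 320 bits
```

As k grows with n fixed, the independent sampler pays log n bits per sample while the walk pays only a constant, which is the saving the text describes.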

