Proving the Coupling Inequality with LP Duality

This post details the first in a collection of everyday inequalities that arise as special cases of duality in linear programming. We’ll show that the usual coupling inequality is such an example,

d_{TV}(\mu, \nu) \leq \Pr[X \neq Y]

A coupling of two distributions \mu and \nu on \Omega is a joint distribution on \Omega \times \Omega such that the two marginal distributions are \mu and \nu respectively. We write (X, Y) for the joint random variable, with X the marginal in the first coordinate and Y the marginal in the second coordinate. We say that \mu and \nu are “coupled”, with the coupling (X, Y) defining the interaction, or dependence, between them.

One of the basic facts about couplings compares the statistical distance between \mu and \nu to the probability the coupled variables agree. For any coupling (X, Y) of \mu and \nu, we have

d_{TV}(\mu, \nu) \leq \Pr[X \neq Y]

Furthermore, there is a coupling that achieves this bound. The above inequality is typically used to upper bound the statistical distance between \mu and \nu by constructing a coupling that’s likely to agree, e.g., in the proof of convergence of an ergodic Markov chain to its limiting distribution.

In this post we’ll prove this inequality (and its tightness!) using an LP duality argument.

The Coupling LP

A coupling is specified by a set of linear constraints: for x,y \in \Omega, let p_{x,y} be the probability of seeing the pair (x,y). The constraints are

\forall x. \quad \displaystyle\sum_{y \in \Omega} p_{x,y} = \mu(x)

\forall y. \quad \displaystyle\sum_{x \in \Omega} p_{x,y} = \nu(y)

\forall x,y. \quad p_{x,y} \geq 0

\displaystyle\sum_{x,y \in \Omega} p_{x,y} = 1

This defines a polytope in \mathbb{R}^{|\Omega|^2}, called a transportation polytope. In fact, the last constraint is redundant, as summing the constraints with \mu(x) over all x’s also gives 1. Over this polytope, the linear objective to minimize is the probability that X and Y disagree,

\min\displaystyle\sum_{\substack{x,y \in \Omega \\ x \neq y}} p_{x, y}
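To make this concrete, here is a minimal numerical sketch of the coupling LP, assuming numpy and scipy are available (the example distributions are mine, for illustration). It solves the LP on a three-point \Omega and compares the optimum to the total variation distance:

```python
import numpy as np
from scipy.optimize import linprog

def min_disagreement(mu, nu):
    """Minimize Pr[X != Y] over all couplings p[x, y] of mu and nu."""
    n = len(mu)
    c = 1.0 - np.eye(n).ravel()          # objective: total off-diagonal mass
    A_eq = np.zeros((2 * n, n * n))
    for x in range(n):                   # row marginals: sum_y p[x, y] = mu[x]
        A_eq[x, x * n:(x + 1) * n] = 1.0
    for y in range(n):                   # column marginals: sum_x p[x, y] = nu[y]
        A_eq[n + y, y::n] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun

mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.3, 0.5])
print(min_disagreement(mu, nu))          # 0.3
print(0.5 * np.abs(mu - nu).sum())       # d_TV(mu, nu) = 0.3, matching
```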

Ok, now that we have the LP, its dual (whose computation is explained e.g. here… or perhaps in a future blog post) is

\max \displaystyle\sum_{\omega \in \Omega} q^\mu_\omega \mu(\omega) + q^\nu_\omega \nu(\omega)

\forall \omega. \quad q^\mu_\omega + q^\nu_\omega \leq 0

\forall \omega_1 \neq \omega_2. \quad q^\mu_{\omega_1} + q^\nu_{\omega_2} \leq 1

By the duality theorems, \Pr[X \neq Y] is always at least as large as the value of any feasible dual solution, and furthermore, by strong duality, for some coupling (X, Y) the two optima agree. To finish our argument, we’ll show that the value of the dual LP is the statistical distance between \mu and \nu. The definition of the statistical distance we’ll use is the “optimization” version: the maximum difference in probabilities of events in \Omega,

d_{TV}(\mu, \nu) = \max_{A \subseteq \Omega} |\mu (A) - \nu (A)|

= \max_{A \subseteq \Omega} \left|\displaystyle\sum_{a \in A} \mu(a) - \nu(a)\right|

First we argue that there is a \{-1, 0, +1\}-valued optimum. The dual problem’s matrix is totally unimodular (Prop. 3.2) and the right-hand sides of the constraints are integral, hence the LP has an integral optimal solution. Fix an integral optimum (q^{\mu*}_\omega, q^{\nu*}_\omega). From the constraints we see that only one of the sets \{q^{\mu*}_\omega : \omega \in \Omega\} and \{q^{\nu*}_\omega : \omega \in \Omega\} can contain positive values; WLOG these are the variables for \mu. By optimality, comparing the largest and smallest of the q^{\mu*}_\omega (resp. q^{\nu*}_\omega), the \{q^{\mu*}_\omega : \omega \in \Omega \} are within 1 of one another, and the q^{\nu*}_\omega are their negations. Now we can shift the solution to be \{-1, 0, +1\}-valued without changing its value.

Now we are in a position to compare the \{-1, 0, +1\}-optimum with d_{TV}(\mu, \nu). As noted above, the set of variables for exactly one of \mu, \nu is positive, which gives the sign in the absolute value. The +1-valued variables indicate the event to take. This shows the value of the dual is exactly d_{TV}(\mu, \nu). As explained, this finishes the proof that

d_{TV}(\mu, \nu) \leq \Pr[X \neq Y]
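For concreteness, the optimal dual solution this argument produces is the one you would guess: take A = \{\omega : \mu(\omega) > \nu(\omega)\} and set q^\mu_\omega = 1, q^\nu_\omega = -1 for \omega \in A, with q^\mu_\omega = q^\nu_\omega = 0 otherwise. Every constraint is satisfied, and the objective value is \sum_{\omega \in A} \mu(\omega) - \nu(\omega) = \mu(A) - \nu(A) = d_{TV}(\mu, \nu).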


Transportation Polytopes

We finish with a few extra details on transportation polytopes.

Fixing \mu and \nu, the collection Z of possible couplings forms a convex polytope in \mathbb{R}^{|\Omega|^2} from a very special class of polytopes, the transportation polytopes. The name comes from the following “transportation” problem in optimization: suppose you have m factories producing a good, such as lawn mowers, in quantities s_1, s_2, \dots, s_m. You want to distribute the lawn mowers among n stores in quantities t_1, t_2, \dots, t_n. The question is, how many lawn mowers should factory i send to store j? A solution that achieves the desired stocking quantities is exactly a matrix (a.k.a. a joint probability distribution, after normalizing) with the given marginals.

Vertices of a transportation polytope can be enumerated and checked algorithmically. Furthermore, when the marginals \mu and \nu are uniform, the corresponding transportation polytope is exactly the set of doubly stochastic matrices (scaled by 1/|\Omega|). By the classical Birkhoff–von Neumann theorem, the vertices of this polytope are exactly the permutation matrices, i.e., doubly stochastic matrices are exactly convex combinations of permutation matrices.

Syntactic Translations of Berry-Esséen and the CLT

Probably the most studied problem in probability theory is the following: suppose X_1, X_2, X_3, \dots, X_n are independent, identically distributed random variables from some distribution \mathcal{D} on \mathbb{R}, and we look at the random variable

S_n = X_1 + X_2 + \cdots + X_n

How is S_n distributed?

Typically the answer is the central limit theorem (CLT) amplified with the Berry-Esséen theorem, “a sum of independent variables is approximately normally distributed”. This post explains two symbolic translations of that sentence which I find easier to write down at a moment’s notice. The first takes the viewpoint of “asymptotic approximation of S_n“,

S_n = n\mu + \sqrt{n} \mathcal{N}(0, \sigma^2) + O(1)

Here \mu is the mean of \mathcal{D} and \sigma is the standard deviation of \mathcal{D}. The second is

S_n = Y_1 + Y_2 + \cdots + Y_n + O(1)

where the Y_i are i.i.d. Gaussian random variables with the same mean and variance as the X_i, Y_i \sim \mathcal{N}(\mu, \sigma^2). This can be thought of as an “invariance principle” or “replacement lemma” (and this is the viewpoint taken by Lindeberg in his proof of the CLT). The invariance principle is now a tool used in Boolean Fourier analysis.

Crucial remark: Unfortunately, the equality of these “syntactic forms” is NOT a statement of convergence in distribution. We’ve “blown up” errors by a factor of \sqrt{n} from the convergence guaranteed by the CLT. Dividing both sides by \sqrt{n} normalizes both sides to have finite variance and gives convergence in distribution (the convergence guaranteed by an appropriate CLT).

Asymptotics for S_n

The law of large numbers says that, if \mu is finite, “the sample mean converges to the true mean”, i.e.

S_n / n \to \mu

In our notation, this is a “linear approximation to S_n“,

S_n = n\mu + o(n)

The central limit theorem refines this by saying that, in fact, the sample mean is typically \Theta(\frac{1}{\sqrt{n}}) away from the true mean. If \mu and \sigma are finite,

\frac{(S_n - n \mu)}{\sqrt{n}} \to \mathcal{N}(0,\sigma^2)

In our notation, this is a “\sqrt{n} approximation to S_n“,

S_n = n\mu + \sqrt{n} \mathcal{N}(0, \sigma^2) + o(\sqrt{n})

The Berry-Esséen theorem strengthens the convergence to the normal distribution. If \rho = E(|X_1 - \mu|^3) denotes (an upper bound on) the third central moment of the distribution \mathcal{D}, which we assume to be finite,

|S_n - n\mu - \sqrt{n}\mathcal{N}(0,\sigma^2)| \leq O(\frac{\rho}{\sigma^2})

In our notation, this improves the error term in the asymptotic expansion of S_n to a constant,

S_n = n\mu + \sqrt{n} \mathcal{N}(0, \sigma^2) + O(\frac{\rho}{\sigma^2})

It should be noted that the Berry-Esséen theorem is tight up to the big-Oh. The binomial distribution achieves this rate of convergence; see the end of these lecture notes for a proof.
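As a numerical sanity check on this rate (a quick sketch assuming numpy and scipy; the choice p = 0.3 is arbitrary), we can compute the Kolmogorov distance between the binomial and its matching Gaussian and watch it decay like 1/\sqrt{n}:

```python
import numpy as np
from scipy.stats import binom, norm

p = 0.3
for n in [100, 400, 1600, 6400]:
    k = np.arange(n + 1)
    gauss = norm.cdf(k, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
    # the sup of |F_binom - F_gauss| is attained at a jump of the binomial CDF,
    # so check both sides of every jump
    d = max(np.abs(binom.cdf(k, n, p) - gauss).max(),
            np.abs(binom.cdf(k - 1, n, p) - gauss).max())
    print(n, d, d * np.sqrt(n))   # d * sqrt(n) hovers around a constant
```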

Invariance Principles

The previous analysis ends with the distribution n \mu + \sqrt{n} \mathcal{N}(0, \sigma^2). We can incorporate everything into one Gaussian to produce the equivalent distribution \mathcal{N}(n\mu, n\sigma^2). Interestingly, because the sum of independent Gaussians is again Gaussian, this distribution has the same PDF as

Y_1 + Y_2 + \cdots + Y_n

for independent Gaussian random variables Y_i \sim \mathcal{N}(\mu, \sigma^2). This leads us to the equality

S_n = X_1 + X_2 + \cdots + X_n = Y_1 + Y_2 + \cdots + Y_n + O(1)

in which we’ve simply replaced each X_i by a Gaussian random variable with the same mean and variance.

As noted in Remark 29 here, one can improve the constant term by changing the replacement variables. In particular, it can be made o(1) if the first, second, and third moments of the Y_i agree with the X_i.

The idea of “replacement invariance” surfaces in theoretical CS in the context of Boolean Fourier analysis. Here we generalize the summation of Boolean (\{\pm1\}-valued) variables X_1 + X_2 + \cdots + X_n to an arbitrary Boolean function

f(X_1, X_2, \dots, X_n)

The invariance principle states that the random variable f(X_1, X_2, \dots, X_n), for f: \{\pm 1\}^n \to \mathbb{R} and the X_i uniformly drawn from \{\pm 1\}, is close in distribution to the random variable

f(Y_1, Y_2, \dots, Y_n)

for Y_i \sim \mathcal{N}(0, 1) (assuming we normalize both sides to have variance 1). In this case the “closeness” is determined by the maximum influence of a variable X_i on the value of f, as well as the complexity of f (its degree as a multilinear polynomial); see the previously linked lecture notes for an exact quantitative statement.
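To see the invariance principle in action, here is a small simulation sketch (assuming numpy and scipy; the particular low-influence polynomial is my choice, not from the notes). It feeds both uniform \pm 1 inputs and standard Gaussian inputs into the same normalized degree-2 polynomial and compares the two output distributions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n, trials = 100, 50_000

def f(X):
    # a normalized degree-2 multilinear polynomial; every variable has low influence
    return (X[:, :-1] * X[:, 1:]).sum(axis=1) / np.sqrt(n - 1)

boolean = f(rng.choice([-1.0, 1.0], size=(trials, n)))
gauss = f(rng.standard_normal((trials, n)))
print(ks_2samp(boolean, gauss).statistic)   # small, as invariance predicts
```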

The Ackermann Function Defined By Hyperoperation

One of my associates recently remarked, “I can write down the Ackermann recursion, but I have zero intuition where it comes from”. This expository post explains the answer I gave him, that the Ackermann function arises via hyperoperation.

The Ackermann function A(m, n) is variously defined, but the most popular these days is the Ackermann-Péter function:

A(0, n) = n+1

A(m, 0) = A(m-1, 1)

A(m, n) = A(m - 1, A(m, n-1))

One way to build up to the Ackermann function is through hyperoperation. We’ll show that A(m, n) is pretty much 2 \wedge_m n for the m-th hyperoperator \wedge_m: in fact A(m, n) = 2 \wedge_m (n+3) -3.

Fixing the first argument of A(m, n) yields a function based on the m-th hyperoperator, which is why we often think of the Ackermann function as a family of functions parameterized by the first argument. In retrospect, maybe the better definition for the Ackermann function is the simpler A(m,n) = 2 \wedge_m n, so that the ubiquitous exercise to compute the first few functions x \mapsto A(m, x) becomes just a little bit less of an exercise.

Hyperoperators

Interpret multiplication x * y as repeated addition: add x to itself, y times. Interpret exponentiation x^y as repeated multiplication: multiply x with itself, y times. Going one step further, interpret tetration x \uparrow y as repeated exponentiation: exponentiate x with itself, y times. For example,

x \uparrow 6 = x^{x^{x^{x^{x^{x}}}}}

Going infinitely many steps further, we can define the k-th hyperoperation x \wedge_k y to “recursively apply the previous hyperoperation on x with itself, y times” (this of course stops being commutative at and beyond exponentiation, though the first hyperoperators constructed were actually commutative). To actually implement this we should use the recursion “compute x hyperoperated with itself y-1 times, then apply x one more time”:

x \wedge_k y \overset{\text{def}}{=} x \wedge_{k-1}(x \wedge_k (y-1))

Notice we fold right here i.e. compute x \wedge_{k-1} (x \wedge_{k-1} (x \wedge_{k-1}\cdots (x \wedge_{k-1} x)\cdots)) rather than ((\cdots(x \wedge_{k-1} x) \cdots\wedge_{k-1} x) \wedge_{k-1} x) \wedge_{k-1} x.

We also need a trio of base cases for what happens when you “apply x to itself 0 times” in order to align with the normal arithmetic operations:

x \wedge_1 0 \overset{\text{def}}{=} x

x \wedge_2 0 \overset{\text{def}}{=} 0

x \wedge_k 0 \overset{\text{def}}{=} 1 for k \geq 3

Finally, we set x \wedge_0 y\overset{\text{def}}{=} y+1, to capture that addition, \wedge_1, arises as the iterated successor function, our \wedge_0. After making all these definitions, we have our hyperoperators. These extend the normal arithmetic operators in the following way:

x \wedge_1 y = x + y

x \wedge_2 y = x * y

x \wedge_3 y = x^y

x \wedge_4 y = x \uparrow y

Proof of Ackermann/Hyperoperator Identity

The point of defining these hyperoperations is to note that the Ackermann function is itself a specific case: A(m, n) = 2 \wedge_m (n+3) -3. The proof of this isn’t too exciting, but let’s prove it by induction. Check the base case m = 0,

A(0,n) = n+1

2 \wedge_0 (n+3) - 3 = n+1

For the base case n = 0, we need key properties that hold because the base is 2 (rather than a different constant):

2 \wedge_k 1 = 2 for k \geq 3

2 \wedge_k 2 = 4 for k \geq 3

Now we have

A(m, 0) = A(m-1, 1)

2 \wedge_m (0+3) - 3 = 2 \wedge_{m-1} (2 \wedge_m 2) - 3 = 2 \wedge_{m-1} 4 - 3

And the primary recurrence for 2 \wedge_m n and A(m, n):

A(m, n) = A(m-1, A(m, n-1))

2 \wedge_m (n+3) - 3 = 2 \wedge_{m-1}(2 \wedge_{m} (n+2)) - 3 = 2 \wedge_{m-1}((2 \wedge_{m} (n+2) - 3) + 3) - 3

Substituting 2 \wedge_m (n+2) - 3 for A(m, n-1) by induction, we see they are in fact the same recursion. \square
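For the skeptical, here is a short Python sketch (memoized to keep the recursion manageable) that machine-checks the identity for small arguments:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """The Ackermann-Peter recursion."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def hyper(x, k, y):
    """The k-th hyperoperation, with the base cases defined above."""
    if k == 0:
        return y + 1
    if y == 0:
        return x if k == 1 else (0 if k == 2 else 1)
    return hyper(x, k - 1, hyper(x, k, y - 1))

for m in range(4):      # m = 4 already exhausts the default call stack
    for n in range(5):
        assert ackermann(m, n) == hyper(2, m, n + 3) - 3
```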


Approximating Binomial Coefficients with Entropy

This post recounts the story of one of my favorite “probabilistic method” proofs,

\log \binom{n}{k} \leq n H(k/n)

where H(p) is the binary entropy function, i.e., the entropy of a coin flipped with bias p. Maybe more properly the proof is just counting in two ways?

By a tiny tweak to our entropy argument as done here, one can get a better bound

\log\displaystyle\sum_{i = 0}^{k} \binom{n}{i} \leq nH(k / n) \qquad (k \leq n/2)

This is what’s typically used in practice to bound e.g. the sum of the first 1/3 of the binomial coefficients by 2^{n H(1/3)}. The proof here gives all the essential ideas for that proof. With Stirling’s approximation used to give error bounds, both approximations are tight when k = \Theta(n),

\binom{n}{k} = 2^{(1 + o(1)) n H(k / n)}
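A quick numerical check in plain Python (the values n = 100 and the sample k’s are arbitrary) shows both the bound and the size of the gap, which is only O(\log n):

```python
import math

def H(p):
    """Binary entropy in bits, with H(0) = H(1) = 0."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n = 100
for k in [10, 33, 50]:
    print(k, math.log2(math.comb(n, k)), n * H(k / n))   # LHS <= RHS, gap O(log n)
```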

The experiment: Let the random variable X denote the sequence of results of flipping n fair coins (so the number of Heads is binomially distributed). Now suppose we know X has exactly k Heads.

The question: How much entropy is in the conditional distribution of X given k Heads? I.e. what is H(X \mid k\text{ Heads})?

The left side: The conditional distribution of X is uniform over \binom{n}{k} possible results. The entropy of the uniform distribution on N objects is \log N, therefore

H(X \mid k\text{ Heads}) = \log \binom{n}{k}.

The right side: On the other hand, X is specified by its bits X_i, so

H(X \mid k \text{ Heads}) = H(X_1, X_2, \dots, X_n \mid k \text{ Heads})

By subadditivity of entropy,

H(X_1, X_2, \dots, X_n \mid k \text{ Heads}) \leq H(X_1 \mid k \text{ Heads}) + H(X_2 \mid k \text{ Heads}) + \cdots + H(X_n \mid k \text{ Heads})

This expression is easy to compute: conditioned on k Heads out of n flips, each X_i looks like a coin with bias k / n, therefore its entropy is H(k/n). So

H(X_1 \mid k \text{ Heads}) + H(X_2 \mid k \text{ Heads}) + \cdots + H(X_n \mid k \text{ Heads})= n H(k/n)

giving the right-hand side.

Further Generalization: Multinomial Coefficients

We can naturally generalize the above argument to provide an approximation for multinomial coefficients. Fix natural numbers k_1, k_2, \dots, k_m with \sum_i k_i = n. Let K be a random variable over [m] taking value i with probability k_i / n. Then

\log \binom{n}{k_1, k_2, \dots, k_m} \leq nH(K)

The proof generalizes perfectly: the underlying set is [m]^{n} (place a letter from \{1, 2, \dots, m\} at each of n indices), and we consider the entropy of a uniform choice of a string constrained to have the fixed letter histogram k_1, k_2, \dots, k_m. This is exactly the left-hand side, and subadditivity yields the right-hand side.

Moreover, by Stirling again the approximation is tight when k_i = \Theta(n).
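The multinomial version can be checked the same way (again plain Python; the histogram (5, 4, 3) is an arbitrary example):

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

n, ks = 12, (5, 4, 3)
multinom = math.factorial(n)
for k in ks:
    multinom //= math.factorial(k)       # 12! / (5! 4! 3!) = 27720
print(math.log2(multinom), n * entropy([k / n for k in ks]))   # 14.76 <= 18.65
```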

A Heuristic Argument for the Effectiveness of the Union Bound

The union bound is ubiquitous in theoretical computer science, and perhaps in justification of this, Andy Drucker once mentioned in a lecture a heuristic argument for its “tightness”. Just to write it out, the union bound takes

P(A \cup B) = P(A) + P(B) - P(A \cap B) \qquad\qquad P\left(\bigcup_i A_i\right) = \displaystyle\sum_i P(A_i) - \displaystyle\sum_{i < j} P(A_i \cap A_j) + \cdots

and approximates it as

P(A \cup B) \leq P(A) +P(B)\qquad \qquad P\left(\bigcup_i A_i\right) \leq \displaystyle\sum_i P(A_i)

Imagine if \{A_i \mid 1 \leq i \leq n\} are “bad events” that each occur with some tiny probability \delta. The union bound says

P(\text{a bad event happens}) \leq n \delta

If the events were fully independent, we would have the equality

P(\text{a bad event happens}) = 1 - (1 - \delta)^n

Now (1 - \delta)^n \approx 1 - n\delta when \delta = o(1/n). So we have

P(\text{a bad event happens}) \approx 1 - (1 - n\delta) = n\delta

This is equal to the bound spit out by the application of the union bound! In the case of fully independent events with small probabilities, we see the union bound is essentially tight. Full independence is rare, but in applications where low-probability events are only slightly correlated, the union bound often remains a good approximation.
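A quick numeric illustration in plain Python (the choice \delta = 10^{-4} is arbitrary) of how little the union bound loses under independence:

```python
delta = 1e-4
for n in [10, 100, 1000]:
    exact = 1 - (1 - delta) ** n   # P(some bad event) under full independence
    print(n, exact, n * delta)     # the union bound n*delta is nearly exact
```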

Wreath Products: Sum and Product Actions

During Laci Babai’s course on graph isomorphism, one of the tools we encountered is the wreath product. Here I’ll give some intuitive descriptions of the wreath product L \wr M.

To make sure we’re all on the same page, the wreath product L \wr M for an arbitrary group L and a permutation group M \leq S_k is the semidirect product (L^k) \rtimes M, where M acts on L^k by permuting the coordinates.

That is, we fix some number k and some group of permutations on [k], M \leq S_k. The group we construct takes k disjoint and independent versions of L, L^k, but also allows you to make “limited rearrangements according to M“. There are two distinct ideas going on here; it’s the semidirect product that, if you tell it how two groups should interact, joins those two ideas into a single group.

L\wr S_k is a Graph Automorphism Group

There are two “trivial” wreath products, L \wr 1 (where 1 denotes the trivial group) and L \wr S_k. L \wr 1 is k disjoint and independent versions of L, with no interdependencies introduced by M; this is just L^k.

Let G be a connected graph with automorphism group L. Claim: if you stick k disjoint copies of G next to each other, this new graph has automorphism group L \wr S_k. To see this, of course some automorphisms apply L elements “in parallel” to the different copies of G. These automorphisms correspond to L^k. But any rearrangement of the copies of G is also an automorphism. All copies are isomorphic, so rearranging can be done without restriction by S_k.
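Here is a small sketch checking the claim for k = 2 copies of a triangle, whose automorphism group is S_3 (assuming sympy; the example is mine). The group generated by the automorphisms of one copy together with the swap has order |S_3|^2 \cdot |S_2| = 72, exactly |S_3 \wr S_2|:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Two disjoint triangles on vertices {0, 1, 2} and {3, 4, 5}.
a = Permutation([1, 2, 0], size=6)    # rotate the first triangle
b = Permutation([1, 0, 2], size=6)    # flip the first triangle
s = Permutation([3, 4, 5, 0, 1, 2])   # swap the two triangles
G = PermutationGroup([a, b, s])       # conjugating by s reaches the second copy
print(G.order())                      # 72 == 6 * 6 * 2
```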

Product Actions of Wreath Products

The previous paragraph shows that, if L acts on a set \Omega, the wreath product L \wr M acts naturally on k \cdot \Omega, the disjoint union of k copies of \Omega. This is called the sum action (or “imprimitive action”, because it’s nearly always an imprimitive group action) of the wreath product. There is another natural action, called the product action (or “primitive action”). Here the domain is the set \Omega^k, and the action is defined by (1) applying elements of L^k coordinate-wise and (2) permuting the coordinates with M.

I think of this like a “simultaneous tracking” version of the sum action. In the sum action, we kept track of one point/vertex across k copies of our graphs, and looked at where that point went when hit with a group element. In the product action, we track one point from each of the k graphs simultaneously, and look at how as a whole those elements move.

H \wr K contains all extensions of H by K

It would be remiss to mention wreath products without mentioning the Kaluzhnin-Krasner theorem (English version). We say that the group G is an extension of H by K if there is a short exact sequence

1 \rightarrow H \rightarrow G \rightarrow K \rightarrow 1

Theorem (Kaluzhnin-Krasner, 1951) If G is an extension of H by K, then G is isomorphic to a subgroup of H \wr K.

That is, H \wr K is a group that’s big enough to contain all extensions of the two groups. In a sense that can probably be made more formal through order theory, H \wr K is an “upper bound” on joining H and K together. Not bad!

How I Remember the Chernoff Bound

The Chernoff bound is of critical importance in computer science and probability. When I need it, though, I almost always have to look it up to make sure I get all the parameters correct. Not any more, though. I’ll explain how the Chernoff bound is essentially the PDF of the “Gaussian you would expect” from the Central Limit Theorem.

The most common form of the Chernoff bound is the following: suppose you have n independent and identically distributed coin flips X_1, X_2, \dots, X_n, say the result of repeatedly flipping a coin that comes up Heads with probability p. The number of Heads is by definition binomially distributed; let S denote this random variable. Then with all but exponentially small probability, S is within O(\sqrt{n}) of its mean:

\text{Pr}[\left|S - np\right| > \delta \sqrt{n}] \leq 2\exp(-\delta^2 / (2p(1-p)))

The Central Limit Theorem (CLT) tells us that as n \to \infty, the distribution (S - \mathbb{E}(S)) / \sqrt{\text{Var}(S)} approaches a standard normal distribution. Philosophically, S approaches a normal distribution with the parameters you would expect: mean \mathbb{E}(S) and variance \text{Var}(S) (though the “scaled up” distributions may not necessarily converge from the CLT alone).

In our case, \mathbb{E}(S) = np and \text{Var}(S) = np(1-p). The PDF of the “Gaussian you would expect” is

p(S = x) \approx C \cdot \exp(-(x - np)^2 / (2np(1-p)))

for an appropriate normalizing constant C = 1/\sqrt{2\pi n p(1-p)}. Let’s rewrite that with x = np + \delta\sqrt{n}:

p(S = np + \delta\sqrt{n}) \approx C \cdot \exp(-(\delta \sqrt{n})^2 / (2np(1-p))) = C \cdot \exp(-\delta^2/(2p(1-p)))

This is (up to the factor of C) the expression appearing in the Chernoff bound! Thus we can think of the Chernoff bound as expressing an “even when n is small” version of the CLT, with a little loss in the multiplicative factor, from C to 2.
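To see how faithful the mnemonic is, here is a short sketch (assuming scipy; n = 1000 and p = 0.3 are arbitrary) comparing the exact binomial tail to the remembered bound:

```python
import math
from scipy.stats import binom

n, p = 1000, 0.3
for delta in [1.0, 2.0, 3.0]:
    t = delta * math.sqrt(n)
    # two-sided tail P(|S - np| > t) for S ~ Binomial(n, p)
    tail = binom.cdf(n * p - t, n, p) + binom.sf(n * p + t, n, p)
    bound = 2 * math.exp(-delta ** 2 / (2 * p * (1 - p)))
    print(delta, tail, bound)     # the bound dominates, within a modest factor
```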

Using the CDF instead of the PDF

We lost some credibility at one point in the technique above: we actually should have been looking at the cumulative probability

\text{Pr}[\left|S - np \right| \geq \delta \sqrt{n}]

Instead we noticed that the Chernoff bound can be remembered by looking at the PDF (and ignoring a nonconstant factor)

p(S = np+\delta\sqrt{n})

But using the CDF instead of the PDF actually gives the same expression, and with a truly constant factor. Let’s compute:

\text{Pr}[\left|S - np \right| \geq \delta \sqrt{n}] \approx 2\int_{np+\delta\sqrt{n}}^\infty p(S = x) d x

= 2\int_{\delta\sqrt{n}}^\infty 1/ \sqrt{2\pi np(1-p)} \cdot \exp(-x^2 / (2np(1-p))) d x

If we break up the integral into little \sqrt{n}-size pieces from \delta \sqrt{n} to \infty, the integral on a piece [k\sqrt{n}, (k+1)\sqrt{n}] looks like

\int_{k\sqrt{n}}^{(k+1)\sqrt{n}} 1/ \sqrt{2\pi np(1-p)} \cdot \exp(-x^2 / (2np(1-p))) d x

\leq 1/ \sqrt{2\pi np(1-p)} \int_{k\sqrt{n}}^{(k+1)\sqrt{n}} \exp(-k^2 / (2p(1-p))) d x

= 1/ \sqrt{2\pi p(1-p)} \cdot \exp(-k^2 / (2p(1-p)))

The exponent grows faster than linearly in k, and it is the only part of the expression that depends on k, so the contributions of the pieces go to 0 faster than geometrically. Therefore the integral is bounded by an infinite geometric series with initial term proportional to \exp(-\delta^2/(2p(1-p))), and hence the cumulative probability is (up to a constant) the expression in the Chernoff bound

\exp(-\delta^2/(2p(1-p)))

and this time the constant is actually constant!