Johnson-Lindenstrauss Lemma


Theorem 1 Given any {n} vectors {(x_1,\ldots,x_n)\in {\mathbb R}^d} and a parameter {\epsilon >0}, there exists a mapping {\Phi:{\mathbb R}^d \rightarrow {\mathbb R}^k}, where {k=O(\log n/\epsilon^2)}, such that for any pair of vectors {x_i,x_j}, we have

\displaystyle (1-\epsilon)||x_i - x_j||_2 \leq ||\Phi(x_i) - \Phi(x_j)||_2 \leq (1+\epsilon)||x_i - x_j||_2

The lemma is quite fascinating; it states that if one is interested only in pairwise distances then one can reduce the dimension by possibly an exponential factor. This, for instance, has huge consequences in nearest neighbor methods used in data mining and machine learning.
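
To get a quick feel for the statement, here is a minimal numerical sketch of the Gaussian random-projection construction described below; the point set, the value of {\epsilon}, and the constant in the choice of {k} are my own illustrative choices, and numpy/scipy are used purely for convenience.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

n, d, eps = 100, 10_000, 0.25
k = int(np.ceil(4 * np.log(n) / eps**2))   # k = O(log n / eps^2); the constant 4 is illustrative

X = rng.standard_normal((n, d))            # n arbitrary points in R^d
A = rng.standard_normal((k, d))            # entries A_ij ~ N(0, 1)
Y = X @ A.T / np.sqrt(k)                   # row i is Phi(x_i) = A x_i / sqrt(k)

ratios = pdist(Y) / pdist(X)               # distortion of each of the n(n-1)/2 pairwise distances
print(ratios.min(), ratios.max())          # typically lands inside (1 - eps, 1 + eps)
```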

Today, one can think of the proof of the above lemma as an advanced exercise in a probability class after one has proved the Chernoff-Hoeffding bounds. The crux of the proof lies in the following lemma. Let {A} be a {k\times d} matrix where each entry {A_{ij} \sim N(0,1)} is a normal random variable with mean {0} and variance {1}. Let {u} be any vector in {{\mathbb R}^d}, and let {v = Au/\sqrt{k}}.

Lemma 2 {~~\Pr[||v||_2 \notin (1\pm\epsilon)||u||_2] \leq \exp(-C\epsilon^2k).}

The lemma implies that if we pick {k = \Theta(\log n/\epsilon^2)}, then for any set of {n} vectors {(x_1,\ldots,x_n)\in {\mathbb R}^d}, their images {y_i=Ax_i/\sqrt{k}} satisfy w.h.p. the property that no pairwise distance is distorted by more than a {(1\pm\epsilon)} factor: apply Lemma 2 to each difference vector {x_i - x_j} and take a union bound over all pairs, as sketched below. This gives the mapping {\Phi} in the JL theorem.
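
Indeed, with {y_i - y_j = A(x_i - x_j)/\sqrt{k}}, Lemma 2 applied to each difference vector together with a union bound over the {\binom{n}{2}} pairs gives, for say {k = 3\ln n/(C\epsilon^2)} (any constant larger than {2/C} works),

\displaystyle \Pr\big[\exists\, i<j:\ ||y_i - y_j||_2 \notin (1\pm\epsilon)||x_i - x_j||_2\big] \leq \binom{n}{2}\exp(-C\epsilon^2 k) \leq n^2\cdot n^{-3} = \frac{1}{n}.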

Proof: For simplicity, let’s assume {u} is a unit vector, and let {w = Au}. Observe that each coordinate {w_i \sim N(0,1)}, since it is a linear combination of independent Gaussians with coefficients whose squares sum to {1}. Therefore, the squared length of {w}

\displaystyle ||w||^2_2 = \sum_{i=1}^k w^2_i \sim \chi^2_k

has the chi-squared distribution with {k} degrees of freedom. At this point one could open a reference to get {\Pr[\big|||w||^2_2 - k\big| \geq \epsilon k] \leq \exp(-C\epsilon^2 k)}. \Box
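
As a quick sanity check of this tail bound, one can compare the exact chi-squared tail with the claimed {\exp(-C\epsilon^2 k)} form numerically; the values of {k}, {\epsilon}, and the constant {C=1/12} below are illustrative choices of mine (that value of {C} is what the one-sided calculation further down gives).

```python
import numpy as np
from scipy.stats import chi2

k, eps = 300, 0.25
C = 1.0 / 12                          # illustrative; the lemma only asserts some constant C > 0

# Exact two-sided tail  Pr[ |chi2_k - k| >= eps * k ]
tail = chi2.sf((1 + eps) * k, k) + chi2.cdf((1 - eps) * k, k)

# Claimed exp(-C eps^2 k) form, doubled to cover both tails
bound = 2 * np.exp(-C * eps**2 * k)
print(f"exact tail = {tail:.3e}   vs   bound = {bound:.3e}")
```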

Therefore, the JL lemma boils down to a tail bound on the chi-squared distribution. This can be proved either directly (via an integration, since the cdf of the chi-squared distribution is known exactly), or along the lines of the standard Chernoff bound proof. Let us do one direction:

\displaystyle  \begin{array}{rcl}  \Pr[||w||^2_2 \geq (1+\epsilon)k] & = & \Pr[\exp(t\sum_i w^2_i) \geq \exp(tk(1+\epsilon))] ~~~ \textrm{ (for any } t>0 \textrm{) }\\ 								& \leq & \mathop\mathbf{Exp}[\exp(t\sum_i w^2_i)]\exp(-tk(1+\epsilon)) ~~~ \textrm{ (Markov) }\\ 								& = & \left(\prod_{i=1}^k \mathop\mathbf{Exp}[\exp(tw_i^2)] \right)\exp(-tk(1+\epsilon)) ~~~ \textrm{ (Independence) } \end{array}

Given a normal random variable {X\sim N(0,1)} and a parameter {t < 1/2}, one can do an integration exercise to get {\mathop\mathbf{Exp}[e^{tX^2}] = \frac{1}{\sqrt{1-2t}}}. Putting this in, we get, for any {0 < t < 1/2},

\displaystyle \Pr[||w||^2_2 \geq (1+\epsilon)k] \leq (1-2t)^{-k/2} \exp(-tk(1+\epsilon))
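
For completeness, the integration exercise behind {\mathop\mathbf{Exp}[e^{tX^2}] = \frac{1}{\sqrt{1-2t}}} is a single Gaussian integral: for {t < 1/2},

\displaystyle \mathop\mathbf{Exp}[e^{tX^2}] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{tx^2}e^{-x^2/2}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(1-2t)x^2/2}\,dx = \frac{1}{\sqrt{1-2t}},

using {\int_{-\infty}^{\infty} e^{-ax^2/2}\,dx = \sqrt{2\pi/a}} with {a = 1-2t > 0}.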

Setting {t = \frac{\epsilon}{2(1+\epsilon)}}, which minimizes the right-hand side, we get

\displaystyle \Pr[||w||^2_2 \geq (1+\epsilon)k] \leq (1+\epsilon)^{k/2}e^{-k\epsilon/2} = e^{-\frac{k}{2}(\epsilon - \ln(1+\epsilon))} \leq e^{-Ck\epsilon^2}
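
The last inequality uses the elementary estimate, valid for {0 < \epsilon \leq 1},

\displaystyle \epsilon - \ln(1+\epsilon) \geq \frac{\epsilon^2}{2} - \frac{\epsilon^3}{3} \geq \frac{\epsilon^2}{6},

so this direction holds with, e.g., {C = 1/12}; the lower tail {\Pr[||w||^2_2 \leq (1-\epsilon)k]} is bounded in exactly the same way.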

Let me end by referring to a very nice paper by Jiri Matousek, who shows how one can get the above theorem when the {A_{ij}}'s are random variables with a sub-Gaussian tail.
