Lecture 7: Arora-Rao-Vazirani
Lecture Outline
- Part I: Semidefinite Programming Relaxation for Sparsest Cut
- Part II: Combining Approaches
- Part III: Arora-Rao-Vazirani Analysis Overview
- Part IV: Analyzing Matchings of Close Points
- Part V: Reduction to the Well-Separated Case
- Part VI: Open Problems
Part I: Semidefinite Programming Relaxation for Sparsest Cut
- Reformulation: We want to minimize $\sum_{i,j: i<j,\ (i,j) \in E(G)} (x_i - x_j)^2$ over all cut pseudo-metrics, normalized so that $\sum_{i,j: i<j} (x_i - x_j)^2 = 1$
- More precisely, take $d^2(i, j) = (x_i - x_j)^2$ and minimize $\sum_{i,j: i<j,\ (i,j) \in E(G)} d^2(i, j)$ subject to:
  1. $\exists c: \forall i,\ x_i \in \{-c, +c\}$
  2. $\sum_{i,j: i<j} d^2(i, j) = 1$
Problem Reformulation
- Reformulation: Minimize $\sum_{i,j: i<j,\ (i,j) \in E(G)} (x_i^2 - 2x_ix_j + x_j^2)$ subject to:
  1. $\exists c: \forall i,\ x_i \in \{-c, +c\}$
  2. $\sum_{i,j: i<j} (x_i^2 - 2x_ix_j + x_j^2) = 1$
- Relaxation: Minimize $\sum_{i,j: i<j,\ (i,j) \in E(G)} (M_{ii} - 2M_{ij} + M_{jj})$ subject to:
  1. $\forall i, j:\ M_{ij} = M_{ji}$
  2. $\sum_{i,j: i<j} (M_{ii} - 2M_{ij} + M_{jj}) = 1$
  3. $M \succeq 0$
Problem Relaxation
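To see why this is a relaxation, note that if $M$ is the Gram matrix of any vectors $v_1, \dots, v_n$, then $M \succeq 0$ and $M_{ii} - 2M_{ij} + M_{jj} = \|v_i - v_j\|^2$. A quick numpy sanity check of this identity (illustrative, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(6, 3))   # six arbitrary vectors v_i in R^3
M = V @ V.T                   # Gram matrix: M_ij = <v_i, v_j>

assert np.all(np.linalg.eigvalsh(M) > -1e-9)   # M is PSD
for i in range(6):
    for j in range(6):
        # M_ii - 2 M_ij + M_jj recovers ||v_i - v_j||^2
        assert np.isclose(M[i, i] - 2 * M[i, j] + M[j, j],
                          np.sum((V[i] - V[j]) ** 2))
```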
- Consider the cycle of length $n$. The semidefinite program can place the cycle on the unit circle, assigning each $x_i$ the corresponding vector $v_i$.
Bad Example: The Cycle
[Figure: the cycle $G$ on vertices $1, \dots, 5$ embedded as vectors $v_1, \dots, v_5$ on the unit circle]
- $\sum_{i,j: i<j} d^2(i, j) = \Theta(n^2)$
- $\sum_{i,j: i<j,\ (i,j) \in E(G)} d^2(i, j) = \Theta\left(n \cdot \frac{1}{n^2}\right) = \Theta\left(\frac{1}{n}\right)$
- This gives sparsity $\Theta\left(\frac{1}{n^3}\right)$, while the true value is $\Theta\left(\frac{1}{n^2}\right)$
- The gap is $\Omega(n)$, which is horrible!
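The gap is easy to see numerically (an illustrative check, not from the lecture): place the cycle's vectors evenly on the unit circle and compare the normalized SDP objective with the true sparsity.

```python
import numpy as np

n = 200
theta = 2 * np.pi * np.arange(n) / n
V = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # v_i on the unit circle

i, j = np.triu_indices(n, k=1)
d2_all = ((V[i] - V[j]) ** 2).sum(axis=1).sum()        # Theta(n^2)
d2_edges = ((V - np.roll(V, -1, axis=0)) ** 2).sum()   # Theta(1/n) over the n edges

sdp_value = d2_edges / d2_all                # Theta(1/n^3) after normalization
true_value = 2 / ((n // 2) * (n - n // 2))   # cut 2 edges to split the cycle in half
print(sdp_value, true_value, true_value / sdp_value)   # ratio grows like Theta(n)
```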
Part II: Combining Approaches
- Why did the semidefinite program do so much worse than the linear program?
- Missing: the triangle inequalities $d^2(i, k) \le d^2(i, j) + d^2(j, k)$
- What happens if we add the triangle inequalities to the semidefinite program?
Adding the Triangle Inequalities
- Let $\theta$ be the angle between $v_i - v_j$ and $v_k - v_j$
- $\|v_i - v_k\|^2 = \|v_i - v_j\|^2 + \|v_k - v_j\|^2$ if $\theta = \frac{\pi}{2}$
- $\|v_i - v_k\|^2 > \|v_i - v_j\|^2 + \|v_k - v_j\|^2$ if $\theta > \frac{\pi}{2}$
- $\|v_i - v_k\|^2 < \|v_i - v_j\|^2 + \|v_k - v_j\|^2$ if $\theta < \frac{\pi}{2}$
- Triangle inequalities ⟺ no obtuse angles
Geometric Picture
[Figure: vectors $v_i$, $v_j$, $v_k$ with the angle $\theta$ at $v_j$]
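This is the law of cosines: $\|v_i - v_k\|^2 = \|v_i - v_j\|^2 + \|v_k - v_j\|^2 - 2(v_i - v_j) \cdot (v_k - v_j)$, so the triangle inequality at $j$ fails exactly when the angle at $v_j$ is obtuse. A quick numerical check of this identity (illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    vi, vj, vk = rng.normal(size=(3, 4))
    violation = (np.sum((vi - vk) ** 2)
                 - np.sum((vi - vj) ** 2) - np.sum((vk - vj) ** 2))
    # violation > 0 (triangle inequality fails) iff the angle at v_j is obtuse
    assert np.isclose(violation, -2 * (vi - vj) @ (vk - vj))
```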
- Putting $n > 4$ vectors on a circle violates the triangle inequality, so the semidefinite program no longer behaves badly on the cycle. In fact, it gets very close to the right answer.
Fixing Cycle Example
[Figure: three cycle vectors $v_i$, $v_j$, $v_k$ forming an obtuse angle $\theta$]
Goemans-Linial Relaxation
- Semidefinite program (proposed by Goemans and Linial): Minimize $\sum_{i,j: i<j,\ (i,j) \in E(G)} (M_{ii} - 2M_{ij} + M_{jj})$ subject to:
  1. $\forall i, j:\ M_{ij} = M_{ji}$
  2. $\forall i, j, k:\ d^2(i, k) \le d^2(i, j) + d^2(j, k)$, where $d^2(i, j) = M_{ii} - 2M_{ij} + M_{jj}$
  3. $\sum_{i,j: i<j} (M_{ii} - 2M_{ij} + M_{jj}) = 1$
  4. $M \succeq 0$
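A direct cvxpy transcription of this program (a minimal sketch under my own naming; the $O(n^3)$ triangle inequality constraints make it practical only for small $n$):

```python
import cvxpy as cp

def goemans_linial_sdp(n, edges):
    """Goemans-Linial relaxation for sparsest cut on the graph ([n], edges)."""
    M = cp.Variable((n, n), PSD=True)          # PSD variables are symmetric
    d2 = lambda i, j: M[i, i] - 2 * M[i, j] + M[j, j]

    constraints = [d2(i, k) <= d2(i, j) + d2(j, k)   # l2^2 triangle inequalities
                   for i in range(n) for j in range(n) for k in range(n)]
    constraints.append(sum(d2(i, j)                  # normalization
                           for i in range(n) for j in range(i + 1, n)) == 1)

    problem = cp.Problem(cp.Minimize(sum(d2(i, j) for i, j in edges)), constraints)
    problem.solve()
    return problem.value, M.value   # SDP value and the Gram matrix of the vectors
```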
Arora-Rao-Vazirani Theorem
- Theorem [ARV]: The Goemans-Linial relaxation for sparsest cut gives an $O(\sqrt{\log n})$-approximation and has a polynomial-time rounding algorithm.
- Definition: A metric is an $\ell_2^2$ metric if it is possible to assign a vector $w_x$ to every point $x$ such that $d(x, y) = \|w_y - w_x\|^2$
- $\ell_2^2$ metrics are also called metrics of negative type
- Last time: General metrics can be embedded into $\ell_1$ with $O(\log n)$ distortion
- Theorem [ALN08]: Any $\ell_2^2$ metric embeds into $\ell_1$ with $O(\sqrt{\log n}\,\log\log n)$ distortion
- [ARV] analyzes the algorithm more directly
$\ell_2^2$ Metric Spaces
- Degree 4 SOS captures the triangle inequality: if $x_i^2 = x_j^2 = x_k^2$ then
  $x_j^2 (x_i - x_k)^2 \le x_j^2 (x_i - x_j)^2 + x_j^2 (x_j - x_k)^2$
  $\iff 2x_j^2 (x_j^2 - x_i x_k) \le 2x_j^2 (2x_j^2 - x_i x_j - x_j x_k)$
- Proof: $(x_i - x_j)^2 (x_j - x_k)^2 = 4(x_j^2 - x_i x_j)(x_j^2 - x_j x_k) = 4x_j^2 (x_j^2 - x_i x_j - x_j x_k + x_i x_k) \ge 0$
- Thus, degree 4 SOS captures the Goemans-Linial relaxation
Goemans-Linial Relaxation and SOS
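Once $x_i^2 = x_j^2 = x_k^2 = 1$ is substituted, both sides of the proof's identity become multilinear, so checking all $\pm 1$ assignments verifies it. A quick sanity check (illustrative, not from the lecture):

```python
import itertools

for xi, xj, xk in itertools.product([-1, 1], repeat=3):
    lhs = (xi - xj) ** 2 * (xj - xk) ** 2
    rhs = 4 * xj ** 2 * (xj ** 2 - xi * xj - xj * xk + xi * xk)
    assert lhs == rhs and lhs >= 0
```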
Part III: Arora-Rao-Vazirani Analysis Overview
- The semidefinite program gives us one vector $v_i$ for each vertex $i$
- We first consider the case when these vectors are spread out
- Definition: We say that a set of $n$ vectors $\{v_i\}$ is well-spread if it can be scaled so that:
  1. $\forall i:\ \|v_i\| \le 1$
  2. $\frac{1}{n^2} \sum_{i<j} d_{ij}^2$ is $\Omega(1)$, where $d_{ij}^2 = \|v_i - v_j\|^2$ (the average squared distance between vectors is constant)
- We will assume we are using this scaling
Well-Spread Case
- Theorem: Given a set of $n$ vectors $\{v_i\}$ which are well-spread and obey the triangle inequality, there exist well-separated subsets $A$ and $B$ of these vectors of linear size. In other words, there exist $A$, $B$ such that:
  1. $A$ and $B$ are $\Delta$ far apart (i.e. $\forall v_i \in A,\ v_j \in B:\ d_{ij}^2 \ge \Delta$), where $\Delta$ is $\Omega\left(\frac{1}{\sqrt{\log n}}\right)$
  2. $|A|$ and $|B|$ are both $\Omega(n)$
Structure Theorem
- Idea: If we have well-separated subsets $A$, $B$, take a random cut of the form $(S_r, \bar{S_r})$ where $S_r = \{i : d^2(v_i, A) \le r\}$, $d^2(v_i, A) = \min_{j: v_j \in A} d_{ij}^2$, and $r$ is chosen uniformly from $[0, \Delta]$
- Each edge $(i, j) \in E(G)$ contributes at most $\frac{d_{ij}^2}{\Delta}$ to the expected number of edges cut, and $d_{ij}^2$ to $\sum_{i,j: i<j,\ (i,j) \in E(G)} d_{ij}^2$ (the number of edges the SDP "thinks" are cut)
Finding a Sparse Cut
- Since $A$, $B$ have size $\Omega(n)$ and are always on opposite sides of the cut, we always have that $|S_r| \cdot |\bar{S_r}|$ is $\Theta(n^2)$. This matches $\sum_{i,j: i<j} d_{ij}^2$ up to a constant factor (this is why we need $A$ and $B$ to have linear size!)
- Thus, the expected ratio of the sparsity to the SDP value is at most $\frac{1}{\Delta} = O(\sqrt{\log n})$, as needed
Finding a Sparse Cut Continued
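A numpy sketch of this rounding step (illustrative; it assumes the well-separated set $A$ and the separation $\Delta$ have already been found):

```python
import numpy as np

def round_to_cut(V, A, Delta, rng=np.random.default_rng()):
    """Round SDP vectors V (an n x d array) to a cut, given indices A."""
    # d^2(v_i, A) = min over v_j in A of ||v_i - v_j||^2
    d2_to_A = np.array([min(np.sum((v - V[j]) ** 2) for j in A) for v in V])
    r = rng.uniform(0, Delta)     # random threshold in [0, Delta]
    return d2_to_A <= r           # indicator of S_r
```

An edge $(i, j)$ is cut only when $r$ lands between $d^2(v_i, A)$ and $d^2(v_j, A)$, which differ by at most $d_{ij}^2$ by the triangle inequality; this is where the $\frac{d_{ij}^2}{\Delta}$ bound comes from.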
- Take the hypercube $\left\{-\frac{1}{\sqrt{\log_2 n}}, +\frac{1}{\sqrt{\log_2 n}}\right\}^{\log_2 n}$
- $X = \left\{x : \sum_i x_i \le -1\right\}$ and $Y = \left\{y : \sum_i y_i \ge 1\right\}$ have the following properties:
  1. $X$ and $Y$ have linear size
  2. Every $x \in X$ and $y \in Y$ differ in $\ge \sqrt{\log_2 n}$ coordinates. Thus, $d^2(x, y) \ge \sqrt{\log_2 n} \cdot \frac{4}{\log_2 n} = \frac{4}{\sqrt{\log_2 n}}$
Tight Example: Hypercube
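A small numpy check of these properties (illustrative; the exact constants only matter up to the $\Theta(1/\sqrt{\log_2 n})$ scaling):

```python
import itertools
import numpy as np

m = 10   # m = log2(n), so there are n = 2^m points
cube = np.array(list(itertools.product([-1, 1], repeat=m))) / np.sqrt(m)
sums = cube.sum(axis=1)
X, Y = cube[sums <= -1], cube[sums >= 1]
print(len(X) / len(cube), len(Y) / len(cube))   # both are constant fractions

d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
print(d2.min(), 4 / np.sqrt(m))   # min squared X-Y distance vs the 4/sqrt(m) bound
```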
- Let $d$ be the dimension such that $\forall i:\ v_i \in \mathbb{R}^d$.
- Algorithm (parameters $\varepsilon > 0$, $\Delta$, $\sigma$):
  1. Choose a random unit vector $u \in \mathbb{R}^d$.
  2. Find a value $c$ such that there are $\Omega(n)$ vectors $v_i$ with $v_i \cdot u \le c$ and $\Omega(n)$ vectors $v_j$ with $v_j \cdot u \ge c + \frac{\sigma}{\sqrt{d}}$. Let $A'$ and $B'$ be these two sets of vectors.
  3. As long as there is a pair $v_i \in A'$, $v_j \in B'$ such that $d^2(v_i, v_j) < \Delta$, delete $v_i$ from $A'$ and $v_j$ from $B'$. The resulting sets will be the desired $A$, $B$.
- Need to show: $P[A, B \text{ have size } \Omega(n)]$ is $\Omega(1)$
Finding Well-Separated Sets
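A numpy sketch of one trial (illustrative; step 2 is simplified to taking fixed quantiles of the projections rather than searching for a threshold $c$, and step 3 is the obvious quadratic-time loop):

```python
import numpy as np

def find_well_separated(V, Delta, frac=0.25, rng=np.random.default_rng()):
    """One trial of steps 1-3 on SDP vectors V (an n x d array)."""
    n, d = V.shape
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                   # step 1: random unit direction
    order = np.argsort(V @ u)
    A = set(order[: int(frac * n)])          # step 2: smallest projections
    B = set(order[-int(frac * n):])          # step 2: largest projections
    deleting = True
    while deleting:                          # step 3: delete close cross pairs
        deleting = False
        for i in list(A):
            j = next((j for j in B if np.sum((V[i] - V[j]) ** 2) < Delta), None)
            if j is not None:
                A.remove(i); B.remove(j)
                deleting = True
    return A, B
```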
- We will first explain why steps 1 and 2 succeed with probability $2\delta > 0$.
- We will then show that the probability that step 3 deletes a linear number of points is $\le \delta$.
- Together, this implies that the entire algorithm succeeds with probability at least $\delta > 0$.
Finding Well-Separated Sets
- What happens if we project a vector $w$ of length $\ell$ onto a random direction in $\mathbb{R}^d$?
- Without loss of generality, assume $w = \ell e_1$
- To pick a random unit vector in $\mathbb{R}^d$, choose each coordinate according to $N\left(0, \frac{1}{d}\right)$ (the normal distribution with mean 0 and standard deviation $\frac{1}{\sqrt{d}}$), then rescale
- If $d$ is not too small, w.h.p. very little rescaling will be needed
Behavior of Gaussian Projections
- What happens if we project a vector of length $\ell$ onto a random direction in $\mathbb{R}^d$?
- The resulting value has a distribution which is $\approx$ a normal distribution with mean 0 and standard deviation $\frac{\ell}{\sqrt{d}}$ (the difference comes from the rescaling step)
Behavior of Gaussian Projections
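An empirical check (illustrative): project a fixed vector of length $\ell$ onto many random unit directions and compare the sample standard deviation with $\frac{\ell}{\sqrt{d}}$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, ell, trials = 400, 3.0, 20000
w = np.zeros(d); w[0] = ell                        # WLOG w = ell * e_1

U = rng.normal(0, 1 / np.sqrt(d), size=(trials, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)      # random unit directions
print((U @ w).std(), ell / np.sqrt(d))             # both are ~ 0.15
```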
- If we take a random unit vector $u \in \mathbb{R}^d$, then with probability $\Omega(1)$, $\sum_{i<j} ((v_i - v_j) \cdot u)^2$ is $\Omega\left(\frac{n^2}{d}\right)$
- Note: this can fail with non-negligible probability; consider the case when $\forall i:\ v_i = \pm w$. If $u$ is orthogonal to $w$ then everything is projected to 0
- For arbitrarily small $\varepsilon > 0$, with very high probability $|v_i \cdot u|$ is $O\left(\frac{1}{\sqrt{d}}\right)$ for $(1 - \varepsilon)n$ of the $i \in [1, n]$
Success of Steps 1,2
- Together, these facts imply that if we choose a random unit vector $u$, then with probability $\Omega(1)$ there exist $A'$, $B'$, $c_1$, $c_2$ such that:
  1. $A'$, $B'$ have size $\Omega(n)$
  2. $\forall x \in A':\ u \cdot x \le c_1$
  3. $\forall y \in B':\ u \cdot y \ge c_2$
  4. $c_2 - c_1$ is $\Omega\left(\frac{1}{\sqrt{d}}\right)$
Success of Steps 1,2
- We need to show that the probability that step 3 eliminates $\frac{\min\{|A'|, |B'|\}}{2}$ pairs of points is at most $\delta$
- We also need to show how the general case can be reduced to the well-spread case
Remaining Steps
Part IV: Analyzing Matchings of Close Points
Matching Covers
- If step 3 of the algorithm causes it to fail with probability $\delta$, then for a $\delta$ fraction of the directions $u$ there is a matching $M_u$ of points of size $c'n$ such that for each pair $(v_i, v_j)$ in the matching:
  1. $d^2(v_i, v_j) \le \Delta$
  2. $(v_i - v_j) \cdot u \ge \frac{2\sigma}{\sqrt{d}}$
  where $\sigma, c', \delta > 0$ are constants
- Note: Corresponds to Definition 4 in [ARV]
- Define the matching graph $M$ to be $M = \cup_u M_u$
- Assume that $d(v_i, v_j) \le \sqrt{\Delta}$ for some $v_i$, $v_j$
- $P\left[(v_i - v_j) \cdot u \ge \frac{2\sigma}{\sqrt{d}}\right] \sim e^{-4\sigma^2 / d^2(v_i, v_j)} \le e^{-4\sigma^2 / \Delta}$
- If $\Delta$ is a sufficiently small constant times $\frac{1}{\log n}$, with high probability there are no pairs of close points at all between $A'$ and $B'$!
Analyzing $\Delta = \Omega\left(\frac{1}{\log n}\right)$
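- Spelled out, the union bound behind this claim (a worked version of the slide's estimate): $P\left[\exists\, i, j:\ d^2(v_i, v_j) \le \Delta \text{ and } (v_i - v_j) \cdot u \ge \frac{2\sigma}{\sqrt{d}}\right] \le n^2 e^{-4\sigma^2/\Delta}$, and once $\Delta \le \frac{\sigma^2}{\ln n}$ this is at most $n^2 e^{-4\ln n} = n^{-2}$.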
- When the algorithm fails in step 3, this gives us pairs of points $(v_i, v_j)$ which are edges of the matching graph $M$, implying that $d^2(v_i, v_j) \le \Delta$ and $(v_i - v_j) \cdot u \ge \frac{2\sigma}{\sqrt{d}}$
- We will use this to find pairs of points $(v_i, v_j)$ which are $k$ steps apart in the matching graph where $(v_i - v_j) \cdot u \ge \frac{k\sigma}{\sqrt{d}}$
Key Idea for Larger $\Delta$
- We will find pairs of points $(v_i, v_j)$ which are $k$ steps apart in the matching graph where $(v_i - v_j) \cdot u \ge \frac{k\sigma}{\sqrt{d}}$
- Using the triangle inequality, $d^2(v_i, v_j) \le k\Delta$
- $P\left[(v_i - v_j) \cdot u \ge \frac{k\sigma}{\sqrt{d}}\right] \sim e^{-k^2\sigma^2 / d^2(v_i, v_j)} \le e^{-k\sigma^2 / \Delta}$
- For $\Delta = \Omega\left(\frac{1}{\sqrt{\log n}}\right)$, if we can apply this with $k = \Omega(\sqrt{\log n})$, we again obtain a contradiction
Key Idea for Larger $\Delta$ Continued
- Lemma: If a graph $H$ has average degree $D$, we can find a non-empty subgraph of $H$ which has minimal degree $\ge \frac{D}{4}$.
- Proof: Iteratively delete vertices which have degree $\le \frac{D}{4}$. The total number of edges deleted is at most $\frac{nD}{4}$. However, $2|E(H)| \ge nD$, so there must be $\ge \frac{nD}{4}$ edges remaining.
Average Degree to Minimal Degree
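The proof is algorithmic; a direct Python implementation of the deletion procedure (a sketch, with names of my own choosing):

```python
from collections import defaultdict

def min_degree_subgraph(n, edges):
    """Return vertices inducing a subgraph of min degree > D/4,
    where D = 2*len(edges)/n is the average degree."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    threshold = len(edges) / (2 * n)   # equals D/4
    alive = set(range(n))
    queue = [v for v in alive if len(adj[v]) <= threshold]
    while queue:                       # iteratively delete low-degree vertices
        v = queue.pop()
        if v not in alive:
            continue
        alive.remove(v)
        for w in adj[v]:
            adj[w].discard(v)
            if w in alive and len(adj[w]) <= threshold:
                queue.append(w)
        adj[v].clear()
    return alive   # non-empty: at most nD/4 < |E| edges are ever deleted
```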
- The average probability that a vertex is matched is at least $c'\delta$
- We can apply a similar idea and delete any vertex which is matched with probability $\le \frac{c'\delta}{4}$
- By similar logic, at least half the edges are preserved
- This implies that there are at least $c'n$ vertices remaining (otherwise more than half of every matching of $\ge c'n$ edges would be deleted)
- Note: Corresponds to Lemma 4 of [ARV]
Minimal Probability Guarantee
- Corollary: There is a set of vertices $V$ of size $\ge c'n$ such that $\forall x \in V$: $P[x \text{ is matched with an } x' \in V] \ge \delta'$, where $\delta' = \frac{c'\delta}{4}$
Minimal Probability Guarantee
- How can we find pairs of points whose projected distance is larger and larger by taking steps in the matching graph?
- Let's assume we have a very convenient inductive setup.
Building Up Projection Distances
- We have a set of points $V$ of size $\ge c'n$ such that $\forall x \in V$: $P[x \text{ is matched with an } x' \in V] \ge \delta'$
- Inductive setup: Assume we also have a subset $S \subseteq V$ of points of size $\gamma|V|$ such that $\forall z \in S$: $P\left[\exists z' \in V:\ d_M(z, z') \le k,\ (z - z') \cdot u \ge \frac{k\sigma}{\sqrt{d}}\right] \ge 1 - \frac{\delta'}{4}$, where $d_M(z, z')$ is the number of steps required to reach $z'$ from $z$ in the matching graph
- Note: This corresponds to Definitions 6 and 8 of [ARV]
Setup
- $V$ is a set of points where every $x \in V$ is matched to another $x' \in V$ for $\ge \delta'$ fraction of the directions
- We have a subset $S \subseteq V$ of size $\ge \gamma|V|$ where each $z \in S$ is "covered" in $\ge 1 - \frac{\delta'}{4}$ fraction of the directions by points which are $\le k$ steps away in the matching graph and whose projected distance is $\ge \frac{k\sigma}{\sqrt{d}}$
Setup Rephrased
Composition Step
[Figure: a point $z$ with its matched point $x'$ and covering point $z'$, projected onto the direction $u$]
- Given a direction $u$, for each point $z \in S$:
  1. Check if $z$ is matched in $M_u = M_{-u}$ (the matching does not depend on the sign of the direction).
  2. If so, let $x'$ be the point $z$ is matched with, so $|(z - x') \cdot u| \ge \frac{2\sigma}{\sqrt{d}}$.
  3. If $(z - x') \cdot u > 0$, check if $z$ is covered in direction $u$. If $(z - x') \cdot u < 0$, check if $z$ is covered in direction $-u$. With probability $\ge 1 - \frac{\delta'}{2}$, $z$ is covered in both directions. Let $z'$ be the covering point.
  4. Observe that $(z' - x') \cdot u \ge \frac{k\sigma + 2\sigma}{\sqrt{d}}$ and $d_M(x', z') \le k + 1$.
Composition Step
- We have that the density of the new covering edges is at least $\frac{\gamma\delta'}{2}$.
- Following the same kind of logic we used to go from average to minimal degree, we can find a subset $S' \subseteq V$ of size $\ge \frac{\gamma\delta'}{8}|V|$ where every vertex $z' \in S'$ is covered in $\ge \frac{\gamma\delta'}{8}$ of the directions.
- Note: Corresponds to Lemma 11 of [ARV]
Composition Step
- How can we recover the inductive hypothesis?
- We can boost the covering probability to almost 1 with a small loss in the projection length!
- Corollary 12 of [ARV], rephrased: If the covering vectors have length at most $\frac{\sigma}{16\sqrt{\log\frac{16}{\gamma\delta'}} + 8\sqrt{\log\frac{8}{\delta'}}}$, then if $z$ is covered with probability $\frac{\gamma\delta'}{8}$ with projection length $\frac{k\sigma + 2\sigma}{\sqrt{d}}$, it is covered with probability $1 - \frac{\delta'}{4}$ with projection length $\frac{(k+1)\sigma}{\sqrt{d}}$
Boosting Lemma
- If we apply this directly:
  - $\gamma \sim \delta'^k$
  - We need the covering vectors to have length $O\left(\frac{1}{\sqrt{\log\frac{1}{\gamma\delta'}}}\right) = O\left(\frac{1}{\sqrt{k}}\right)$
  - They are only guaranteed to have length $\le \sqrt{k\Delta}$
  - We can take $k = \Theta(\Delta^{-1/2})$. We want $\frac{k}{\Delta}$ to be a large constant times $\log n$, which means we can take $\Delta = \Theta\left((\log n)^{-2/3}\right)$
Bound on $k$ and $\Delta$
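- The arithmetic, spelled out: the length constraint $\sqrt{k\Delta} \le O\left(\frac{1}{\sqrt{k}}\right)$ caps $k$ at $\Theta(\Delta^{-1/2})$, and requiring $\frac{k}{\Delta} = \Delta^{-3/2}$ to be a large constant times $\log n$ then forces $\Delta = \Theta\left((\log n)^{-2/3}\right)$.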
- To reach $k = \Omega(\sqrt{\log n})$ (and hence $\Delta = \Omega\left(\frac{1}{\sqrt{\log n}}\right)$), a more careful argument is needed; see [ARV].
- Note: We should not expect $k$ to be any higher than $O(\sqrt{\log n})$. Recalling that the projection length with $k$ steps is $\frac{k\sigma}{\sqrt{d}}$: if $d = \Theta(\log n)$ (matching the hypercube example) and $k$ is $\omega(\sqrt{\log n})$, then this is $\omega(1)$, which is too large!
Reaching $k = \Omega(\sqrt{\log n})$
Part V: Reduction to the Well-Separated Case
- Take the scaling where $\sum_{i,j: i<j} d^2(i, j) = \binom{n}{2}$ (i.e. the average squared distance between pairs of points is 1)
- One of the following two cases holds:
  1. There exists a point $x_0$ such that $\frac{n}{10}$ other points are within squared distance $\frac{1}{10}$ of $x_0$
  2. For all points $x$, fewer than $\frac{n}{10}$ other points are within squared distance $\frac{1}{10}$ of $x$
Two Cases
- Assume there exists a point $x_0$ such that $\frac{n}{10}$ other points are within squared distance $\frac{1}{10}$ of $x_0$
- Let $S = \{x : d^2(x, x_0) \le \frac{1}{10}\}$
- Key idea: Take the Fréchet embedding with respect to $S$!
- In particular, take $d_S(y, z) = |d^2(y, S) - d^2(z, S)|$
Case #1
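A numpy sketch of this embedding (illustrative): each point is mapped to the single coordinate $f(x) = d^2(x, S)$, so $d_S$ is a line metric and in particular an $\ell_1$ metric.

```python
import numpy as np

def frechet_embedding(V, S):
    """Map each row x of V to f(x) = d^2(x, S) = min over s in S of ||x - s||^2."""
    f = np.array([min(np.sum((x - V[s]) ** 2) for s in S) for x in V])
    return f   # then d_S(y, z) = |f[y] - f[z]|
```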
- We will show that $\frac{\sum_{i,j: i<j,\ (i,j) \in E(G)} d_S(i, j)}{\sum_{i,j: i<j} d_S(i, j)}$ is $O\left(\frac{\sum_{i,j: i<j,\ (i,j) \in E(G)} d^2(i, j)}{\sum_{i,j: i<j} d^2(i, j)}\right)$
- $d_S$ is an $\ell_1$ metric, so this gives an $O(1)$-approximation!
- First note that $\sum_{i,j: i<j,\ (i,j) \in E(G)} d_S(i, j)$ is less than or equal to $\sum_{i,j: i<j,\ (i,j) \in E(G)} d^2(i, j)$
- We just need to show that $\sum_{i,j: i<j} d_S(i, j)$ is $\Omega(n^2)$
Case #1 Continued
- Proposition: The average squared distance of points outside of $S$ from $S$ is at least $\frac{1}{5}$
- Proof: If this were not the case then the average squared distance between points would be $< 1$, as for all $y$, $z$: $d^2(y, z) \le d^2(y, S) + d^2(z, S) + \frac{1}{5}$
- Corollary: $\sum_{i,j: i<j} d_S(i, j)$ is $\Omega(n^2)$. To show this, it is sufficient to consider the pairs where exactly one of $i$, $j$ is in $S$
Case #1 Continued
- Assume that for all points $x$, there are fewer than $\frac{n}{10}$ other points which are within squared distance $\frac{1}{10}$ of $x$
- Proposition: There is a point $x_0$ such that at least $\frac{n}{2}$ other points are within squared distance 2 of $x_0$
- Proof: If this were not the case then the average squared distance between points would be $> 1$.
- Let $T$ be the set of points within squared distance 2 of $x_0$.
Case #2
- Key idea: Subtract $x_0$ from all vectors!
- After this translation:
  - All points in $T$ have length $\le \sqrt{2}$
  - For all points $x \in T$, there are at least $\frac{n}{2} - \frac{n}{10} = \frac{2n}{5}$ points in $T$ which have squared distance more than $\frac{1}{10}$ from $x$. Thus, the average squared distance between points in $T$ is $\Omega(1)$
- Restricting to $T$ and scaling down by a factor of $\sqrt{2}$, we are now in the well-spread case
Case #2 Continued
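A numpy sketch of this reduction (illustrative):

```python
import numpy as np

def reduce_to_well_spread(V, x0):
    """Case 2 reduction: translate by x0, keep nearby points, rescale."""
    shifted = V - x0                                 # subtract x0 from all vectors
    T = shifted[(shifted ** 2).sum(axis=1) <= 2]     # squared distance <= 2 from x0
    return T / np.sqrt(2)                            # now all lengths are <= 1
```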
Part VI: Open Problems
- Lower bounds have been shown for this semidefinite program.
- Khot and Vishnoi [KV05] proved the first super-constant lower bound.
- For weighted graphs, Naor and Young [NY17] showed an $\Omega(\sqrt{\log n})$ lower bound (which is tight up to a $\log\log n$ factor).
- However, these lower bounds don't apply even to degree 4 SOS!
Lower Bounds
- Is this also true for unweighted graphs?
- Does degree 4 SOS or higher degree SOS give further improvements? Can we show a super-constant lower bound for a constant number of rounds of SOS?
Open Questions
References
- [ALN08] S. Arora, J. R. Lee, A. Naor. Euclidean distortion and the sparsest cut. J. Amer. Math. Soc. 21 (1), p. 1-21, 2008.
- [ARV] S. Arora, S. Rao, U. Vazirani. Expander Flows, Geometric Embeddings and Graph Partitioning. https://www.cs.princeton.edu/~arora/pubs/arvfull.pdf
- [KV05] S. Khot, N. Vishnoi. The unique games conjecture, integrality gap for cut problems and embeddability of negative type metrics into $\ell_1$. FOCS 2005.
- [NY17] A. Naor, R. Young. The integrality gap of the Goemans-Linial SDP relaxation for Sparsest Cut is at least a constant multiple of $\sqrt{\log n}$. STOC 2017.