Communication Complexity with Small Advantage

Thomas Watson
University of Memphis
Communication complexity

F : {0,1}^n × {0,1}^n → {0,1}

(Alice: x) ⟷ (Bob: y)
The players alternate sending messages back and forth; at the end, both output F(x, y).

Randomized protocols:
◮ Correctness: ∀(x, y) : P[output is F(x, y)] ≥ 3/4
◮ Cost: max over (x, y) and random outcomes of the # bits communicated
◮ Complexity: R(F) = min over correct protocols of the cost
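To make the model concrete, here is a toy sketch (not from the deck) of the classic public-coin Equality protocol, which meets the 3/4-correctness requirement with only 2 bits of communication:

import random

def equality_protocol(x, y, reps=2):
    # Public randomness: both players see the same random string r each round.
    # Alice sends the single bit <x, r> mod 2; Bob compares it with <y, r> mod 2.
    # If x == y the bits always agree; if x != y they disagree with prob 1/2
    # per round, so with reps=2 the error probability is at most 1/4.
    n = len(x)
    for _ in range(reps):
        r = [random.randrange(2) for _ in range(n)]
        alice_bit = sum(xi & ri for xi, ri in zip(x, r)) % 2
        bob_bit = sum(yi & ri for yi, ri in zip(y, r)) % 2
        if alice_bit != bob_bit:
            return 0  # certainly x != y (one-sided error)
    return 1  # x == y with probability >= 3/4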
Classic results

R(Inner-Product) = Θ(n)
R(Set-Intersection) = Θ(n)
R(Gap-Hamming) = Θ(n)

R : success probability ≥ 3/4
Small advantage: R_{1/2+ε} : success probability ≥ 1/2 + ε
Classic results — revisited

R_{1/2+ε}(Inner-Product) = Θ(n) (unless ε ≤ 2^{−Ω(n)})
R_{1/2+ε}(Set-Intersection) = Θ(ε · n)
R_{1/2+ε}(Gap-Hamming) = Θ(ε² · n)
...
Other functions?
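A standard calibration point (a well-known fact, not stated on the slide): running a small-advantage protocol O(1/ε²) times and taking a majority vote boosts advantage ε to a constant, so

\[
R(F) \le O\!\left(\frac{1}{\epsilon^{2}}\right) \cdot R_{1/2+\epsilon}(F),
\qquad\text{equivalently}\qquad
R_{1/2+\epsilon}(F) \ge \Omega\!\left(\epsilon^{2} \cdot R(F)\right).
\]

Thus the Θ(ε² · n) bound for Gap-Hamming matches what generic amplification predicts, while Θ(ε · n) bounds like Set-Intersection's say something genuinely stronger.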
Climbing the polynomial hierarchy

NP: R_{1/2+ε}(Set-Intersection) = Θ(ε · n)
[Braverman–Moitra STOC’13, Göös–Watson RANDOM’14]
(information complexity) (corruption)

Σ₂P, Π₂P: R_{1/2+ε}(Tribes) = Θ(ε · n)

Higher levels? (read-once AC⁰ formulas)
Constant advantage: well understood [Jayram–Kopparty–Raghavendra / Leonardos–Saks CCC’09]
Small advantage: open

(The hierarchy correspondence: Set-Intersection, an Or of And gadgets, is the canonical NP problem in communication complexity; Tribes, an And of Ors of Ands, sits at the second level; deeper read-once AC⁰ formulas correspond to higher levels.)
Function definitions

Set-Intersection: [Figure: an Or over the n And gadgets on the coordinate pairs (x₁, y₁), …, (xₙ, yₙ)]

Tribes: [Figure: an And of Ors of And gadgets on the coordinate pairs, a read-once formula with the n pairs at the leaves]
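In code (a minimal sketch of the standard definitions; the block width w is an illustrative parameter, since the slides' figures only show the shape of the formulas):

def set_intersection(x, y):
    # Or of Ands: 1 iff the sets {i : x_i = 1} and {i : y_i = 1} intersect.
    return int(any(xi and yi for xi, yi in zip(x, y)))

def tribes(x, y, w):
    # And of Ors of Ands: split the n coordinates into blocks ("tribes") of
    # width w; output 1 iff every block contains an intersecting coordinate.
    assert len(x) == len(y) and len(x) % w == 0
    return int(all(
        any(x[i] and y[i] for i in range(b, b + w))
        for b in range(0, len(x), w)
    ))

# Example: two tribes of width 2; the first intersects, the second does not.
print(set_intersection([1, 0, 0, 1], [1, 1, 0, 0]))  # 1
print(tribes([1, 0, 0, 1], [1, 1, 0, 0], 2))         # 0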
What’s known about Tribes?

R(Tribes) = Θ(n) [Jayram–Kumar–Sivakumar STOC’03, Harsha–Jain FSTTCS’13]
(information complexity) (smooth rectangle bound)

R_{1/2+ε}(Tribes) = ??

[Göös–Watson] trick: R_{1/2+ε} ≥ Ω(ε · corruption bound)
◮ Doesn’t work for Tribes: corruption bound ≈ √n

?? Similar trick: R_{1/2+ε} ≥ Ω(ε · smooth rectangle bound) ??
◮ Not true in general
Our approach for Tribes

Information complexity:
◮ Ω(1)-advantage for Tribes [JKS’03]
◮ ε-advantage for Set-Intersection [BM’13]
◮ Combine?

4-step approach:
1. Conditioning and direct sum
2. Uniformly covering a pair of gadgets
3. Relating information and probabilities for inputs
4. Relating information and probabilities for transcripts
Preliminaries

Idea from [BM’13]: it suffices to use a 3Eq gadget in place of the And gadget.
[Figure: the And gadget replaced by the 3Eq gadget at each leaf]
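The deck never spells out the 3Eq gadget; a plausible reading (strictly an assumption here, though it matches the 3×3 coefficient matrices in steps 3 and 4) is equality on a three-letter alphabet:

\[
\mathrm{3Eq}(a, b) = 1 \;\iff\; a = b, \qquad a, b \in \{1, 2, 3\}.
\]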
1. Conditioning and direct sum

[Figure: the Tribes formula with a 3Eq gadget at every leaf, reduced by conditioning and a direct-sum argument to a single pair of 3Eq gadgets]

info cost o(ε · n) for the whole formula ⟹ info cost o(ε) for a pair of gadgets

Want to show: advantage ≤ O(info cost)

Usual info complexity proofs use Pinsker: statistical distance ≤ O(√(mutual info))

Instead: exploit symmetry properties of 3Eq to get higher-order terms to “cancel out”
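For reference, Pinsker's inequality and why it is too lossy here (the inequality is standard; the gloss on the loss is mine, not a line from the deck):

\[
\|P - Q\|_{\mathrm{tv}} \;\le\; \sqrt{\tfrac{1}{2}\,\mathrm{KL}(P \,\|\, Q)},
\]

so Pinsker only yields advantage ≤ O(√(info cost)); an info cost of o(ε) then gives advantage o(√ε), and routing the argument this way would cap the lower bound at Ω(ε² · n). Getting the linear relation advantage ≤ O(info cost) is exactly what the cancellation of the higher-order terms buys.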
2. Uniformly covering a pair of gadgets

[Figure: a pair of 3Eq gadgets whose inputs are covered uniformly, with coefficients +1, +1, −1, −1 attached to the four covering input pairs]

Lemma: linear combination of acceptance probabilities ≤ O(four contributions to info cost)

Uniform covering + Lemma: advantage ≤ O(info cost)
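Schematically (my paraphrase; the slide does not spell out the averaging), with coefficients c = (+1, +1, −1, −1) over each covering quadruple of inputs:

\[
\text{advantage} \;=\; \mathbb{E}_{\text{cover}}\Big[\sum_{i=1}^{4} c_i \cdot \Pr[\text{accept on input } i]\Big]
\;\le\; O\big(\mathbb{E}_{\text{cover}}[\text{info contributions}]\big)
\;=\; O(\text{info cost}).
\]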
3. Relating information and probabilities for inputs

“Input Lemma”: linear combination of acceptance probabilities ≤ O(four contributions to info cost)

Prove an analogous “Transcript Lemma”? (then could sum over transcripts)

[BM’13] transcript lemma uses the 2×2 coefficient pattern

  −1   1
   1  −1

Our transcript lemma uses the 3×3 coefficient pattern

  −1   2  −1
   2  −4   2
  −1   2  −1

∀ transcript: contribution to lin comb of prob ≤ O(contribution to info costs)

[Figure: decomposition of the coefficients into one accepting and two rejecting parts: the 3×3 pattern above plus 2 · ( 1 −1 / −1 1 ) plus 2 · ( 1 −1 / −1 1 ) equals the covering pattern ( 1 1 / −1 −1 )]
4. Relating information and probabilities for transcripts

[Figure: the 3×3 coefficient pattern ( −1 2 −1 / 2 −4 2 / −1 2 −1 ) applied to a product distribution whose marginals are perturbed from uniform: 1/2 + δ vs. 1/2 on Alice’s side and 1/2 + γ vs. 1/2 on Bob’s side]

lin comb of probabilities = 2 · green area = Θ(δγ)
≤ contribution to info costs = Θ(δ² + γ²)
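The final inequality is just AM–GM: the probability term is bilinear in the perturbations while the information term is quadratic, so

\[
\delta\gamma \;\le\; \tfrac{1}{2}\big(\delta^{2} + \gamma^{2}\big)
\]

closes the step with no square-root loss.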
Generalized Tribes

[Figure: the same read-once And-of-Ors-of-Ands formula, now with m tribes of width l in place of the fixed parameters]