SLIDE 1

Communication Complexity with Small Advantage

Thomas Watson

University of Memphis

SLIDES 2–13

Communication complexity

F : {0,1}^n × {0,1}^n → {0,1}

[Figure: Alice holds x and Bob holds y; they alternate sending bits back and forth until both players can output F(x, y).]

Randomized protocols:

◮ Correctness: ∀(x, y) : P[output is F(x, y)] ≥ 3/4

◮ Cost: max_{(x,y), random outcomes}(# bits communicated)

◮ Complexity: R(F) = min_{correct protocols}(cost)
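To make the three definitions concrete, here is a minimal sketch of the model in Python (a hypothetical toy harness, not from the talk): a protocol maps the players' inputs and randomness to an output and a bit count, and we check worst-case correctness and cost exactly as defined above.

```python
# Toy harness for the definitions above (illustrative names; not from the talk).
import itertools, random

n = 3

def F(x, y):  # example target: Inner-Product mod 2, one of the classic functions
    return sum(a & b for a, b in zip(x, y)) % 2

def trivial_protocol(x, y, rng):
    # Alice sends all of x; Bob evaluates F himself. Cost = n, never errs.
    # A genuinely randomized protocol would also consult rng.
    bits_sent = len(x)
    return F(x, y), bits_sent

def correctness_and_cost(protocol, trials=50):
    worst_success, worst_cost = 1.0, 0
    for x in itertools.product([0, 1], repeat=n):
        for y in itertools.product([0, 1], repeat=n):
            ok = 0
            for _ in range(trials):
                out, cost = protocol(x, y, random.Random())
                ok += (out == F(x, y))
                worst_cost = max(worst_cost, cost)   # max over inputs and coins
            worst_success = min(worst_success, ok / trials)  # ∀(x, y) bound
    return worst_success, worst_cost

print(correctness_and_cost(trivial_protocol))  # (1.0, 3): always correct, cost n
```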

SLIDES 14–17

Classic results

R(Inner-Product) = Θ(n)
R(Set-Intersection) = Θ(n)
R(Gap-Hamming) = Θ(n)

R : success probability ≥ 3/4

Small advantage: R_{1/2+ε} : success probability ≥ 1/2 + ε

SLIDES 18–19

Classic results — revisited

R_{1/2+ε}(Inner-Product) = Θ(n)   (unless ε ≤ 2^{−Ω(n)})
R_{1/2+ε}(Set-Intersection) = Θ(ε · n)
R_{1/2+ε}(Gap-Hamming) = Θ(ε² · n)
. . .

Other functions?
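One reason this regime is interesting: by standard majority-vote amplification (a folklore fact, not from the slides), R(F) ≤ O(R_{1/2+ε}(F)/ε²), so R_{1/2+ε}(F) is always at least Ω(ε² · R(F)). Gap-Hamming meets that trivial bound, while Set-Intersection's Θ(ε · n) sits strictly above it. A quick simulation of the amplification step:

```python
# Majority-vote amplification check (folklore; not a result from the talk):
# k = Θ(1/ε²) repetitions of a (1/2 + ε)-correct protocol give ≥ 3/4 correctness.
import random

def majority_success(p_single, k, trials=4000):
    wins = 0
    for _ in range(trials):
        correct_runs = sum(random.random() < p_single for _ in range(k))
        wins += correct_runs > k // 2  # strict majority (k is odd)
    return wins / trials

eps = 0.1
k = 301  # ≈ 3/ε² repetitions, odd to avoid ties
print(majority_success(0.5 + eps, k))  # ≈ 0.999, comfortably above 3/4
```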
SLIDES 20–26

Climbing the polynomial hierarchy

NP: R_{1/2+ε}(Set-Intersection) = Θ(ε · n)
[Braverman–Moitra STOC’13 (information complexity), Göös–Watson RANDOM’14 (corruption)]

Σ₂P, Π₂P: R_{1/2+ε}(Tribes) = Θ(ε · n)

Higher levels? (read-once AC⁰ formulas)

Constant advantage: well-understood [Jayram–Kopparty–Raghavendra / Leonardos–Saks CCC’09]

Small advantage: open

SLIDES 27–28

Function definitions

Set-Intersection:

[Figure: an OR over the n coordinate pairs, with an AND gadget on each pair (x_i, y_i).]

Tribes:

[Figure: an AND of ORs over the n coordinate pairs, with an AND gadget on each pair (x_i, y_i).]
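The pictures encode the usual formula definitions; here is a short Python rendering (the equal-width block structure for Tribes is this sketch's assumption):

```python
# Standard definitions of the two functions on paired inputs x, y ∈ {0,1}^n.
def set_intersection(x, y):
    # OR of n AND gadgets: 1 iff the sets {i : x_i = 1} and {i : y_i = 1} intersect
    return int(any(a & b for a, b in zip(x, y)))

def tribes(x, y, width):
    # AND of ORs of AND gadgets: every block ("tribe") of `width` coordinates
    # must contain at least one index i with x_i = y_i = 1
    pairs = list(zip(x, y))
    blocks = [pairs[i:i + width] for i in range(0, len(pairs), width)]
    return int(all(any(a & b for a, b in block) for block in blocks))

x, y = [1, 0, 1, 0, 1, 1], [0, 0, 1, 1, 0, 1]
print(set_intersection(x, y), tribes(x, y, width=3))  # 1 1
```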

SLIDES 29–35

What’s known about Tribes?

R(Tribes) = Θ(n)
[Jayram–Kumar–Sivakumar STOC’03 (information complexity), Harsha–Jain FSTTCS’13 (smooth rectangle bound)]

R_{1/2+ε}(Tribes) = ??

[Göös–Watson] trick: R_{1/2+ε} ≥ Ω(ε · corruption bound)

◮ Doesn’t work for Tribes: corruption bound ≈ √n

?? Similar trick: R_{1/2+ε} ≥ Ω(ε · smooth rectangle bound) ??

◮ Not true in general

SLIDES 36–41

Our approach for Tribes

Information complexity:

◮ Ω(1)-advantage for Tribes [JKS’03]
◮ ε-advantage for Set-Inter [BM’13]
◮ Combine?

4-step approach:

1. Conditioning and direct sum
2. Uniformly covering a pair of gadgets
3. Relating information and probabilities for inputs
4. Relating information and probabilities for transcripts
SLIDES 42–43

Preliminaries

Idea from [BM’13]: it suffices to use the 3Eq gadget instead of the And gadget

[Figure: the communication matrix of 3Eq, with 1s marking the accepting entries.]
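For concreteness, here is a natural reading of the 3Eq gadget (an assumption of this sketch: equality on a three-symbol alphabet, matching the diagonal 1s in the slide's matrix):

```python
# Hypothetical rendering of the 3Eq gadget: accept iff Alice's and Bob's
# symbols (from a 3-letter alphabet) are equal. An assumption of this sketch,
# not verbatim from the talk.
def three_eq(x, y):
    assert x in (0, 1, 2) and y in (0, 1, 2)
    return int(x == y)

for x in range(3):
    print([three_eq(x, y) for y in range(3)])
# [1, 0, 0]
# [0, 1, 0]
# [0, 0, 1]
```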

SLIDES 44–45

1. Conditioning and direct sum

info cost o(ε · n)  ⇒  info cost o(ε)

[Figure: the Tribes input viewed as one 3Eq gadget per coordinate pair; conditioning and a direct-sum argument reduce a protocol with info cost o(ε · n) on all of Tribes to one with info cost o(ε) on a single pair of 3Eq gadgets, on (x₁, y₁) and (x₂, y₂).]

Want to show: advantage ≤ O(info cost)

SLIDES 46–48

1. Conditioning and direct sum

[Figure: a pair of 3Eq gadgets on (x₁, y₁) and (x₂, y₂).]

Want to show: advantage ≤ O(info cost)

Usual info complexity proofs use Pinsker: statistical distance ≤ O(√(mutual info))

Instead: exploit symmetry properties of 3Eq to get higher-order terms to “cancel out”
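The quoted Pinsker step is the usual inequality TV(P, Q) ≤ √(KL(P‖Q)/2) (mutual information being an expected KL divergence); the square root is exactly what loses too much at small advantage, which motivates the cancellation trick. A quick numeric check on Bernoulli distributions:

```python
# Numeric check of the Pinsker-type bound quoted above:
# total variation distance ≤ sqrt(KL / 2).
from math import log, sqrt

def kl(p, q):  # KL divergence between Bernoulli(p) and Bernoulli(q), in nats
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def tv(p, q):  # total variation distance between Bernoulli(p) and Bernoulli(q)
    return abs(p - q)

for p, q in [(0.5, 0.6), (0.5, 0.9), (0.1, 0.2)]:
    assert tv(p, q) <= sqrt(kl(p, q) / 2)
    print(f"TV={tv(p, q):.3f}  sqrt(KL/2)={sqrt(kl(p, q) / 2):.3f}")
```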

SLIDES 49–52

2. Uniformly covering a pair of gadgets

Uniformly cover with coefficients 1, 1, −1, −1

[Figure: the pair of 3Eq gadgets on (x₁, y₁) and (x₂, y₂), with the four coefficient placements.]

Lemma: Linear combination of acceptance probabilities ≤ O(four contributions to info cost)

Uniform covering + Lemma: advantage ≤ O(info cost)
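Why a signed linear combination of acceptance probabilities controls advantage at all: under any input distribution μ, P[correct] − 1/2 equals Σ_z ±μ(z) · P[accept | z] plus a protocol-independent constant. The slide's uniform covering is a specific structured choice of such coefficients; the sketch below only checks the generic identity, with illustrative names that are not the paper's.

```python
# Sanity check: advantage = signed combination of acceptance probabilities
# plus a constant that does not depend on the protocol.
import itertools, random

rng = random.Random(0)
zs = list(itertools.product([0, 1], repeat=2))   # toy input space
F = {z: rng.randint(0, 1) for z in zs}           # arbitrary function
p_acc = {z: rng.random() for z in zs}            # arbitrary protocol behavior
ws = [rng.random() for _ in zs]
mu = {z: w / sum(ws) for z, w in zip(zs, ws)}    # input distribution

p_correct = sum(mu[z] * (p_acc[z] if F[z] else 1 - p_acc[z]) for z in zs)
lin_comb = sum(mu[z] * (2 * F[z] - 1) * p_acc[z] for z in zs)  # ±μ(z) weights
const = sum(mu[z] for z in zs if F[z] == 0)      # protocol-independent
assert abs(p_correct - (lin_comb + const)) < 1e-12
print(f"advantage {p_correct - 0.5:+.4f} = lin_comb {lin_comb:+.4f} + const {const:.4f} − 1/2")
```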

SLIDES 53–60

3. Relating information and probabilities for inputs

“Input Lemma” (coefficients 1, 1, −1, −1): Linear combination of acceptance probabilities ≤ O(four contributions to info cost)

Prove analogous “Transcript Lemma”? (then could sum over transcripts)

[BM’13] transcript lemma, coefficients:

    −1   1
     1  −1

Our transcript lemma, coefficients:

    −1   2  −1
     2  −4   2
    −1   2  −1

∀ transcript: contribution to lin comb of prob ≤ O(contribution to info costs)

[Figure: the accepting part, carrying the 3×3 coefficients above, plus 2 × each of two rejecting parts with coefficients 1, −1, −1, 1, sum to the uniform-covering coefficients 1, 1, −1, −1.]
SLIDES 61–63

4. Relating information and probabilities for transcripts

[Figure: the 3×3 coefficient matrix

    −1   2  −1
     2  −4   2
    −1   2  −1

with marginal weights (1/2 + δ, 1/2, 1/2) on one side and (1/2, 1/2 + γ, 1/2) on the other, for a fixed transcript.]

lin comb of probabilities = 2 · green area = Θ(δγ)
≤ contribution to info costs = Θ(δ² + γ²)
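The final comparison is an AM–GM step (the inequality itself is elementary; its role here is exactly as stated on the slide):

```latex
% AM–GM closes the argument for each transcript:
\delta\gamma \;\le\; \tfrac{1}{2}\bigl(\delta^{2} + \gamma^{2}\bigr)
\qquad\Longrightarrow\qquad
\underbrace{\Theta(\delta\gamma)}_{\text{lin.\ comb.\ of probabilities}}
\;\le\; O\bigl(\delta^{2} + \gamma^{2}\bigr)
\;=\; O(\text{contribution to info costs}).
```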

SLIDES 64–65

Generalized Tribes

[Figure: Tribes with two fan-in parameters m and ℓ: an AND of ORs of AND gadgets over the coordinate pairs (x_i, y_i).]

Open: Small-advantage complexity when ℓ is small

SLIDE 66

The end