

SLIDE 1

A Tight Lower Bound for Entropy Flattening

Yi-Hsiu Chen¹, Mika Göös¹, Salil Vadhan¹, Jiapeng Zhang²

¹Harvard University, USA   ²UC San Diego, USA

June 23, 2018

SLIDE 2

Agenda

1. Problem Definition / Model
2. Cryptographic Motivations
3. Proof Techniques

SLIDE 3

Flatness

Definition (Entropies). Let $X$ be a distribution over $\{0,1\}^n$. Define the surprise of $x$ to be $H_X(x) = \log(1/\Pr[X = x])$.

$$H_{\mathrm{sh}}(X) \stackrel{\text{def}}{=} \mathop{\mathbb{E}}_{x\sim X}[H_X(x)], \qquad H_{\min}(X) \stackrel{\text{def}}{=} \min_x H_X(x), \qquad H_{\max}(X) \stackrel{\text{def}}{=} \log|\operatorname{Supp}(X)| \;\le\; \max_x H_X(x).$$

These satisfy $H_{\min}(X) \le H_{\mathrm{sh}}(X) \le H_{\max}(X)$, and the gap can be $\Theta(n)$. A source $X$ is flat iff $H_{\mathrm{sh}}(X) = H_{\min}(X) = H_{\max}(X)$.
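As a concrete illustration of these three quantities (my own sketch, not from the talk), a minimal Python snippet that computes them for an explicitly given distribution:

```python
import math

def entropies(p):
    # p: dict mapping outcomes to probabilities (assumed to sum to 1).
    surprise = {x: math.log2(1.0 / px) for x, px in p.items() if px > 0}
    h_sh = sum(p[x] * hx for x, hx in surprise.items())   # Shannon entropy H_sh
    h_min = min(surprise.values())                         # min-entropy H_min
    h_max = math.log2(len(surprise))                       # H_max = log |Supp(X)|
    return h_sh, h_min, h_max

# A non-flat source on 2 bits: H_min < H_sh < H_max.
p = {"00": 0.7, "01": 0.1, "10": 0.1, "11": 0.1}
print(entropies(p))
```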

SLIDE 4

Entropy Flattening

A flattening algorithm A takes an input source X and produces an output source Y that is nearly flat: H_sh(Y) ≈ H_min(Y) ≈ H_max(Y).

SLIDE 5

Entropy Flattening

A flattening algorithm A takes an input source X and produces an output source Y that is nearly flat: H_sh(Y) ≈ H_min(Y) ≈ H_max(Y). The entropies of the output and input sources are monotonically related.

SLIDE 6

Entropy Flattening

A flattening algorithm A takes an input source X and produces an output source Y that is nearly flat: H_sh(Y) ≈ H_min(Y) ≈ H_max(Y). The entropies of the output and input sources are monotonically related.

[Figure: entropy scales of a low-entropy source X_L and a high-entropy source X_H before and after flattening. Flattening shrinks each output source's min/max gap so that it is small compared to the Shannon gap between Y_L and Y_H.]
SLIDE 7

Entropy Flattening

Entropy Flattening Problem. Find a flattening algorithm A such that:
• If H_sh(X) ≥ τ + 1, then H^ε_min(Y) ≥ k + ∆.
• If H_sh(X) ≤ τ − 1, then H^ε_max(Y) ≤ k − ∆.

SLIDE 8

Entropy Flattening

Entropy Flattening Problem. Find a flattening algorithm A such that:
• If H_sh(X) ≥ τ + 1, then H^ε_min(Y) ≥ k + ∆.
• If H_sh(X) ≤ τ − 1, then H^ε_max(Y) ≤ k − ∆.

Smooth entropies:
• H^ε_min(Y) ≥ k if ∃Y′ s.t. H_min(Y′) ≥ k and d_TV(Y, Y′) ≤ ε.
• H^ε_max(Y) ≤ k if ∃Y′ s.t. H_max(Y′) ≤ k and d_TV(Y, Y′) ≤ ε.
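To make the smoothing concrete, here is a small sketch (my own, not from the talk) that computes ε-smooth min- and max-entropy for an explicit distribution, under the assumption that the smoothing distribution Y′ may put the shaved-off mass on fresh outcomes from a large domain:

```python
import math

def smooth_entropies(p, eps):
    # p: dict outcome -> probability; eps: smoothing parameter in (0, 1).
    probs = sorted((px for px in p.values() if px > 0), reverse=True)

    # eps-smooth min-entropy: shave up to eps of mass off the heaviest atoms
    # (water-filling) and spread it over fresh outcomes; -log2 of the new cap.
    budget, cap = eps, probs[0]
    for i, px in enumerate(probs):
        nxt = probs[i + 1] if i + 1 < len(probs) else 0.0
        cost = (i + 1) * (px - nxt)        # mass needed to lower the cap from px to nxt
        if cost >= budget:
            cap = px - budget / (i + 1)
            break
        budget -= cost
        cap = nxt
    h_min_smooth = -math.log2(cap)

    # eps-smooth max-entropy: drop the lightest atoms while their total mass stays <= eps.
    mass, kept = 0.0, len(probs)
    for px in reversed(probs):
        if mass + px > eps:
            break
        mass += px
        kept -= 1
    h_max_smooth = math.log2(max(kept, 1))
    return h_min_smooth, h_max_smooth

p = {"00": 0.7, "01": 0.1, "10": 0.1, "11": 0.1}
print(smooth_entropies(p, eps=0.05))   # min-entropy rises to -log2(0.65); max-entropy stays log2(4)
```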

SLIDE 9

Solution: Repetition

Theorem ([HILL99, HR11]). Let X be a distribution over {0,1}^n and let Y = (X_1, ..., X_q), where the X_i are i.i.d. copies of X. Then

$$H^{\varepsilon}_{\min}(Y),\; H^{\varepsilon}_{\max}(Y) \;\in\; H_{\mathrm{sh}}(Y) \pm O\!\left(n\sqrt{q\log(1/\varepsilon)}\right) \;=\; q\cdot\left(H_{\mathrm{sh}}(X) \pm O\!\left(n\sqrt{\tfrac{\log(1/\varepsilon)}{q}}\right)\right).$$

(This is the Asymptotic Equipartition Property (AEP) from information theory.)

SLIDE 10

Solution: Repetition

Theorem ([HILL99, HR11]). Let X be a distribution over {0,1}^n and let Y = (X_1, ..., X_q), where the X_i are i.i.d. copies of X. Then

$$H^{\varepsilon}_{\min}(Y),\; H^{\varepsilon}_{\max}(Y) \;\in\; H_{\mathrm{sh}}(Y) \pm O\!\left(n\sqrt{q\log(1/\varepsilon)}\right) \;=\; q\cdot\left(H_{\mathrm{sh}}(X) \pm O\!\left(n\sqrt{\tfrac{\log(1/\varepsilon)}{q}}\right)\right).$$

(This is the Asymptotic Equipartition Property (AEP) from information theory.)

q = O(n²) is sufficient for a constant entropy gap. q = Ω(n²) is needed, due to anti-concentration results [HR11].
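A quick simulation (my own illustration, not part of the talk) of the AEP statement: the total surprise of q i.i.d. copies concentrates around q · H_sh(X), with per-copy deviations shrinking roughly like 1/√q:

```python
import math
import random

p = {"00": 0.7, "01": 0.1, "10": 0.1, "11": 0.1}           # a non-flat source X
log_surprise = {x: math.log2(1.0 / px) for x, px in p.items()}
h_sh = sum(px * log_surprise[x] for x, px in p.items())     # Shannon entropy per copy
outcomes, weights = zip(*p.items())

def total_surprise(q):
    # Total surprise of one sample of Y = (X_1, ..., X_q), X_i i.i.d. copies of X.
    draws = random.choices(outcomes, weights=weights, k=q)
    return sum(log_surprise[x] for x in draws)

for q in (1, 100, 2500):
    samples = [total_surprise(q) for _ in range(1000)]
    max_dev = max(abs(s - q * h_sh) for s in samples)
    print(f"q={q:5d}  max per-copy deviation from H_sh(X): {max_dev / q:.4f}")
# The per-copy deviation shrinks roughly like 1/sqrt(q), so Y becomes nearly flat.
```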

SLIDE 11

Query Model

The model:
• Input source: encoded by a function f : {0,1}^n → {0,1}^m and defined as f(U_n).
• Flattening algorithm: an oracle algorithm A^f : {0,1}^{n′} → {0,1}^{m′} with query access to f.
• Output source: A^f(U_{n′}).
Example: A^f(r_1, ..., r_q) = (f(r_1), ..., f(r_q)).
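For concreteness, a small sketch (mine, with a toy table-based oracle standing in for f) of the example algorithm above, which makes q non-adaptive oracle queries per output sample:

```python
import secrets

def repetition_flattener(f, n, q):
    # One sample of the output source A^f(U_{n'}): query f on q fresh uniform points.
    queries = [secrets.randbits(n) for _ in range(q)]     # r_1, ..., r_q
    return tuple(f(r) for r in queries)                   # (f(r_1), ..., f(r_q))

# Toy oracle: a random 4-bit-to-4-bit function given as a lookup table.
table = [secrets.randbits(4) for _ in range(16)]
print(repetition_flattener(lambda r: table[r], n=4, q=8))
```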

SLIDE 12

Query Model

The model:
• Input source: encoded by a function f : {0,1}^n → {0,1}^m and defined as f(U_n).
• Flattening algorithm: an oracle algorithm A^f : {0,1}^{n′} → {0,1}^{m′} with query access to f.
• Output source: A^f(U_{n′}).
Example: A^f(r_1, ..., r_q) = (f(r_1), ..., f(r_q)).

Def: Flattening Algorithm.
• H_sh(f(U_n)) ≥ τ + 1 ⇒ H^ε_min(A^f(U_{n′})) ≥ k + ∆.
• H_sh(f(U_n)) ≤ τ − 1 ⇒ H^ε_max(A^f(U_{n′})) ≤ k − ∆.

SLIDE 13

Query Model

The model:
• Input source: encoded by a function f : {0,1}^n → {0,1}^m and defined as f(U_n).
• Flattening algorithm: an oracle algorithm A^f : {0,1}^{n′} → {0,1}^{m′} with query access to f.
• Output source: A^f(U_{n′}).
Example: A^f(r_1, ..., r_q) = (f(r_1), ..., f(r_q)).

Def: Flattening Algorithm.
• H_sh(f(U_n)) ≥ τ + 1 ⇒ H^ε_min(A^f(U_{n′})) ≥ k + ∆.
• H_sh(f(U_n)) ≤ τ − 1 ⇒ H^ε_max(A^f(U_{n′})) ≤ k − ∆.

More powerful algorithms are also allowed: querying correlated positions, or even adaptively chosen ones, and performing computation on the query inputs (e.g., hashing).

SLIDE 14

Main Theorems

Theorem Flattening algorithms for n-bit oracles f require Ω(n2) oracle queries.

SLIDE 15

Main Theorems

Theorem. Flattening algorithms for n-bit oracles f require Ω(n²) oracle queries.

Def: SDU Algorithm.
• H_sh(f(U_n)) ≥ τ + 1 ⇒ d_TV(A^f(U_{n′}), U_{m′}) < ε.
• H_sh(f(U_n)) ≤ τ − 1 ⇒ |Supp(A^f(U_{n′}))| / 2^{m′} ≤ ε.

Flattening Algorithm ⇐⇒ SDU Algorithm
(reduction between two NISZK-complete problems [GSV99])

Theorem. SDU algorithms for n-bit oracles f require Ω(n²) oracle queries.

SLIDE 16

Connection to Cryptographic Constructions

Example: OWF f → PRG gf ([HILL90, Hol06, HHR06, HRV10, VZ13]):

1. Create a gap between "pseudoentropy" and (true) entropy.
2. Guess the entropy threshold τ (or other tricks).
3. Flatten entropies.
4. Extract the pseudorandomness (via universal hashing).

SLIDE 17

Connection to Cryptographic Constructions

Example: OWF f → PRG gf ([HILL90, Hol06, HHR06, HRV10, VZ13]):

1. Create a gap between "pseudoentropy" and (true) entropy.
2. Guess the entropy threshold τ (or other tricks).  (Õ(n) queries)
3. Flatten entropies.  (Õ(n²) queries)
4. Extract the pseudorandomness (via universal hashing).

Overall, the best PRG makes Õ(n³) queries to the one-way function [HRV10, VZ13]. From a regular one-way function, Step 3 is unnecessary, so Õ(n) queries suffice [HHR06].

SLIDE 18

Connection to Cryptographic Constructions

Example: OWF f → PRG gf ([HILL90, Hol06, HHR06, HRV10, VZ13]):

1. Create a gap between "pseudoentropy" and (true) entropy.
2. Guess the entropy threshold τ (or other tricks).  (Õ(n) queries)
3. Flatten entropies.  (Õ(n²) queries)
4. Extract the pseudorandomness (via universal hashing).

Overall, the best PRG makes Õ(n³) queries to the one-way function [HRV10, VZ13]. From a regular one-way function, Step 3 is unnecessary, so Õ(n) queries suffice [HHR06]. Holenstein and Sinha [HS12] proved that any black-box construction requires Ω̃(n) queries. (This comes from Step 2 and applies even to regular OWFs.)

SLIDE 19

Connection to Cryptographic Constructions

Example: OWF f → PRG gf ([HILL90, Hol06, HHR06, HRV10, VZ13]):

1. Create a gap between "pseudoentropy" and (true) entropy.
2. Guess the entropy threshold τ (or other tricks).  (Õ(n) queries)
3. Flatten entropies.  (Õ(n²) queries)
4. Extract the pseudorandomness (via universal hashing).

Overall, the best PRG makes Õ(n³) queries to the one-way function [HRV10, VZ13]. From a regular one-way function, Step 3 is unnecessary, so Õ(n) queries suffice [HHR06]. Holenstein and Sinha [HS12] proved that any black-box construction requires Ω̃(n) queries. (This comes from Step 2 and applies even to regular OWFs.)

Can we do better in the entropy flattening step?

SLIDE 20

Overview of the Proof

Def: SDU Algorithm.
• H_sh(f(U_n)) ≥ τ + 1 ⇒ d_TV(A^f(U_{n′}), U_{m′}) < ε.
• H_sh(f(U_n)) ≤ τ − 1 ⇒ |Supp(A^f(U_{n′}))| / 2^{m′} ≤ ε.

1. Construct distributions D_H and D_L:
   • If f is sampled from D_H, then H_sh(f(U_n)) ≥ τ + 1 w.h.p.
   • If f is sampled from D_L, then H_sh(f(U_n)) ≤ τ − 1 w.h.p.
2. A cannot "behave very differently" on the two distributions by making only q = o(n²) queries.

SLIDE 21

Construction of f

Partition the domain into s blocks, each with t elements (s · t = 2^n).
• Concentrated block: all t elements map to the same element.
• Scattered block: the t elements map to distinct elements.

[Figure: f : {0,1}^n → {0,1}^m with 2^{3n/4} blocks of 2^{n/4} elements each, blocks labelled Sca (scattered) or Con (concentrated).]

SLIDE 22

Construction of f

Partition the domain into s blocks, each with t elements (s · t = 2^n).
• Concentrated block: all t elements map to the same element.
• Scattered block: the t elements map to distinct elements.

[Figure: f : {0,1}^n → {0,1}^m with 2^{3n/4} blocks of 2^{n/4} elements each, blocks labelled Sca or Con.]

• If ≥ s · (1/2 + 4/n) blocks are scattered, then H_sh(f(U_n)) ≥ 7n/8 + 1.
• If ≤ s · (1/2 − 4/n) blocks are scattered, then H_sh(f(U_n)) ≤ 7n/8 − 1.
(The arithmetic behind these thresholds is spelled out below.)
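Filling in that arithmetic (my gloss, using the parameters s = 2^{3n/4} and t = 2^{n/4} from the figure): an element of a scattered block lands on an essentially unique image, so its surprise is ≈ n, while an element of a concentrated block lands on an image of probability ≈ t/2^n, so its surprise is ≈ n − n/4 = 3n/4. Hence, if a fraction α of the blocks is scattered,

$$H_{\mathrm{sh}}(f(U_n)) \;\approx\; \alpha\cdot n + (1-\alpha)\cdot\tfrac{3n}{4} \;=\; \tfrac{3n}{4} + \alpha\cdot\tfrac{n}{4}, \qquad \text{so } \alpha = \tfrac12 \pm \tfrac{4}{n} \;\Longrightarrow\; H_{\mathrm{sh}}(f(U_n)) \approx \tfrac{7n}{8} \pm 1.$$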

SLIDE 23

DH and DL

[Figure: f : {0,1}^n → {0,1}^m with 2^{3n/4} blocks of 2^{n/4} elements each, blocks labelled Sca or Con.]

1. Randomly partition {0,1}^n into 2^{3n/4} blocks.

SLIDE 24

DH and DL

[Figure: f : {0,1}^n → {0,1}^m with 2^{3n/4} blocks of 2^{n/4} elements each, blocks labelled Sca or Con.]

1. Randomly partition {0,1}^n into 2^{3n/4} blocks.
2. Decide whether each block is scattered or concentrated.
   • D_H: each block is scattered with probability 1/2 + 5/n; then w.h.p. ≥ s · (1/2 + 4/n) blocks are scattered.
   • D_L: each block is scattered with probability 1/2 − 5/n; then w.h.p. ≤ s · (1/2 − 4/n) blocks are scattered.

SLIDE 25

DH and DL

[Figure: f : {0,1}^n → {0,1}^m with 2^{3n/4} blocks of 2^{n/4} elements each, blocks labelled Sca or Con.]

1. Randomly partition {0,1}^n into 2^{3n/4} blocks.
2. Decide whether each block is scattered or concentrated.
   • D_H: each block is scattered with probability 1/2 + 5/n; then w.h.p. ≥ s · (1/2 + 4/n) blocks are scattered.
   • D_L: each block is scattered with probability 1/2 − 5/n; then w.h.p. ≤ s · (1/2 − 4/n) blocks are scattered.
3. Random mapping:
   • Randomly map each element in a scattered block (to distinct targets).
   • Map all t elements in a concentrated block to a single random target.
(A toy sampling sketch follows below.)
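Here is a toy, scaled-down sampler for the two hard distributions (my own sketch, not the paper's exact construction; in particular the distinct targets of a scattered block are only approximated by independent fresh random targets, and the 1/2 ± 5/n bias is only meaningful for larger n):

```python
import random

def sample_f(n, high):
    # Toy sampler for D_H (high=True) or D_L (high=False): s = 2^{3n/4} blocks
    # of t = 2^{n/4} elements each, scatter probability 1/2 +/- 5/n.
    N = 1 << n
    s = 1 << (3 * n // 4)
    t = N // s
    p_scatter = 0.5 + (5.0 / n if high else -5.0 / n)

    domain = list(range(N))
    random.shuffle(domain)                 # step 1: random partition into s blocks of size t
    f = [0] * N
    for b in range(s):
        block = domain[b * t:(b + 1) * t]
        if random.random() < p_scatter:    # step 2: this block is scattered
            for x in block:                # step 3a: (approximately) distinct random targets
                f[x] = random.randrange(N)
        else:                              # ... or concentrated
            y = random.randrange(N)        # step 3b: one random target for the whole block
            for x in block:
                f[x] = y
    return f

f_high = sample_f(n=16, high=True)         # a sample from (a toy version of) D_H
```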

SLIDE 26

Intuitions for the Hard Distributions

Fix an SDU algorithm A^(·). An input w is block-compatible (B.C.) for f if each block is queried at most once when evaluating A^f(w).

SLIDE 27

Intuitions for the Hard Distributions

Fix an SDU algorithm A^(·). An input w is block-compatible (B.C.) for f if each block is queried at most once when evaluating A^f(w).

Why a random partition? It makes correlated queries hard: when the domain is partitioned into many (2^{3n/4}) blocks, each input w is block-compatible w.h.p. over f.

SLIDE 28

Intuitions for the Hard Distributions

Fix an SDU algorithm A^(·). An input w is block-compatible (B.C.) for f if each block is queried at most once when evaluating A^f(w).

Why a random partition? It makes correlated queries hard: when the domain is partitioned into many (2^{3n/4}) blocks, each input w is block-compatible w.h.p. over f.

Why a random mapping? Conditioned on block-compatibility, the algorithm cannot tell scattered blocks from concentrated ones. If the algorithm knew which blocks were scattered and which were concentrated, O(n) queries would suffice!

SLIDE 29

Proof Overview

We will focus on the event {∃ B.C. w : A^f(w) = z}.

SLIDE 30

Proof Overview

We will focus on the event {∃ B.C. w : A^f(w) = z}.

By the definition of an SDU algorithm, there exist z ∈ {0,1}^{m′} (in fact most z) such that

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\ge\; 1-\varepsilon \;\ge\; \Theta(1), \qquad \Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; \varepsilon.$$

SLIDE 31

Proof Overview

We will focus on the event {∃ B.C. w : A^f(w) = z}.

By the definition of an SDU algorithm, there exist z ∈ {0,1}^{m′} (in fact most z) such that

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\ge\; 1-\varepsilon \;\ge\; \Theta(1), \qquad \Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; \varepsilon.$$

Main Technical Lemma. Suppose A^f makes q oracle queries. Then for most z ∈ {0,1}^{m′},

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; 2^{O(q/n^2)}\cdot \Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] + o(\varepsilon).$$

SLIDE 32

Proof Overview

We will focus on the event {∃ B.C. w : A^f(w) = z}.

By the definition of an SDU algorithm, there exist z ∈ {0,1}^{m′} (in fact most z) such that

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\ge\; 1-\varepsilon \;\ge\; \Theta(1), \qquad \Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; \varepsilon.$$

Main Technical Lemma. Suppose A^f makes q oracle queries. Then for most z ∈ {0,1}^{m′},

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; 2^{O(q/n^2)}\cdot \Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] + o(\varepsilon).$$

This gives q = Ω(n²) (or, more precisely, Ω(n² log(1/ε))).
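To spell out the step from the lemma to the bound (my gloss, not a slide): combining the lemma with the two probabilities above gives

$$\Theta(1) \;\le\; \Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; 2^{O(q/n^2)}\cdot\varepsilon + o(\varepsilon),$$

so 2^{O(q/n²)} ≥ Ω(1/ε), i.e. q = Ω(n² log(1/ε)); for constant ε this is q = Ω(n²).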

SLIDE 33

Primitive Intuition for Distinguishing DL and DH

We flip a coin to decide whether each block is scattered or concentrated:
• D_H: Bern(1/2 + 5/n)
• D_L: Bern(1/2 − 5/n)
How many queries are needed to distinguish the two cases with constant probability?

SLIDE 34

Primitive Intuition for Distinguishing DL and DH

We flip a coin to decide whether each block is scattered or concentrated:
• D_H: Bern(1/2 + 5/n)
• D_L: Bern(1/2 − 5/n)
How many queries are needed to distinguish the two cases with constant probability?
• 1 query: the likelihood ratio is (1/2 + 5/n) / (1/2 − 5/n) ≈ 1 + 20/n.
• q queries: (1 + 20/n)^q = 2^{O(q/n)}.

SLIDE 35

Primitive Intuition for Distinguishing DL and DH

We flip a coin to decide whether each block is scattered or concentrated:
• D_H: Bern(1/2 + 5/n)
• D_L: Bern(1/2 − 5/n)
How many queries are needed to distinguish the two cases with constant probability?
• 1 query: the likelihood ratio is (1/2 + 5/n) / (1/2 − 5/n) ≈ 1 + 20/n.
• q queries: (1 + 20/n)^q = 2^{O(q/n)}.

We can afford more: w.h.p. the fraction of scattered blocks lies in 1/2 ± 6/n.

SLIDE 36

Primitive Intuition for Distinguishing DL and DH

We flip a coin to decide whether each block is scattered or concentrated:
• D_H: Bern(1/2 + 5/n)
• D_L: Bern(1/2 − 5/n)
How many queries are needed to distinguish the two cases with constant probability?
• 1 query: the likelihood ratio is (1/2 + 5/n) / (1/2 − 5/n) ≈ 1 + 20/n.
• q queries: (1 + 20/n)^q = 2^{O(q/n)}.

We can afford more: w.h.p. the fraction of scattered blocks lies in 1/2 ± 6/n.

Conditioned on this "balance" event, the ratio is at most

$$\left(1+\tfrac{20}{n}\right)^{q(1/2+6/n)} \times \left(1-\tfrac{20}{n}\right)^{q(1/2-6/n)} \;\le\; \left(1+\tfrac{20}{n}\right)^{12q/n} \times \left(1-\tfrac{20}{n}\right)^{-12q/n} \;=\; 2^{O(q/n^2)}.$$

SLIDE 37

Primitive Intuition for Distinguishing DL and DH

We flip a coin to decide whether each block is scattered or concentrated:
• D_H: Bern(1/2 + 5/n)
• D_L: Bern(1/2 − 5/n)
How many queries are needed to distinguish the two cases with constant probability?
• 1 query: the likelihood ratio is (1/2 + 5/n) / (1/2 − 5/n) ≈ 1 + 20/n.
• q queries: (1 + 20/n)^q = 2^{O(q/n)}.

We can afford more: w.h.p. the fraction of scattered blocks lies in 1/2 ± 6/n.

Conditioned on this "balance" event, the ratio is at most

$$\left(1+\tfrac{20}{n}\right)^{q(1/2+6/n)} \times \left(1-\tfrac{20}{n}\right)^{q(1/2-6/n)} \;\le\; \left(1+\tfrac{20}{n}\right)^{12q/n} \times \left(1-\tfrac{20}{n}\right)^{-12q/n} \;=\; 2^{O(q/n^2)}.$$

Warning! To distinguish the two cases in the "NISZK" sense (instead of the BPP sense), O(n) queries are sufficient.
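A quick numerical check (my own, with illustrative values of n and q) of the two growth rates: the unconditioned log-likelihood-ratio grows like q/n, while the ratio conditioned on the balance event grows only like q/n²:

```python
import math

def log_ratios(n, q):
    a, d = 20.0 / n, 6.0 / n
    naive = q * math.log2(1 + a)                       # log2 of (1 + 20/n)^q: Theta(q/n)
    balanced = (q * (0.5 + d) * math.log2(1 + a)       # log2 of the ratio conditioned on
                + q * (0.5 - d) * math.log2(1 - a))    # the balance event: Theta(q/n^2)
    return naive, balanced

for n in (100, 1000):
    q = n * n // 10                                    # q proportional to n^2
    naive, balanced = log_ratios(n, q)
    print(f"n={n:5d} q={q:8d}  naive={naive:9.1f}  balanced={balanced:6.2f}")
# With q ~ n^2, the naive exponent keeps growing while the balanced one stays O(1).
```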

SLIDE 38

Comparison to [Lovett Zhang 17]

Entropy reversal: A must make exponentially many queries to achieve
• f(U_n) has high entropy ⇒ A^f(U_{n′}) has small support,
• f(U_n) has low entropy ⇒ A^f(U_{n′}) is close to uniform.
(They used this to rule out efficient black-box reductions between SZK and NISZK.)

SLIDE 39

Comparison to [Lovett Zhang 17]

Entropy reversal: A must make exponentially many queries to achieve
• f(U_n) has high entropy ⇒ A^f(U_{n′}) has small support,
• f(U_n) has low entropy ⇒ A^f(U_{n′}) is close to uniform.
(They used this to rule out efficient black-box reductions between SZK and NISZK.)

Lemma (in [LZ17]).

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; \Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] + \mathrm{negl}.$$

Lemma (this work).

$$\Pr_{f\sim D_H}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] \;\le\; 2^{O(q/n^2)}\cdot\Pr_{f\sim D_L}\big[\exists\,\text{B.C. } w,\; A^f(w)=z\big] + \mathrm{negl}.$$

SLIDE 40

Technical Sketch

Let {w_1, ..., w_{2^{n′}}} = {0,1}^{n′} and W_ℓ = {w_1, ..., w_ℓ}. Then

$$\Pr\big[\exists w,\; A^f(w)=z\big] \;=\; \sum_{\ell} \Pr\big[w_\ell \text{ is the first } w \text{ s.t. } A^f(w)=z\big].$$
SLIDE 41

Technical Sketch

Let {w_1, ..., w_{2^{n′}}} = {0,1}^{n′} and W_ℓ = {w_1, ..., w_ℓ}. Then

$$\begin{aligned}
\Pr\big[\exists w,\; A^f(w)=z\big] &= \sum_{\ell} \Pr\big[w_\ell \text{ is the first } w \text{ s.t. } A^f(w)=z\big] \\
&= \sum_{\ell} \Pr\big[\nexists\, w\in W_{\ell-1} \text{ s.t. } A^f(w)=z \,\big|\, A^f(w_\ell)=z\big]\cdot \Pr\big[A^f(w_\ell)=z\big] \\
&= \sum_{\ell} \Big(1 - \Pr\big[\exists w,\; \tilde A^f(w)=z \,\big|\, A^f(w_\ell)=z\big]\Big)\cdot \Pr\big[A^f(w_\ell)=z\big],
\end{aligned}$$

where $\tilde A^f(w) = A^f(w)$ if $w \in W_{\ell-1}$, and $\bot$ otherwise.

SLIDE 42

Conclusion

We proved the Ω(n2) lower bound for flattening entropy.

SLIDE 43

Conclusion

We proved the Ω(n²) lower bound for flattening entropy. Flattening entropy is an important step in constructing PRGs, UOWHFs, and bit commitments from OWFs.

SLIDE 44

Conclusion

We proved the Ω(n²) lower bound for flattening entropy. Flattening entropy is an important step in constructing PRGs, UOWHFs, and bit commitments from OWFs.

Is this step necessary? If yes ⇒ Ω̃(n²) query lower bound for OWF → PRG.

SLIDE 45

Conclusion

We proved the Ω(n²) lower bound for flattening entropy. Flattening entropy is an important step in constructing PRGs, UOWHFs, and bit commitments from OWFs.

Is this step necessary? If yes ⇒ Ω̃(n²) query lower bound for OWF → PRG.

Can this lower bound be combined with the Ω̃(n) bound of [HS12]? If yes ⇒ Ω̃(n³) query lower bound for OWF → PRG (tight!).

SLIDE 46

Conclusion

We proved the Ω(n²) lower bound for flattening entropy. Flattening entropy is an important step in constructing PRGs, UOWHFs, and bit commitments from OWFs.

Is this step necessary? If yes ⇒ Ω̃(n²) query lower bound for OWF → PRG.

Can this lower bound be combined with the Ω̃(n) bound of [HS12]? If yes ⇒ Ω̃(n³) query lower bound for OWF → PRG (tight!).

Thanks!
