A Tight Lower Bound for Entropy Flattening

Yi-Hsiu Chen¹  Mika Göös¹  Salil Vadhan¹  Jiapeng Zhang²
¹Harvard University, USA  ²UC San Diego, USA
June 23, 2018
Agenda
1. Problem Definition / Model
2. Cryptographic Motivations
3. Proof Techniques
Flatness

Definition (Entropies)
Let X be a distribution over {0, 1}^n. Define the surprise of x to be H_X(x) = log(1/ Pr[X = x]).

Hsh(X) := E_{x∼X}[H_X(x)],
Hmin(X) := min_x H_X(x),
Hmax(X) := log |Supp(X)| ≤ max_x H_X(x).

Hmin(X) ≤ Hsh(X) ≤ Hmax(X). (The gap can be Θ(n).)

A source X is flat iff Hsh(X) = Hmin(X) = Hmax(X).
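The three entropies are easy to compute for a small explicit distribution. A minimal Python sketch (my own toy example, not from the slides) illustrating the definitions and the chain Hmin ≤ Hsh ≤ Hmax:

```python
import math

def entropies(p):
    """Shannon, min, and max entropy (in bits) of a distribution
    given as a list of probabilities summing to 1."""
    supp = [pi for pi in p if pi > 0]
    h_sh = sum(pi * math.log2(1 / pi) for pi in supp)   # E[surprise]
    h_min = min(math.log2(1 / pi) for pi in supp)       # smallest surprise
    h_max = math.log2(len(supp))                        # log |Supp|
    return h_sh, h_min, h_max

# A far-from-flat source on four points: one heavy atom, three light ones.
h_sh, h_min, h_max = entropies([1/2, 1/6, 1/6, 1/6])
assert h_min <= h_sh <= h_max          # 1.0 <= ~1.79 <= 2.0

# A flat source: all three entropies coincide.
assert entropies([1/4] * 4) == (2.0, 2.0, 2.0)
```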
Entropy Flattening

A flattening algorithm A takes an input source X and produces an output source Y that is nearly flat (Hsh(Y) ≈ Hmin(Y) ≈ Hmax(Y)). Entropies of the output and input sources are monotonically related.

[Figure: flattening maps sources XL, XH, whose min/Shannon/max entropies are far apart (large Shannon gap), to sources YL, YH with a small min/max gap.]
Entropy Flattening

Entropy Flattening Problem
Find a flattening algorithm A:
If Hsh(X) ≥ τ + 1, then H^ε_min(Y) ≥ k + ∆.
If Hsh(X) ≤ τ − 1, then H^ε_max(Y) ≤ k − ∆.

Smooth Entropies
H^ε_min(Y) ≥ k if ∃Y′ s.t. Hmin(Y′) ≥ k and dTV(Y, Y′) ≤ ε.
H^ε_max(Y) ≤ k if ∃Y′ s.t. Hmax(Y′) ≤ k and dTV(Y, Y′) ≤ ε.
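For intuition, smooth entropies can be computed directly for small distributions: smoothing min-entropy caps the heaviest atoms, smoothing max-entropy drops the lightest ε mass. A sketch (function names and the grid search are my own, not the paper's):

```python
import math

def smooth_min_entropy(p, eps, grid=2000):
    """Largest k on a 0.01-grid such that capping every probability at 2^-k
    removes at most eps total mass (so H^eps_min(p) >= k)."""
    best = 0.0
    for i in range(1, grid + 1):
        k = i / 100
        excess = sum(max(pi - 2 ** (-k), 0.0) for pi in p)
        if excess > eps:
            break
        best = k
    return best

def smooth_max_entropy(p, eps):
    """log2 of the smallest support capturing at least 1 - eps mass."""
    q = sorted(p, reverse=True)
    mass, size = 0.0, 0
    while mass < 1 - eps:
        mass += q[size]
        size += 1
    return math.log2(size)

p = [0.3] + [0.1] * 7                  # Hmin ~ 1.74, Hmax = 3
assert smooth_min_entropy(p, eps=0.2) > 3.0    # smoothing lifts Hmin
assert smooth_max_entropy(p, eps=0.25) < 3.0   # smoothing lowers Hmax
```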
Solution: Repetition

Theorem ([HILL99, HR11])
X: a distribution over {0, 1}^n. Let Y = (X₁, . . . , X_q) where the X_i are i.i.d. copies of X. Then
H^ε_min(Y), H^ε_max(Y) ∈ Hsh(Y) ± O(√(q log(1/ε)) · n).

q = O(n²) is sufficient for a constant entropy gap. q = Ω(n²) is needed due to anti-concentration results [HR11].
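The √q-type deviation can be seen exactly for a one-bit source by enumerating the binomial outcome probabilities. A self-contained sketch (my own toy computation, not the paper's) of the ε-smooth min-entropy of q i.i.d. biased bits:

```python
import math

def smooth_min_entropy_of_repetition(q, eps, p1=0.25):
    """eps-smooth min-entropy (up to atom granularity) of q i.i.d. copies of
    a bit X with Pr[X=1] = p1: the eps-quantile of the total surprise."""
    s1, s0 = math.log2(1 / p1), math.log2(1 / (1 - p1))
    # Group outcomes by the number of ones j; sort by total surprise.
    items = sorted(
        (j * s1 + (q - j) * s0, math.comb(q, j) * p1 ** j * (1 - p1) ** (q - j))
        for j in range(q + 1)
    )
    mass = 0.0
    for surprise, prob in items:
        if mass + prob > eps:    # can no longer smooth this atom away
            return surprise
        mass += prob
    return items[-1][0]

h_sh = 0.25 * 2 + 0.75 * math.log2(4 / 3)   # Shannon entropy of one bit
q = 400
k = smooth_min_entropy_of_repetition(q, eps=0.01)
assert q * h_sh - 5 * math.sqrt(q) < k < q * h_sh   # deviation O(sqrt(q))
assert k > q * math.log2(4 / 3)                     # >> unsmoothed q * Hmin
```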
Query Model

The Model:
Input source: encoded by a function f : {0, 1}^n → {0, 1}^m and defined as f(U_n).
Flattening algorithm: oracle algorithm A^f : {0, 1}^{n′} → {0, 1}^{m′} with query access to f.
Output source: A^f(U_{n′}).
Example: A^f(r₁, . . . , r_q) = (f(r₁), . . . , f(r_q)).

Def: Flattening Algorithm
Hsh(f(U_n)) ≥ τ + 1 ⇒ H^ε_min(A^f(U_{n′})) ≥ k + ∆.
Hsh(f(U_n)) ≤ τ − 1 ⇒ H^ε_max(A^f(U_{n′})) ≤ k − ∆.

More powerful than repetition: queries may be correlated or even adaptive, and the algorithm may compute on the query inputs (e.g., hashing).
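The repetition algorithm fits the model directly. A tiny sketch (toy oracle of my own choosing) that enumerates the output source A^f(U_{n′}) for q = 2 and checks that the Shannon entropy is additive over the copies:

```python
import itertools
import math
from collections import Counter

def repetition_A(f, seed):
    """The example oracle algorithm: interpret the seed as q oracle
    inputs and output the q oracle answers."""
    return tuple(f(r) for r in seed)

# Toy oracle f on {0,1}^2 (elements written as integers 0..3):
# one "concentrated" pair {0, 1} and two "scattered" points.
def f(x):
    return {0: 0, 1: 0, 2: 2, 3: 3}[x]

# Output source A^f(U_{n'}): enumerate all seeds (r1, r2) and tally outputs.
outputs = Counter(repetition_A(f, seed)
                  for seed in itertools.product(range(4), repeat=2))
N = sum(outputs.values())
h_sh = sum(c / N * math.log2(N / c) for c in outputs.values())
assert h_sh == 3.0     # Hsh(A^f(U_{n'})) = q * Hsh(f(U_n)) = 2 * 1.5
```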
Main Theorems

Theorem
Flattening algorithms for n-bit oracles f require Ω(n²) oracle queries.

Def: SDU Algorithm
Hsh(f(U_n)) ≥ τ + 1 ⇒ dTV(A^f(U_{n′}), U_{m′}) < ε.
Hsh(f(U_n)) ≤ τ − 1 ⇒ |Supp(A^f(U_{n′}))| / 2^{m′} ≤ ε.

Flattening Algorithm ⇐⇒ SDU Algorithm
(Reduction between two NISZK-complete problems [GSV99])

Theorem
SDU algorithms for n-bit oracles f require Ω(n²) oracle queries.
Connection to Cryptographic Constructions

Example: OWF f → PRG g^f ([HILL90, Hol06, HHR06, HRV10, VZ13]):
1. Create a gap between “pseudoentropy” and (true) entropy.
2. Guess the entropy threshold τ (or other tricks). [Õ(n) queries]
3. Flatten entropies. [Õ(n²) queries]
4. Extract the pseudorandomness (via universal hashing).

Overall, the best PRG makes Õ(n³) queries to the one-way function [HRV10, VZ13]. From a regular one-way function, Step 3 is unnecessary, so Õ(n) queries suffice [HHR06].

Holenstein and Sinha [HS12] proved that any black-box construction requires Ω̃(n) queries. (This comes from Step 2 and applies even to regular OWFs.)

Can we do better in the entropy flattening step?
Overview of the Proof

Def: SDU Algorithm
Hsh(f(U_n)) ≥ τ + 1 ⇒ dTV(A^f(U_{n′}), U_{m′}) < ε.
Hsh(f(U_n)) ≤ τ − 1 ⇒ |Supp(A^f(U_{n′}))| / 2^{m′} ≤ ε.

1. Construct distributions DH and DL:
Sample f from DH; then Hsh(f(U_n)) ≥ τ + 1 w.h.p.
Sample f from DL; then Hsh(f(U_n)) ≤ τ − 1 w.h.p.
2. A cannot “behave very differently” on the two distributions unless it makes Ω(n²) queries.
Construction of f

Partition the domain into s blocks, each with t elements (s · t = 2^n):
Concentrated: map all elements of the block to the same element.
Scattered: map to all-distinct elements.

[Figure: f : {0, 1}^n → {0, 1}^m with 2^{3n/4} blocks of 2^{n/4} elements each, labeled Sca/Con.]

≥ s · (1/2 + 4/n) blocks scattered ⇒ Hsh(f(U_n)) ≥ 7n/8 + 1.
≤ s · (1/2 − 4/n) blocks scattered ⇒ Hsh(f(U_n)) ≤ 7n/8 − 1.
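The entropy thresholds follow from Hsh = 3n/4 + p · n/4 when a p-fraction of blocks is scattered (3n/4 bits to pick the block, n/4 more inside a scattered block). A small simulation checking this; the encoding is my own simplification (fresh disjoint targets per block instead of a fully random mapping), which does not change the entropy:

```python
import math
import random

def build_f(n, scattered_flags, rng):
    """Build f on {0,1}^n: domain split into s = 2^(3n/4) random blocks of
    t = 2^(n/4) elements (n divisible by 4). Scattered blocks map their t
    elements to t fresh distinct targets; concentrated blocks map all t
    elements to one fresh target."""
    s, t = 2 ** (3 * n // 4), 2 ** (n // 4)
    domain = list(range(2 ** n))
    rng.shuffle(domain)                    # random partition into blocks
    f, target = {}, 0
    for b in range(s):
        block = domain[b * t:(b + 1) * t]
        if scattered_flags[b]:             # injective on this block
            for x in block:
                f[x] = target
                target += 1
        else:                              # whole block to one point
            for x in block:
                f[x] = target
            target += 1
    return f

def shannon_entropy_of_image(f, n):
    counts = {}
    for x in range(2 ** n):
        counts[f[x]] = counts.get(f[x], 0) + 1
    N = 2 ** n
    return sum(c / N * math.log2(N / c) for c in counts.values())

n, rng = 8, random.Random(0)
s = 2 ** (3 * n // 4)
flags = [b % 2 == 0 for b in range(s)]     # exactly half the blocks scattered
h = shannon_entropy_of_image(build_f(n, flags, rng), n)
assert abs(h - (3 * n / 4 + 0.5 * n / 4)) < 1e-9   # Hsh = 3n/4 + p * n/4
```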
DH and DL

[Figure: f : {0, 1}^n → {0, 1}^m with 2^{3n/4} blocks of 2^{n/4} elements each.]

1. Randomly partition {0, 1}^n into 2^{3n/4} blocks.
2. Decide whether each block is scattered or concentrated:
DH: scattered with probability 1/2 + 5/n; then w.h.p. ≥ s · (1/2 + 4/n) blocks are scattered.
DL: scattered with probability 1/2 − 5/n; then w.h.p. ≤ s · (1/2 − 4/n) blocks are scattered.
3. Random mapping:
Randomly map each element in a scattered block.
Map all t elements in a concentrated block to a random target.
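The 5/n bias versus the 4/n threshold leaves a 1/n margin that Chernoff concentration covers easily. A quick sketch at an illustrative block count (the real s = 2^(3n/4) only concentrates harder; the parameters below are my own choice):

```python
import random

def scattered_fraction(s, bias, rng):
    """Flip one coin per block; return the fraction of scattered blocks."""
    return sum(rng.random() < bias for _ in range(s)) / s

n, s, rng = 40, 2 ** 20, random.Random(1)
frac_H = scattered_fraction(s, 0.5 + 5 / n, rng)
frac_L = scattered_fraction(s, 0.5 - 5 / n, rng)
# The 1/n margin is ~50 standard deviations at this s, so w.h.p. the
# fractions stay on their side of the 1/2 +- 4/n thresholds.
assert frac_H >= 0.5 + 4 / n
assert frac_L <= 0.5 - 4 / n
```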
Intuitions for the Hard Distributions

Fix an SDU algorithm A^(·). An input w is block-compatible (B.C.) for f if each block is queried at most once when evaluating A^f(w).

Why a random partition? It makes correlated queries hard: with so many (2^{3n/4}) blocks, w is block-compatible w.h.p. over f.

Why a random mapping? Conditioned on B.C., the algorithm cannot distinguish scattered from concentrated blocks. (O(n) queries would suffice if the algorithm knew which blocks are scattered.)
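The “block-compatible w.h.p.” claim is, at heart, a birthday bound. A sketch of the calculation, assuming for simplicity that the q queried points land in independent uniformly random blocks (which the random partition heuristically justifies):

```latex
\Pr_f[\, w \text{ not B.C.} \,]
\;\le\; \binom{q}{2}\cdot \frac{1}{2^{3n/4}}
\;=\; O\!\left(\frac{q^2}{2^{3n/4}}\right),
```

which for q = O(n²) is O(n⁴ · 2^{−3n/4}), negligible in n.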
Proof Overview

We will focus on the event {A^f(U_{n′}) = z} for a fixed output z ∈ {0, 1}^{m′}.

By the definition of an SDU algorithm, there exists z ∈ {0, 1}^{m′} (in fact, most z) such that
Pr_{f∼DH}[A^f(U_{n′}) = z] is large,
Pr_{f∼DL}[A^f(U_{n′}) = z] is small.

Main Technical Lemma
Suppose A^f makes q oracle queries. Then for most z ∈ {0, 1}^{m′},
Pr_{f∼DH}[A^f(U_{n′}) = z] ≤ 2^{O(q/n²)} · Pr_{f∼DL}[A^f(U_{n′}) = z],
which concludes that q = Ω(n²) (or Ω(n² log(1/ε))).
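One way to read off the query bound from the lemma, under a quantitative form of the two SDU conditions that I am assuming here (under DH the output is near-uniform, so a typical z has probability about 2^{−m′}; under DL the support misses most z):

```latex
\frac{1-\varepsilon}{2^{m'}}
\;\lesssim\; \Pr_{f\sim D_H}\!\big[A^f(U_{n'})=z\big]
\;\le\; 2^{O(q/n^2)}\cdot \Pr_{f\sim D_L}\!\big[A^f(U_{n'})=z\big]
\;\le\; 2^{O(q/n^2)}\cdot \frac{\varepsilon}{2^{m'}},
```

so 2^{O(q/n²)} ≥ (1 − ε)/ε, and hence q = Ω(n² log(1/ε)).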
Primitive Intuition for Distinguishing DL and DH

We flip a coin to decide whether each block is scattered or concentrated: DH uses Bern(1/2 + 5/n), DL uses Bern(1/2 − 5/n). How many queries are needed to distinguish the two cases with constant probability?
1 query: likelihood ratio (1/2 + 5/n)/(1/2 − 5/n) ≈ 1 + 20/n.
q queries: (1 + 20/n)^q = 2^{O(q/n)}.

We can afford more: w.h.p. the fraction of scattered blocks lies in 1/2 ± 6/n. Conditioned on this “balance” event, the ratio is at most
(1 + 20/n)^{q·(1/2+6/n)} · (1 − 20/n)^{q·(1/2−6/n)} ≤ (1 + 20/n)^{12q/n} · (1 − 20/n)^{−12q/n} = 2^{O(q/n²)}.

Warning! To distinguish the two cases in the “NISZK” sense (instead of BPP), O(n) queries are sufficient.
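The two ratios can be checked numerically. A short sketch in log-scale (using the exact per-block ratios rather than the 1 ± 20/n approximations; the parameter choices are mine):

```python
import math

def log2_ratio_unconditioned(n, q):
    """log2 of the q-query likelihood ratio without conditioning: O(q/n)."""
    r = (0.5 + 5 / n) / (0.5 - 5 / n)     # per-block ratio, ~ 1 + 20/n
    return q * math.log2(r)

def log2_ratio_balanced(n, q):
    """log2 of the ratio when the scattered fraction is pinned to
    1/2 + 6/n: the per-block factors nearly cancel, leaving O(q/n^2)."""
    r = (0.5 + 5 / n) / (0.5 - 5 / n)
    return (q * (0.5 + 6 / n)) * math.log2(r) \
         - (q * (0.5 - 6 / n)) * math.log2(r)

n, q = 100, 100 ** 2                      # q = n^2 queries
u, b = log2_ratio_unconditioned(n, q), log2_ratio_balanced(n, q)
# Conditioning on balance shrinks the exponent by a factor of n/12:
# O(q/n) bits of distinguishing advantage drop to O(q/n^2) bits.
assert u > b > 0
assert abs(u / b - n / 12) < 1e-6
```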
Comparison to [Lovett Zhang 17]

Entropy reversal: A has to make exponentially many queries such that
f(U_n) has high entropy ⇒ A^f(U_{n′}) has small support;
f(U_n) has low entropy ⇒ A^f(U_{n′}) is close to uniform.
(They ruled out efficient black-box reductions between SZK and NISZK.)

Lemma (Lemma in [LZ17])
Pr_{f∼DH}[…] ≤ … · Pr_{f∼DL}[…]

Lemma (This work)
Pr_{f∼DH}[A^f(U_{n′}) = z] ≤ 2^{O(q/n²)} · Pr_{f∼DL}[A^f(U_{n′}) = z]
Technical Sketch

Let {w₁, . . . , w_{2^{n′}}} = {0, 1}^{n′} and W_ℓ = {w₁, . . . , w_ℓ}. The two probabilities are compared by a hybrid argument over the inputs, through conditional probabilities of the form
Pr[A^f(w) = z | A^f(w_ℓ) = z]
and the truncated algorithm
A^f_ℓ(w) = A^f(w) if w ∈ W_ℓ, and ⊥ otherwise.
Conclusion

We proved an Ω(n²) query lower bound for flattening entropy. Flattening entropy is an important step in constructing PRGs, UOWHFs, and bit commitments from OWFs.

Is the step necessary? If yes ⇒ Ω̃(n²) query lower bound for OWF → PRG.

Can the lower bound be combined with the Ω̃(n) bound of [HS12]? If yes ⇒ Ω̃(n³) query lower bound for OWF → PRG (tight!).