Better Time-Space Lower Bounds for SAT and Related Problems (PowerPoint presentation)

SLIDE 1

Better Time-Space Lower Bounds for SAT and Related Problems

Ryan Williams, Carnegie Mellon University. June 12, 2005.

SLIDE 7

Introduction

Few super-linear time lower bounds are known for natural problems in NP. (Existing ones generally use a restricted computational model.) We will show time lower bounds for the SAT problem on random-access machines using n^{o(1)} space. These lower bounds carry over to MAX-SAT, Hamilton Path, Vertex Cover, etc. [Raz and Van Melkebeek]

Previous time lower bounds for SAT (assuming n^{o(1)} space):

  • ω(n) [Kannan 84]
  • Ω(n^{1+ε}) for some ε > 0 [Fortnow 97]
  • Ω(n^{√2−ε}) for all ε > 0 [Lipton and Viglas 99]
  • Ω(n^{φ−ε}) where φ = 1.618... [Fortnow and Van Melkebeek 00]

SLIDE 12

Our Main Result

√2 and φ are nice constants... The constant of our work will be (larger, but) not-so-nice.

Theorem: For all k, SAT requires n^{Υ_k} time (infinitely often) on a random-access machine using n^{o(1)} workspace, where Υ_k is the positive solution in (1, 2) to

  Υ_k^3 (Υ_k − 1) = k^{2^{−k+3}} · (3^{2^{−1}} · 4^{2^{−2}} · 5^{2^{−3}} · · · (k − 1)^{2^{−k+3}}).

Define Υ := lim_{k→∞} Υ_k. Then: an n^{Υ−ε} lower bound. (Note: the Υ stands for 'Ugly')

However, Υ ≈ √3 + 6/10000, so we'll present the result with √3.
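The defining equation can be checked numerically. The Python sketch below (an illustration, not part of the talk; the equation is my transcription of the slide above, so treat exact digits as approximate) solves for Υ_k by bisection and watches the roots approach √3 + 6/10000:

```python
import math

# Solve Upsilon_k^3 (Upsilon_k - 1) = k^(2^(-k+3)) * 3^(2^-1) * 4^(2^-2) * ... * (k-1)^(2^(-k+3))
# for Upsilon_k in (1, 2), by bisection.

def rhs(k):
    r = k ** (2.0 ** (3 - k))
    for j in range(3, k):              # terms 3^(1/2), 4^(1/4), ..., (k-1)^(2^(-k+3))
        r *= j ** (2.0 ** (2 - j))
    return r

def upsilon(k):
    target = rhs(k)
    lo, hi = 1.0, 2.0
    for _ in range(100):               # x^3 (x - 1) is increasing on (1, 2)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid ** 3 * (mid - 1) < target else (lo, mid)
    return lo

assert upsilon(4) < upsilon(6) < upsilon(40)       # the bound improves with k
# The limit is close to sqrt(3) + 6/10000 ~ 1.7326, as the slide claims.
assert abs(upsilon(40) - (math.sqrt(3) + 6 / 10000)) < 1e-3
```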

SLIDE 13

Points About The Method We Use

  • The theorem says that for any sufficiently restricted machine, there is an infinite set of SAT instances it cannot solve correctly. We will not construct such a set of instances for every machine! The proof is by contradiction: it would be absurd if such a machine could solve SAT almost everywhere.
  • Our method and the above-cited ones use artificial computational models (alternating machines) to prove lower bounds for explicit problems in a realistic model.

SLIDE 14

Outline

  • Preliminaries and Proof Strategy
  • A Speed-Up Theorem (small-space computations can be accelerated using alternation)
  • A Slow-Down Lemma (if NTIME can be efficiently simulated, then Σ_kTIME can be efficiently simulated with some slow-down)
  • Lipton and Viglas' n^{√2} Lower Bound (the starting point for our approach)
  • Our Inductive Argument (how to derive a better bound from Lipton-Viglas)
  • From n^{1.66} to n^{1.732} (a subtle argument that squeezes more from the induction)

SLIDE 15

Preliminaries: Two possibly obscure complexity classes

  • DTISP[t, s] is deterministic time t and space s, simultaneously. (Note DTISP[t, s] ≠ DTIME[t] ∩ SPACE[s] in general.) We will be looking at DTISP[n^k, n^{o(1)}] for k ≥ 1.
  • NQL := ∪_{c≥0} NTIME[n·(log n)^c] = NTIME[n · poly(log n)]. The NQL stands for "nondeterministic quasi-linear time".


SLIDE 17

Preliminaries: SAT Facts

Satisfiability (SAT) is not only NP-complete, but also:

Theorem: [Cook 85, Gurevich and Shelah 89, Tourlakis 00] SAT is NQL-complete, under reductions doable in O(n · poly(log n)) time and O(log n) space (simultaneously). Moreover, the ith bit of the reduction can be computed in O(poly(log n)) time.

Let D be a class closed under quasi-linear time, logspace reductions.

Corollary: If NTIME[n] ⊈ D, then SAT ∉ D.

If one can show NTIME[n] is not contained in some D, then one can name an explicit problem (SAT) not in D (modulo polylog factors).

SLIDE 18

Preliminaries: Some Hierarchy Theorems

For reasonable t(n) ≥ n,

  NTIME[t] ⊈ coNTIME[o(t)].

Furthermore, for integers k ≥ 1,

  Σ_kTIME[t] ⊈ Π_kTIME[o(t)].

So, there's a tight time hierarchy within levels of the polynomial hierarchy.


SLIDE 21

Proof Strategy

Show that if SAT has a sufficiently good algorithm, then one contradicts a hierarchy theorem.

Strategy of prior work:

  • 1. Show that DTISP[n^c, n^{o(1)}] can be "sped up" when simulated on an alternating machine
  • 2. Show that NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] allows those alternations to be "removed" without much "slow-down"
  • 3. Contradict a hierarchy theorem for small c

Our proof will use the Σ_k time versus Π_k time hierarchy, for all k.

SLIDE 22

Outline

  • Preliminaries
  • A Speed-Up Theorem
  • A Slow-Down Lemma
  • Lipton and Viglas' n^{√2} Lower Bound
  • Our Inductive Argument
  • From n^{1.66} to n^{1.732}


SLIDE 24

A Speed-Up Theorem (Trading Time for Alternations)

Let:

  • t(n) = n^c for rational c ≥ 1,
  • s(n) be n^{o(1)}, and
  • k ≥ 2 be an integer.

Theorem: [Fortnow and Van Melkebeek 00] [Kannan 83]

  DTISP[t, s] ⊆ Σ_kTIME[t^{1/k+o(1)}] ∩ Π_kTIME[t^{1/k+o(1)}].

That is, for any machine M running in time t and using small workspace, there is an alternating machine M′ that makes k alternations and takes roughly t^{1/k} time. Moreover, M′ can start in either an existential or a universal state.

slide-27
SLIDE 27

Proof of the speed-up theorem

Let x be input, M be the small space machine to simulate Goal: Write a clever sentence in first-order logic with k (alternating) quantifiers that is equivalent to M(x) accepting Let Cj denote configuration of M(x) after jth step: bit string encoding head positions, workspace contents, finite control By space assumption on M, |Cj| ∈ no(1)

M(x) accepts iff there is a sequence C1, C2, . . . , Ct where

  • C1 is the “initial” configuration,
  • Ct is in “accept” state
  • For all i, Ci leads to Ci+1 in one step of M on input x.

11-b


SLIDE 29

Proof of speed-up theorem: The case k = 2

M(x) accepts iff

  (∃ C_0, C_{√t}, C_{2√t}, . . . , C_t)(∀ i ∈ {0, 1, . . . , √t − 1})
  [C_{i·√t} leads to C_{(i+1)·√t} in √t steps, C_0 is initial, C_t is accepting]

Runtime on an alternating machine:

  • ∃ takes O(√t · s) = t^{1/2+o(1)} time to write down the C_j's
  • ∀ takes O(log t) time to write down i
  • [· · ·] takes O(√t · s) deterministic time to check

Two alternations, square-root speedup.
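The checkpoint idea can be played out concretely. The Python sketch below (my illustration, not from the talk) runs a made-up deterministic "machine" for t steps, has the existential player guess the √t checkpoint configurations, and has the universal player verify any single √t-step segment, exactly the structure of the sentence above:

```python
import math

# Toy deterministic "machine": a tiny configuration (two counters) and a
# step function. This stands in for a time-t, small-space computation.
def step(config):
    a, b = config
    return ((a + b) % 97, (b + 1) % 89)

def run(config, steps):
    for _ in range(steps):
        config = step(config)
    return config

t = 10**4
block = math.isqrt(t)          # sqrt(t) = 100: checkpoint spacing

# Existential player: guess checkpoints C_0, C_sqrt(t), ..., C_t.
start = (1, 0)
checkpoints = [start]
c = start
for _ in range(block):
    c = run(c, block)
    checkpoints.append(c)

# Universal player: pick any i and re-check one sqrt(t)-step segment.
def segment_ok(i):
    return run(checkpoints[i], block) == checkpoints[i + 1]

assert all(segment_ok(i) for i in range(block))   # the honest guess passes
assert run(start, t) == checkpoints[-1]           # and matches the real run

bad = list(checkpoints)
bad[block // 2] = (0, 0)       # corrupt one checkpoint: some segment must fail
assert not all(run(bad[i], block) == bad[i + 1] for i in range(block))
```

Each segment check costs only √t steps of simulation, which is the source of the square-root speedup.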


SLIDE 32

Proof of speed-up theorem: The k = 3 case, first attempt

For k = 2, we had

  (∃ C_0, C_{√t}, C_{2√t}, . . . , C_t)(∀ i ∈ {0, 1, . . . , √t − 1})
  [C_{i·√t} leads to C_{(i+1)·√t} in √t steps, C_0 initial, C_t accepting]

Observation: The [· · ·] is an O(√t) time and small-space computation, so we can speed it up by a square root as well. The straightforward way of doing this leads to:

  (∃ C_0, C_{t^{2/3}}, C_{2·t^{2/3}}, . . . , C_t)(∀ i ∈ {0, 1, . . . , t^{1/3} − 1})
  (∃ C_{i·t^{2/3}+t^{1/3}}, C_{i·t^{2/3}+2·t^{1/3}}, . . . , C_{(i+1)·t^{2/3}})(∀ j ∈ {0, 1, . . . , t^{1/3} − 1})
  [C_{i·t^{2/3}+j·t^{1/3}} leads to C_{i·t^{2/3}+(j+1)·t^{1/3}} in t^{1/3} steps, C_0 initial, C_t accepting]

SLIDE 33

[Figure: k = 2 has one "stage": the length-t computation is split into t^{1/2} blocks of t^{1/2} steps each, with checkpoint configurations C_0, . . . , C_t.]

SLIDE 34

[Figure: k = 3 has two "stages": the computation is split into t^{1/3} blocks of t^{2/3} steps, and each block is split again into t^{1/3} sub-blocks of t^{1/3} steps.]

SLIDE 37

Proof of speed-up theorem: Not quite enough for k = 3

The k = 3 sentence we gave uses four quantifier blocks, for only t^{1/3} time (we want only three quantifier blocks).

Idea: Take advantage of the computation's determinism: there is only one possible configuration at any step. The acceptance condition for M(x) can be complemented:

M(x) accepts iff

  (∀ C_0, C_{√t}, C_{2√t}, . . . , C_t rejecting)(∃ i ∈ {0, 1, . . . , √t − 1})
  [C_{i·√t} does not lead to C_{(i+1)·√t} in √t steps]

"For all configuration sequences C_1, . . . , C_t where C_t is rejecting, there exists a configuration C_i that does not lead to C_{i+1}."


SLIDE 40

The k = 3 case

We can therefore rewrite the k = 3 case, from

  (∃ C_0, C_{t^{2/3}}, C_{2·t^{2/3}}, . . . , C_t accepting)(∀ i ∈ {0, 1, . . . , t^{1/3} − 1})
  (∃ C_{i·t^{2/3}+t^{1/3}}, C_{i·t^{2/3}+2·t^{1/3}}, . . . , C_{(i+1)·t^{2/3}})(∀ j ∈ {0, 1, . . . , t^{1/3} − 1})
  [C_{i·t^{2/3}+j·t^{1/3}} leads to C_{i·t^{2/3}+(j+1)·t^{1/3}} in t^{1/3} steps]

to:

  (∃ C_0, C_{t^{2/3}}, C_{2·t^{2/3}}, . . . , C_t accepting)(∀ i ∈ {0, 1, . . . , t^{1/3} − 1})
  (∀ C_{i·t^{2/3}+t^{1/3}}, . . . , C_{(i+1)·t^{2/3}−t^{1/3}}, C′_{(i+1)·t^{2/3}} = C_{(i+1)·t^{2/3}})
  (∃ j ∈ {0, 1, . . . , t^{1/3} − 1})
  [C_{i·t^{2/3}+j·t^{1/3}} does not lead to C_{i·t^{2/3}+(j+1)·t^{1/3}} in t^{1/3} steps]

Voila! Three quantifier blocks. This is in Σ_3TIME[t^{1/3+o(1)}] (and similarly one can show it's in Π_3TIME[t^{1/3+o(1)}]).

SLIDE 43

This can be generalized...

For arbitrary k ≥ 3, one simply guesses (existentially or universally) t^{1/k} configurations at each stage.

  • Inverting quantifiers means the number of alternations only increases by one for every stage: (∃ ∀) (∀ ∃) (∃ ∀) · · ·
  • There are k − 1 stages of guessing t^{1/k} configurations, then t^{1/k} time to deterministically verify configurations.

SLIDE 44

Outline

  • Preliminaries
  • A Speed-Up Theorem
  • A Slow-Down Lemma
  • Lipton and Viglas' n^{√2} Lower Bound
  • Our Inductive Argument
  • From n^{1.66} to n^{1.732}


SLIDE 47

A Slow-Down Lemma (Trading Alternations For Time)

Main Idea: The assumption NTIME[n] ⊆ DTIME[n^c] allows one to remove alternations from a computation, with a small time increase.

Let t(n) ≥ n be a polynomial, c ≥ 1.

Lemma: If NTIME[n] ⊆ DTIME[n^c] then for all k ≥ 1,

  Σ_kTIME[t] ⊆ Σ_{k−1}TIME[t^c].

We prove the following, which will be very useful in our final proof.

Theorem: If Σ_kTIME[n] ⊆ Π_kTIME[n^c] then

  Σ_{k+1}TIME[t] ⊆ Σ_kTIME[t^c].


SLIDE 50

A Slow-Down Lemma: Proof

Assume Σ_kTIME[n] ⊆ Π_kTIME[n^c]. Let M be a Σ_{k+1} machine running in time t.

Recall that M(x) can be characterized by a first-order sentence:

  (∃ x_1, |x_1| ≤ t(|x|))(∀ x_2, |x_2| ≤ t(|x|)) · · · (Q x_{k+1}, |x_{k+1}| ≤ t(|x|))[P(x, x_1, x_2, . . . , x_{k+1})]

where P "runs" in time t(|x|).

Important Point: the input to P is of O(t(|x|)) length, so P actually runs in linear time with respect to the length of its input.


SLIDE 55

A Slow-Down Lemma: Proof

Assume Σ_kTIME[n] ⊆ Π_kTIME[n^c]. Define

  R(x, x_1) := (∀ x_2, |x_2| ≤ t(|x|)) · · · (Q x_{k+1}, |x_{k+1}| ≤ t(|x|))[P(x, x_1, x_2, . . . , x_{k+1})]

So M(x) accepts iff (∃ x_1, |x_1| ≤ t(|x|)) R(x, x_1).

  • By definition, R is recognized by a Π_k machine in time t(|x|), i.e. linear time (|x_1| ≤ t(|x|)).
  • By assumption, there is an R′ equivalent to R that starts with an ∃, has k quantifier blocks, and runs in t(|x|)^c time.

M(x) accepts iff (∃ x_1, |x_1| ≤ t(|x|)) R′(x, x_1), which is a Σ_kTIME[t^c] computation.

SLIDE 56

Outline

  • Preliminaries
  • A Speed-Up Theorem
  • A Slow-Down Lemma
  • Lipton and Viglas' n^{√2} Lower Bound
  • Our Inductive Argument
  • From n^{1.66} to n^{1.732}


SLIDE 61

Lipton and Viglas' n^{√2} Lower Bound (Rephrased)

Lemma: If NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] for some c ≥ 1, then for all polynomials t(n) ≥ n, NTIME[t] ⊆ DTISP[t^c, t^{o(1)}].

Theorem: NTIME[n] ⊈ DTISP[n^{√2−ε}, n^{o(1)}].

Proof: Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}]. (We will find a c that implies a contradiction.)

  • Σ_2TIME[n] ⊆ NTIME[n^c], by the slow-down theorem
  • NTIME[n^c] ⊆ DTISP[n^{c^2}, n^{o(1)}], by assumption and padding
  • DTISP[n^{c^2}, n^{o(1)}] ⊆ Π_2TIME[n^{c^2/2}], by the speed-up theorem, so c < √2 contradicts the hierarchy theorem
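The contradiction is pure exponent arithmetic. A small Python sketch (mine, not from the talk) follows the exponent of n through the three inclusions and recovers the √2 threshold:

```python
import math

# Trace the exponent of n through the chain on this slide, for a given c:
#   Sigma_2[n] -> NTIME[n^c] -> DTISP[n^(c^2), n^o(1)] -> Pi_2[n^(c^2 / 2)].
def final_exponent(c):
    e = c            # slow-down: Sigma_2 TIME[n] in NTIME[n^c]
    e = e * c        # assumption + padding: NTIME[n^c] in DTISP[n^(c^2)]
    return e / 2     # speed-up with k = 2: DTISP[n^(c^2)] in Pi_2 TIME[n^(c^2 / 2)]

# A contradiction with the Sigma_2 vs Pi_2 hierarchy needs a final exponent
# below 1, which happens exactly when c < sqrt(2).
assert final_exponent(1.4) < 1       # below sqrt(2): contradiction reached
assert final_exponent(1.5) > 1       # above sqrt(2): no contradiction
assert abs(final_exponent(math.sqrt(2)) - 1) < 1e-12
```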
SLIDE 62

Outline

  • Preliminaries
  • A Speed-Up Theorem
  • A Slow-Down Lemma
  • Lipton and Viglas' n^{√2} Lower Bound
  • Our Inductive Argument
  • From n^{1.66} to n^{1.732}


SLIDE 65

Viewing Lipton-Viglas as a Lemma (The Base Case for Our Induction)

We deliberately presented Lipton-Viglas's result differently from the original argument. In this way, we get

Lemma: NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] implies

  Σ_2TIME[n] ⊆ Π_2TIME[n^{c^2/2}].

Note if c < 2 then c^2/2 < c.

  • Thus, we may not necessarily have a contradiction for larger c, but we can remove one alternation from Σ_3 with only n^{c^2/2} cost
  • The slow-down theorem then implies Σ_3TIME[n] ⊆ Σ_2TIME[n^{c^2/2}]


SLIDE 71

The Start of the Induction: Σ_3

Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] and the lemma.

  • Σ_3TIME[n] ⊆ Σ_2TIME[n^{c^2/2}], by slow-down and the lemma
  • Σ_2TIME[n^{c^2/2}] ⊆ NTIME[n^{c^3/2}], by slow-down
  • NTIME[n^{c^3/2}] ⊆ DTISP[n^{c^4/2}, n^{o(1)}], by assumption and padding
  • DTISP[n^{c^4/2}, n^{o(1)}] ⊆ Π_3TIME[n^{c^4/6}], by speed-up

Observe:

  • Now c < 6^{1/4} ≈ 1.565 contradicts the time hierarchy for Σ_3 and Π_3
  • But if c ≥ 6^{1/4}, then we obtain a new "lemma": Σ_3TIME[n] ⊆ Π_3TIME[n^{c^4/6}]


SLIDE 76

Σ_4, Σ_5, . . .

Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}] and the lemmas. (Here we drop the TIME from Σ_kTIME for tidiness.)

  Σ_4[n] ⊆ Σ_3[n^{c^4/6}] ⊆ Σ_2[n^{(c^4/6)·(c^2/2)}] ⊆ NTIME[n^{(c^4/6)·(c^2/2)·c}], but

  NTIME[n^{(c^4/6)·(c^2/2)·c}] ⊆ DTISP[n^{(c^4/6)·(c^2/2)·c^2}, n^{o(1)}] ⊆ Π_4[n^{c^8/48}]

  (c < 48^{1/8} ≈ 1.622 implies a contradiction)

  Σ_5[n] ⊆ Σ_4[n^{c^8/48}] ⊆ Σ_3[n^{c^{12}/(48·6)}] ⊆ Σ_2[n^{c^{14}/(48·6·2)}], and this is in

  NTIME[n^{c^{15}/(48·12)}] ⊆ DTISP[n^{c^{16}/(48·12)}, n^{o(1)}] ⊆ Π_5[n^{c^{16}/(48·60)}]

  (c < 2880^{1/16} ≈ 1.645 implies a contradiction)

SLIDE 77

An intermediate lower bound, n^{Υ′}

Assume NTIME[n] ⊆ DTISP[n^c, n^{o(1)}].

Claim: The inductive process of the previous slide converges. The constant derived is

  Υ′ := lim_{k→∞} f(k), where f(k) := ∏_{j=1}^{k−1} (1 + 1/j)^{1/2^j}.

Note Υ′ ≈ 1.66.
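The two descriptions of the constants (the per-level thresholds 6^{1/4}, 48^{1/8}, 2880^{1/16} and the closed form f(k)) can be checked against each other numerically. The Python sketch below (an illustration, not from the talk) rebuilds the exponents from the recurrence implicit in the previous slide:

```python
# E_k(c) is the exponent with which Sigma_k[n] lands inside Pi_k[n^(E_k(c))]
# in the induction: E_2 = c^2/2, and E_(k+1) = c^2 * (E_2 * ... * E_k) / (k+1),
# matching the chains on the previous slides (c^4/6, c^8/48, c^16/2880, ...).
def exponent(k, c):
    Es = [c * c / 2]                       # E_2
    for m in range(3, k + 1):
        prod = 1.0
        for e in Es:
            prod *= e
        Es.append(c * c * prod / m)        # E_m
    return Es[-1]

def threshold(k):
    # Largest c still giving a contradiction at level k, i.e. E_k(c) = 1,
    # found by bisection (E_k is increasing in c on (1, 2)).
    lo, hi = 1.0, 2.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if exponent(k, mid) < 1 else (lo, mid)
    return lo

assert abs(threshold(3) - 6 ** 0.25) < 1e-9        # 1.565...
assert abs(threshold(4) - 48 ** 0.125) < 1e-9      # 1.622...
assert abs(threshold(5) - 2880 ** 0.0625) < 1e-9   # 1.645...

def f(k):
    # The slide's closed form: f(k) = prod over j = 1 .. k-1 of (1 + 1/j)^(1/2^j)
    prod = 1.0
    for j in range(1, k):
        prod *= (1 + 1 / j) ** (1 / 2 ** j)
    return prod

assert abs(f(3) - threshold(3)) < 1e-9             # the two descriptions agree
assert abs(f(30) - 1.6617) < 1e-3                  # Upsilon' ~ 1.66
```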

SLIDE 78

A Time-Space Tradeoff

Corollary: For every c < 1.66 there is a d > 0 such that SAT is not in DTISP[n^c, n^d].

SLIDE 79

Outline

  • Preliminaries
  • A Speed-Up Theorem
  • A Slow-Down Lemma
  • Lipton and Viglas' n^{√2} Lower Bound
  • Our Inductive Argument
  • From n^{1.66} to n^{1.732}


SLIDE 84

From n^{1.66} to n^{1.732}

DTISP[t, t^{o(1)}] ⊆ Π_kTISP[t^{1/k+o(1)}] is an unconditional result. All the other derived class inclusions in the above proof depend on the assumption that NTIME[n] ⊆ DTISP[n^c, n^{o(1)}]. We'll now show how such an assumption can get

  DTISP[n^c, n^{o(1)}] ⊆ Π_kTISP[n^{c/(k+ε)+o(1)}]

for some ε > 0. This will push the lower bound higher.

Lemma: Let c ≤ 2. Define d_1 := 2, d_k := 1 + d_{k−1}/c. If NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}], then for all k, DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2TIME[n^{1+o(1)}].

For c < 2, {d_k} is increasing: for each k, a bit more of DTISP[n^{O(1)}, n^{o(1)}] is shown to be contained in Π_2TIME[n^{1+o(1)}].
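The behavior of {d_k} is easy to check numerically. A small Python sketch (mine, not from the talk) iterates the recurrence and confirms convergence to the fixed point c/(c − 1), which is where Corollary 1's exponent comes from:

```python
# Iterate the lemma's recurrence d_1 = 2, d_k = 1 + d_(k-1)/c. For c < 2 the
# sequence increases toward the fixed point d = 1 + d/c, i.e. d = c/(c - 1).
def d_sequence(c, k):
    d = 2.0
    seq = [d]
    for _ in range(k - 1):
        d = 1 + d / c
        seq.append(d)
    return seq

c = 1.7
seq = d_sequence(c, 40)
assert all(a < b for a, b in zip(seq, seq[1:]))    # strictly increasing
assert abs(seq[-1] - c / (c - 1)) < 1e-6           # converges to c/(c-1) ~ 2.43
```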


SLIDE 87

Proof of Lemma

Lemma: Let c < 2. Define d_1 := 2, d_k := 1 + d_{k−1}/c. If NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}], then for all k ∈ N, DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2TIME[n^{1+o(1)}].

Induction on k. The k = 1 case is trivial (speed-up theorem). Suppose NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}] and

  DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2TIME[n^{1+o(1)}].

Want: DTISP[n^{1+d_k/c}, n^{o(1)}] ⊆ Π_2TIME[n^{1+o(1)}].

By padding, the two assumptions imply

  NTIME[n^{d_k/c}] ⊆ DTISP[n^{d_k}, n^{o(1)}] ⊆ Π_2TIME[n^{1+o(1)}]. (∗)


slide-93
SLIDE 93

Goal: DTISP[n^{1+d_k/c}, n^{o(1)}] ⊆ Π₂TIME[n^{1+o(1)}]

Consider a Π₂ simulation of DTISP[n^{1+d_k/c}, n^{o(1)}] with only O(n) bits (n^{1−o(1)} configurations) in the universal quantifier:

(∀ configurations C_1, …, C_{n^{1−o(1)}} of M on x s.t. C_{n^{1−o(1)}} is rejecting)
(∃ i ∈ {1, …, n^{1−o(1)} − 1}) [C_i does not lead to C_{i+1} in n^{d_k/c+o(1)} time]

The green part is an NTIME computation with input of length O(n), taking n^{d_k/c+o(1)} time. By (∗), the green part can be replaced with a Π₂TIME[n^{1+o(1)}] computation, i.e.

(∀ configurations C_1, …, C_{n^{1−o(1)}} of M on x s.t. C_{n^{1−o(1)}} is rejecting)
(∀ y, |y| = c′|x|^{1+o(1)}) (∃ z, |z| = c′|x|^{1+o(1)}) [R(C_1, …, C_{n^{1−o(1)}}, x, y, z)],

for some deterministic linear-time relation R and constant c′ > 0. Therefore, DTISP[n^{1+d_k/c}, n^{o(1)}] = DTISP[n^{d_{k+1}}, n^{o(1)}] ⊆ Π₂TIME[n^{1+o(1)}].
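The checkpoint trick behind this Π₂ simulation can be made concrete on a toy machine. The sketch below is purely illustrative and not from the paper: the configuration space, `step` function, and parameters are all invented. It brute-forces the Π₂ predicate (every purported rejecting checkpoint sequence must break between some consecutive pair) and checks that it agrees with direct simulation.

```python
# Toy illustration of the checkpoint idea behind the Pi_2 simulation:
# a deterministic machine running for t steps is simulated by universally
# guessing a short sequence of checkpoint configurations and existentially
# pointing at a consecutive pair that the machine does not connect.
# Machine, step function, and parameters are invented for illustration.
from itertools import product

CONFIGS = range(6)           # tiny toy configuration space

def step(c):                 # deterministic transition function
    return (c + 1) % 6

def run(start, t):           # direct simulation: t steps of the machine
    c = start
    for _ in range(t):
        c = step(c)
    return c

def accepts_direct(start, t, accepting):
    return run(start, t) in accepting

def accepts_pi2(start, t, m, accepting):
    """M accepts iff EVERY rejecting checkpoint sequence
    (C_1 = start, C_m rejecting, consecutive checkpoints b steps apart)
    breaks somewhere: there EXISTS i with run(C_i, b) != C_{i+1}."""
    assert t % (m - 1) == 0
    b = t // (m - 1)         # steps between consecutive checkpoints
    for mids in product(CONFIGS, repeat=m - 2):
        for last in CONFIGS:
            if last in accepting:
                continue     # universal player only proposes rejecting traces
            seq = (start,) + mids + (last,)
            if all(run(seq[i], b) == seq[i + 1] for i in range(m - 1)):
                return False  # an unbroken rejecting trace exists: M rejects
    return True

# The Pi_2 predicate agrees with direct simulation on both kinds of instance.
assert accepts_direct(0, 9, {3}) == accepts_pi2(0, 9, 4, {3}) == True
assert accepts_direct(0, 9, {4}) == accepts_pi2(0, 9, 4, {4}) == False
```

In the real proof the universal quantifier writes down n^{1−o(1)} configurations of n^{o(1)} bits each (O(n) bits total), and the consecutive-pair check is the deterministic n^{d_k/c+o(1)}-time computation that (∗) then compresses.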
slide-95
SLIDE 95

New Lemma Gives Better Bound

Corollary 1: Let c ∈ (1, 2). If NTIME[n^{2/c}] ⊆ DTISP[n^2, n^{o(1)}], then for all ε > 0 such that c/(c−1) − ε ≥ 1,

DTISP[n^{c/(c−1)−ε}, n^{o(1)}] ⊆ Π₂TIME[n^{1+o(1)}].

  • Proof: Recall d_1 = 2, d_k = 1 + d_{k−1}/c. The sequence {d_k} is monotone non-decreasing for c < 2 and converges to the fixed point d_∞ = 1 + d_∞/c, i.e. d_∞ = c/(c−1). (Note c = 2 implies d_∞ = 2.) It follows that for every ε > 0 there is a K such that d_K ≥ c/(c−1) − ε.
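The convergence claim in the proof is easy to check numerically. A minimal sketch, with arbitrary sample values of c:

```python
# Numeric check for the proof of Corollary 1: the sequence d_1 = 2,
# d_k = 1 + d_{k-1}/c is monotone non-decreasing for c < 2 and converges
# to the fixed point d_inf = c/(c-1). Sample values of c are arbitrary.
def d_sequence(c, k_max):
    ds = [2.0]                        # d_1 = 2
    for _ in range(k_max - 1):
        ds.append(1 + ds[-1] / c)     # d_k = 1 + d_{k-1}/c
    return ds

for c in (1.2, 1.6, 1.8, 1.99):
    ds = d_sequence(c, 200)
    assert all(a <= b for a, b in zip(ds, ds[1:]))  # monotone non-decreasing
    assert abs(ds[-1] - c / (c - 1)) < 1e-6         # converges to c/(c-1)
```

The iteration contracts the distance to the fixed point by a factor 1/c per step, which is why convergence is fast for every c > 1.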

slide-99
SLIDE 99

Now: Apply the inductive method from the n^{1.66} lower bound; the "base case" now resembles Fortnow–Van Melkebeek's n^φ lower bound. If NTIME[n] ⊆ DTISP[n^c, n^{o(1)}], Corollary 1 implies

Σ₂TIME[n] ⊆ DTISP[n^{c²}, n^{o(1)}] ⊆ DTISP[(n^{c²·(c−1)/c})^{c/(c−1)+o(1)}, n^{o(1)}] ⊆ Π₂TIME[n^{c·(c−1)+o(1)}].

(Note φ(φ − 1) = 1: at c = φ the exponent is 1.)

Inducting as before, we get

Σ₃[n] ⊆ Σ₂[n^{c·(c−1)}] ⊆ DTISP[n^{c³·(c−1)}, n^{o(1)}] ⊆ Π₃[n^{c³·(c−1)/3}], then

Σ₄[n] ⊆ Σ₃[n^{c³·(c−1)/3}] ⊆ Σ₂[n^{c⁴·(c−1)²/3}] ⊆ DTISP[n^{c⁶·(c−1)²/3}, n^{o(1)}] ⊆ Π₄[n^{c⁶·(c−1)²/12}], etc.
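The exponent bookkeeping in the chain above can be multiplied out mechanically. A quick numeric sketch (the sample value of c is arbitrary):

```python
# Sanity check of the exponents displayed in the chain, at a sample c.
# Each line of the chain multiplies or divides exponents as on the slide;
# we verify the products against the closed expressions shown.
c = 1.7                                 # arbitrary sample with 1 < c < 2

e2 = (c**2) * (c - 1) / c               # Pi_2 exponent: c^2 * (c-1)/c
assert abs(e2 - c * (c - 1)) < 1e-12    # ... which simplifies to c(c-1)

e3 = e2 * c**2 / 3                      # Sigma_3 chain: * c^2, then /3
assert abs(e3 - c**3 * (c - 1) / 3) < 1e-12

e4 = e3 * (c * (c - 1)) * c**2 / 4      # Sigma_4 chain: * c(c-1), * c^2, /4
assert abs(e4 - c**6 * (c - 1)**2 / 12) < 1e-12

# At c = phi (the golden ratio), the base-case exponent c(c-1) equals 1,
# recovering the Fortnow-Van Melkebeek n^phi base case.
phi = (1 + 5**0.5) / 2
assert abs(phi * (phi - 1) - 1) < 1e-12
```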

slide-100
SLIDE 100

Claim: The exponent e_k derived for ΣₖTIME[n] ⊆ ΠₖTIME[n^{e_k}] is

e_k = c^{3·2^{k−3}}·(c−1)^{2^{k−3}} / (k·(3^{2^{k−4}}·4^{2^{k−5}}·5^{2^{k−6}} ··· (k−1))).
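The closed form can be sanity-checked numerically against the chain on the previous slide. In our reading of that chain (this recurrence is a reconstruction, not stated explicitly on the slides), for k ≥ 3 each round squares the previous exponent and multiplies by k/(k+1): e_3 = c³(c−1)/3 and e_{k+1} = e_k²·k/(k+1).

```python
# Check the claimed closed form for e_k against the reconstructed
# recurrence e_3 = c^3(c-1)/3, e_{k+1} = e_k^2 * k/(k+1) (our reading
# of the Sigma_k chain; assumed, not stated on the slides).
def e_closed(k, c):
    num = c ** (3 * 2 ** (k - 3)) * (c - 1) ** (2 ** (k - 3))
    den = k
    for j in range(3, k):               # 3^{2^{k-4}} * 4^{2^{k-5}} * ... * (k-1)^{2^0}
        den *= j ** (2 ** (k - 1 - j))
    return num / den

c = 1.7                                 # arbitrary sample with 1 < c < 2
e = c**3 * (c - 1) / 3                  # e_3, the base of the induction
for k in range(3, 11):
    assert abs(e - e_closed(k, c)) / e_closed(k, c) < 1e-9
    e = e * e * k / (k + 1)             # e_{k+1} = e_k^2 * k/(k+1)
```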


slide-105
SLIDE 105

Finishing up

Simplifying,

e_k = c^{3·2^{k−3}}·(c−1)^{2^{k−3}} / (k·(3^{2^{k−4}}·4^{2^{k−5}}·5^{2^{k−6}} ··· (k−1)))
    = ( c³·(c−1) / (k^{2^{−k+3}}·(3^{2^{−1}}·4^{2^{−2}}·5^{2^{−3}} ··· (k−1)^{2^{−k+3}})) )^{2^{k−3}},

thus

e_k < 1 ⟺ c³·(c−1) / (k^{2^{−k+3}}·(3^{2^{−1}}·4^{2^{−2}}·5^{2^{−3}} ··· (k−1)^{2^{−k+3}})) < 1

  • Define f(k) = k^{2^{−k+3}}·(3^{2^{−1}}·4^{2^{−2}}·5^{2^{−3}} ··· (k−1)^{2^{−k+3}})
  • f(k) → 3.81213··· as k → ∞
  • The above task reduces to finding the positive root of c³·(c−1) = 3.81213, giving c ≈ 1.7327 > √3 + 6/10000; so c = √3 + 6/10000 yields a contradiction.
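Both numeric claims here are easy to reproduce. A minimal sketch, evaluating f(k) for large k and bisecting for the positive root (constants as on the slide):

```python
# Reproduce the two numeric claims: f(k) tends to about 3.81213, and the
# positive root of c^3 * (c-1) = 3.81213 is about 1.7327, which exceeds
# sqrt(3) + 6/10000.
def f(k):
    # f(k) = k^{2^{-k+3}} * 3^{2^{-1}} * 4^{2^{-2}} * ... * (k-1)^{2^{-k+3}}
    val = k ** (2.0 ** (3 - k))
    for j in range(3, k):
        val *= j ** (2.0 ** (2 - j))
    return val

assert abs(f(60) - 3.81213) < 1e-4      # the limit 3.81213...

def root(target, lo=1.0, hi=2.0):
    # bisection for the root of c^3 * (c-1) = target on (1, 2);
    # the left side is increasing there, so bisection applies
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid**3 * (mid - 1) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c_star = root(3.81213)
assert abs(c_star - 1.7327) < 1e-3
assert c_star > 3**0.5 + 6 / 10000      # so c = sqrt(3) + 6/10000 contradicts
```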

slide-106
SLIDE 106

The above inductive method can be applied to improve several existing lower bound arguments:

  • Time lower bounds for SAT on off-line one-tape machines
  • Time-space tradeoffs for nondeterminism/co-nondeterminism in the RAM model
  • Etc. See the paper!