Meta-Complexity Theorems for Bottom-up Logic Programs

Harald Ganzinger, Max-Planck-Institut für Informatik
David McAllester, AT&T Bell Labs Research

Introduction

  • logic programming of efficient algorithms
  • complexity analysis through general meta-complexity theorems
  • guaranteed execution time
  • logical aspects of fundamental algorithmic paradigms (dynamic programming, union-find, congruence closure, priority queues)
  • application to program analysis: type inference system = algorithm
  • recent papers: McAllester [SAS99], Ganzinger/McAllester [IJCAR01]
  • related work: efficient fixpoint iteration by Nielson/Seidl [2001]


Contents

1st meta-complexity theorem
  Language: bottom-up logic programs
  Algorithmic ingredients: dynamic programming, indexing
  Examples: (interprocedural) reachability

2nd meta-complexity theorem
  Language: logic programs with deletion and priorities
  Logical basis: saturation up to redundancy
  Examples: union-find, congruence closure, Henglein’s subtype analysis

3rd meta-complexity theorem
  Language: logic programs with deletion and instance priorities
  Algorithmic ingredients: priority queues
  Examples: shortest paths, minimal spanning trees


Paradigm

input → pre-processor → database of facts D → inference system R → closure R∗(D) → post-processor → output

This talk covers the middle stage: database, inference system, closure. (Pre-/post-processing: cf. Paige, Yang 1997.)

Reachability in Graphs

Database: D = {e(u, v) | (u, v) ∈ E} ∪ {s(u) | u a source node}

Inference system (clause notation):
  s(u) ⊃ r(u)
  r(u), e(u, v) ⊃ r(v)

Closure: R∗(D) = D ∪ {r(u) | u reachable from a source}


Example

Graph: nodes 1, 2, 3, 4 with edges (1,3), (1,4), (2,3), (3,4), (4,3) and source 1.

Database: s(1), e(1, 3), e(1, 4), e(2, 3), e(3, 4), e(4, 3)

The rules successively add r(1), r(3), r(4) ⇒ saturated.
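The saturation above can be reproduced with a deliberately naive bottom-up evaluator — a sketch, not the paper's implementation; facts are encoded as tuples, and the encoding is ours:

```python
# Naive bottom-up evaluation of  s(u) => r(u)  and  r(u), e(u,v) => r(v),
# run on the example database from the slide.
def reachability_closure(db):
    """Saturate db under the two reachability rules."""
    facts = set(db)
    changed = True
    while changed:
        changed = False
        new = set()
        for f in facts:
            if f[0] == 's':                      # s(u) => r(u)
                new.add(('r', f[1]))
            if f[0] == 'r':                      # r(u), e(u,v) => r(v)
                for g in facts:
                    if g[0] == 'e' and g[1] == f[1]:
                        new.add(('r', g[2]))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

D = {('s', 1), ('e', 1, 3), ('e', 1, 4), ('e', 2, 3), ('e', 3, 4), ('e', 4, 3)}
closure = reachability_closure(D)
reached = sorted(v for (p, *args) in closure if p == 'r' for v in args)
print(reached)   # [1, 3, 4] — matches the saturated database on the slide
```

This loop rescans the whole database on every pass; the evaluator behind the meta-complexity theorem avoids that by indexing, as the proof slides below describe.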

First Meta-Complexity Theorem

Bottom-up computation: match prefixes of antecedents against the database and fire conclusions.

Prefix firings:
  πR(D) = |{(rσ, i) | r = A1 ∧ … ∧ Ai ∧ … ∧ An ⊃ A0 ∈ R, Ajσ ∈ D for 1 ≤ j ≤ i}|

Theorem [McAllester 1999]: Let R be an inference system such that R∗(D) is finite. Then R∗(D) can be computed in time O(||D|| + πR(R∗(D))).

Corollary [Dowling, Gallier 1984]: If R is ground, R∗(D) can be computed in time O(||D|| + ||R||).

Extension: constraints for which each solution can be computed in time O(1).
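To make the bound concrete, the following sketch counts πR on the saturated reachability database from the example; the encoding and the helper name are ours, not the paper's:

```python
# Count prefix firings of the reachability program on a saturated database:
# one firing per matched prefix (r_sigma, i), each counted once.
def prefix_firings(facts):
    r_nodes = {a for (p, *args) in facts if p == 'r' for a in args}
    s_count = sum(1 for f in facts if f[0] == 's')        # s(u) => r(u), i = 1
    r_count = len(r_nodes)                                # r(u), e(u,v) => r(v), i = 1
    re_count = sum(1 for f in facts
                   if f[0] == 'e' and f[1] in r_nodes)    # same rule, i = 2
    return s_count + r_count + re_count

closure = {('s', 1), ('e', 1, 3), ('e', 1, 4), ('e', 2, 3), ('e', 3, 4),
           ('e', 4, 3), ('r', 1), ('r', 3), ('r', 4)}
print(prefix_firings(closure))   # 1 + 3 + 4 = 8
```

Each term is bounded by |V| or |E|, which is exactly why the next slide can conclude linear time.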

Reachability in Graphs

Prefix counts:
  s(u) ⊃ r(u)            — s(u): O(|V|)
  r(u), e(u, v) ⊃ r(v)   — r(u): O(|V|), extended by e(u, v): + O(|E|)

Theorem: Reachability can be decided in linear time.

Interprocedural Reachability: Database

Program:

 1 procedure main
 2 begin
 3   declare x: int
 4   read(x)
 5   call p(x)
 6 end
 7 procedure p(a:int)
 8 begin
 9   if a>0 then
10     read(g)
11     a:=a-g
12     call p(a)
13     print(a)
14   fi
15 end

Facts:

proc(main,2,6)  next(main,2,5)  call(main,p,5,6)
proc(p,8,15)  next(p,8,12)  call(p,p,12,13)  next(p,13,15)  next(p,8,15)

Interprocedural Reachability: Rules

Read “P ⇒ L” as “in procedure P label L can be reached”.

  proc(P, BP, EP) ⊃ P ⇒ BP
      — proc(P, BP, EP): O(n)
  next(Q, L, L′), Q ⇒ L ⊃ Q ⇒ L′
      — next(Q, L, L′): O(n), Q ⇒ L: ∗ O(1)
  call(Q, P, Lc, Lr), proc(P, BP, EP), P ⇒ EP, Q ⇒ Lc ⊃ Q ⇒ Lr
      — call(Q, P, Lc, Lr): O(n), each further antecedent: ∗ O(1)

Theorem: IPR∗(D) can be computed in time O(n), with n = ||D||.

Proof of the Meta-Complexity Theorem I

Assumption: all terms in fully shared form.

Matching: in O(1) (for atoms in rules against atoms in D).

Unary rules A ⊃ B: matching A against each atom in R(D), plus construction of B, costs total time O(|R(D)|).

Note: programs are not cons-free.

Problem: avoiding O(|R(D)|^k) for rules of length k.

Proof of the Meta-Complexity Theorem II

Data structure for rules ρ of the form p(X, Y) ∧ q(Y, Z) ⊃ r(X, Y, Z): an index ρ[Y]. For each value t of Y:

  p-list of ρ[t]: p(a,t), p(b,t), p(c,t), p(d,t), p(e,t)
  q-list of ρ[t]: q(t,u), q(t,v), q(t,w), q(t,s)

Upon adding a fact p(e, t), fire all r(e, t, z), for z on the q-list of ρ[t].

The inference system can be transformed (maintaining π) so that it contains only unary rules and binary rules of the form ρ.
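The two-list index can be sketched directly — a minimal illustration of the data structure, with our own fact encoding:

```python
# Index rho[Y] for a binary rule p(X,Y) ∧ q(Y,Z) ⊃ r(X,Y,Z):
# facts are grouped by the shared variable Y, and each new fact
# joins only against the opposite list, so every (p,q) pair is
# touched exactly once — one unit of work per prefix firing.
from collections import defaultdict

p_list = defaultdict(list)   # rho[t] -> all x with p(x, t)
q_list = defaultdict(list)   # rho[t] -> all z with q(t, z)
derived = []

def add_p(x, t):
    p_list[t].append(x)
    for z in q_list[t]:               # fire r(x, t, z) for every q(t, z)
        derived.append(('r', x, t, z))

def add_q(t, z):
    q_list[t].append(z)
    for x in p_list[t]:               # fire r(x, t, z) for every p(x, t)
        derived.append(('r', x, t, z))

add_q('t', 'u')
add_q('t', 'v')
add_p('e', 't')        # joins with both q(t,u) and q(t,v)
print(derived)         # [('r', 'e', 't', 'u'), ('r', 'e', 't', 'v')]
```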

Remarks

  • memory consumption is often much smaller
  • if R∗(D) is infinite, consider R∗(D) ∩ atoms(subterms(D)) ⇒ concept of local inference systems (Givan, McAllester 1993)
  • in the presence of transitivity laws, complexity is in Ω(n³)
II. Redundancy, Deletion, and Priorities

Removal of Redundant Information

  • redundant information causes inefficiency:
    D = {…, dist(x) ≤ d, dist(x) ≤ d′, d′ < d, …} ⇒ delete dist(x) ≤ d
  • notation: antecedents to be deleted in brackets [ … ]:
    …, [A], …, A′, …, [A″], … ⊃ B
  • in the presence of deletion, computations are nondeterministic:
    P ⊃ Q,  [Q] ⊃ S,  [Q] ⊃ W ⇒ either S or W can be derived, but not both
  • the non-determinism is don’t-care and/or restricted by priorities

Logic Programs with Priorities and Deletion

  • rules can have antecedents to be deleted after firing
  • priorities are assigned to rule schemes
  • computation states S contain positive and negative (deleted) atoms
  • A is visible in S if A ∈ S and ¬A ∉ S (deletions are permanent)
  • Γ ⊃ B is applicable in S if
    – each atom in Γ is visible in S, and
    – rule application changes S (by adding B or some ¬A)
  • S is visible to a rule if no higher-priority rule is applicable in S
  • computations are maximal sequences of applications of visible rules
  • the final state of a computation starting with D is called an (R-)saturation of D


Second Meta-Complexity Theorem

Let C = S0, S1, …, ST be a computation. A prefix firing in C is a pair (rσ, i) such that for some 0 ≤ t < T:
  – r = A1 ∧ … ∧ Ai ∧ … ∧ An ⊃ A0 ∈ R
  – St is visible to r
  – Ajσ is visible in St, for 1 ≤ j ≤ i

Prefix count: πR(D) = max{|p.f.(C)| | C a computation from D}

Theorem [Ganzinger/McAllester 2001]: Let R be an inference system such that R(D) is finite. Then some R(D) can be computed in time O(||D|| + πR(D)).

Proof: as before, but also using constant-length priority queues.

Note: again, prefix firings count only once; priorities are for free.


Union-Find

  (Refl)   find(x) ⊃ x ⇒! x
  (N)      x ⇒! y, y ⇒ z ⊃ x ⇒! z
  (Comm)   x ⇒ y, x ⇒ z ⊃ union(y, z)
  (Init)   union(x, y) ⊃ find(x), find(y)
  (Orient) union(x, y), x ⇒! z1, y ⇒! z2 ⊃ z1 ⇒ z2

We are interested in x ≐ y, defined as ∃z(x ⇒! z ∧ y ⇒! z).

Union-Find

  (Refl)   find(x) ⊃ x ⇒! x
  (N)      x ⇒! y [O(n²)], y ⇒ z [∗ O(n)] ⊃ x ⇒! z
  (Comm)   x ⇒ y [O(n²)], x ⇒ z [∗ O(n)] ⊃ union(y, z)
  (Init)   union(x, y) ⊃ find(x), find(y)
  (Orient) union(x, y), x ⇒! z1, y ⇒! z2 ⊃ z1 ⇒ z2

Naive Knuth/Bendix completion

Union-Find

  (Refl)   find(x) ⊃ x ⇒! x
  (N)      [x ⇒! y] [O(n²)], y ⇒ z [∗ O(1)] ⊃ x ⇒! z
  (Comm)   x ⇒ y [O(n)], x ⇒ z [∗ O(1)] ⊃ union(y, z)
  (Init)   union(x, y) ⊃ find(x), find(y)
  (Triv)   [union(x, y)], x ⇒! z, y ⇒! z ⊃ ⊤
  (Orient) [union(x, y)], x ⇒! z1, y ⇒! z2 ⊃ z1 ⇒ z2

Naive Knuth/Bendix completion + normalization (eager path compression)

Union-Find

  (Refl)   find(x) ⊃ x ⇒! x, weight(x, 1)
  (N)      [x ⇒! y] [O(n log n)], y ⇒ z [∗ O(1)] ⊃ x ⇒! z
  (Comm)   x ⇒ y, x ⇒ z ⊃ union(y, z)
  (Init)   union(x, y) ⊃ find(x), find(y)
  (Triv)   [union(x, y)], x ⇒! z, y ⇒! z ⊃ ⊤
  (Orient) [union(x, y)], x ⇒! z1, weight(z1, w1), y ⇒! z2, [weight(z2, w2)], w1 ≤ w2
           ⊃ z1 ⇒ z2, weight(z2, w1 + w2)
           (+ symmetric variant of (Orient))

Naive Knuth/Bendix completion + normalization (eager path compression) + logarithmic merge
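For comparison, the conventional imperative counterpart of these rules is union-by-weight: x ⇒! z corresponds to find(x) = z, and the w1 ≤ w2 side condition is the weight comparison that keeps trees O(log n) high. This is a sketch of the classical structure, not the rule-based evaluator itself (it also omits the slides' eager path compression):

```python
# Weighted union-find: lighter roots are oriented under heavier ones,
# mirroring the (Orient) rule with the w1 <= w2 side condition.
parent, weight = {}, {}

def find(x):
    parent.setdefault(x, x)
    weight.setdefault(x, 1)
    while parent[x] != x:
        x = parent[x]
    return x

def union(x, y):
    rx, ry = find(x), find(y)
    if rx == ry:                      # (Triv): already in the same class
        return
    if weight[rx] > weight[ry]:       # orient lighter root under heavier
        rx, ry = ry, rx
    parent[rx] = ry                   # (Orient): z1 => z2
    weight[ry] += weight[rx]          # weight(z2, w1 + w2)

union(1, 2); union(3, 4); union(2, 3)
print(find(1) == find(4))   # True — x ≐ y holds for 1 and 4
```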

Congruence Closure for Ground Horn Clauses

Extension to congruence closure: 7 more rules, guaranteed optimal complexity O(m + n log n), where m = |union assertions|, n = |(sub)terms|.

Extension to ground Horn clauses with equality: 13 more rules.

Theorem [Ganzinger/McAllester 01]: Satisfiability of a set D of ground Horn clauses with equality can be decided in time O(||D|| + n log n + min(m log n, n²)), where m is the number of antecedents and input clauses and n is the number of terms. This is optimal (= O(||D||)) whenever m is in Ω(n²).

Logic view: we can (partly) deal with logic programs with equality.

Applications: several program analysis algorithms (Steensgaard, Henglein).

Formal Notion of Redundancy

Let ≻ be a well-founded ordering on ground atoms.

Definition: A is redundant in S (denoted A ∈ Red(S)) whenever A1, …, An ⊨R A, with Ai in S such that Ai ≺ A.

Properties: stable under enrichments and under deletion of redundant atoms.

Definition: S is saturated up to redundancy wrt R if R(S \ Red(S)) ⊆ S ∪ Red(S).

Theorem: If deletion is based on redundancy, then the result of every computation is saturated wrt R up to redundancy.

Corollary: Priorities are logically irrelevant ⇒ choose them so as to minimize prefix firings.

Deletions based on Redundancy

Criterion: If r = [A1], …, [Ak], B1, …, Bm ⊃ B and S ∪ {A1σ, …, Akσ, B1σ, …, Bmσ} is visible to r, then Aiσ ∈ Red(S ∪ {B1σ, …, Bmσ, Bσ}).

Union-find example: not so easy to check; needs proof orderings à la Bachmair and Dershowitz.

Note: redundancy should also be efficiently decidable.

slide-90
SLIDE 90
III. Instance-based Priorities

slide-94
SLIDE 94

Shortest Paths

23

(Init)  ⊤ ⊃ dist(src) ≤ 0

(Upd)  [[dist(x) ≤ d]], dist(x) ≤ d′, d′ < d ⊃ ⊤

(Add)  dist(x) ≤ d, x →c y ⊃ dist(y) ≤ c + d   (x →c y: an edge from x to y of cost c)

Correctness: obvious; deletion is based on redundancy.

Priorities (Dijkstra): always choose an instance of (Add) where d is minimal ⇒ allow for instance-based rule priorities (Init) > (Upd) > (Add)[n/d] > (Add)[m/d], for m > n.

Prefix firing count: O(|E|), but Dijkstra's algorithm runs in time O(|E| + |V| log |V|) ⇒ one cannot expect a linear-time meta-complexity theorem for instance-based priorities.
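The instance-based priorities above — always fire the (Add) instance with minimal d — are exactly Dijkstra's strategy. A minimal sketch of how this rule set executes (a plain Python rendering with a heap-based agenda; the graph encoding and names are my own, not the talk's):

```python
import heapq

def shortest_paths(edges, src):
    """Run (Init)/(Upd)/(Add) with Dijkstra's instance-based priorities.

    edges: dict mapping x to a list of (c, y) pairs, one per edge x -c-> y.
    Returns the surviving atoms dist(x) <= d as a dict x -> d.
    """
    dist = {src: 0}                  # (Init): assert dist(src) <= 0
    agenda = [(0, src)]              # pending (Add) instances, keyed by d
    while agenda:
        d, x = heapq.heappop(agenda)     # minimal-d instance fires first
        if d > dist.get(x, float("inf")):
            continue                     # (Upd): this atom was deleted as redundant
        for c, y in edges.get(x, []):
            if c + d < dist.get(y, float("inf")):
                dist[y] = c + d          # (Add) fires; the old bound for y is deleted
                heapq.heappush(agenda, (c + d, y))
    return dist

edges = {"s": [(1, "a"), (4, "b")], "a": [(2, "b")]}
# shortest_paths(edges, "s") -> {"s": 0, "a": 1, "b": 3}
```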


slide-98
SLIDE 98

Minimum Spanning Tree

24

Basis: Union-find module

(Del)  [[x ↔c y]], x ⇒! z, y ⇒! z ⊃ ⊤

(Add)  [[x ↔c y]] ⊃ mst(x, c, y), union(x, y)

Priorities (here needed for correctness): union-find > (Del) > (Add)[n/c] > (Add)[m/c], for m > n.

Prefix firing count: O(|E| + |V| log |V|)
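With these priorities — cheapest edges first, union-find rules before everything else — the program behaves like Kruskal's algorithm. A sketch under that reading (my own Python rendering; the inlined union-find stands in for the module the talk assumes):

```python
def minimum_spanning_tree(vertices, edges):
    """(Del)/(Add) with cheapest-first priorities, i.e. Kruskal's algorithm.

    edges: list of (c, x, y) triples for undirected edges x <-c-> y.
    Returns the derived mst(x, c, y) atoms as a list of (x, c, y) triples.
    """
    parent = {v: v for v in vertices}       # union-find module (no ranks, for brevity)

    def find(v):                            # x =>! z: the representative of x
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v

    mst = []
    for c, x, y in sorted(edges):           # (Add)[n/c] fires before (Add)[m/c], n < m
        rx, ry = find(x), find(y)
        if rx == ry:
            continue                        # (Del): same representative, drop the edge
        mst.append((x, c, y))               # (Add): assert mst(x, c, y) ...
        parent[rx] = ry                     # ... and union(x, y)
    return mst

vs = ["a", "b", "c"]
es = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c")]
# minimum_spanning_tree(vs, es) -> [("a", 1, "b"), ("b", 2, "c")]
```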

slide-99
SLIDE 99

3rd Meta-Complexity Theorem

25

Programs: as before, but priorities of rule instances depend on the first atom in the antecedent and can be computed from that atom in constant time.

Theorem [in preparation]. Let R be an inference system such that R∗(D) is finite. Then some R(D) can be computed in time O(||D|| + πR(D) log p), where p is the number of distinct priorities assigned to atoms in R∗(D).

Corollary. The 2nd meta-complexity theorem is a special case.

Proof: technically involved; uses priority queues with log-time operations; memory usage is worse.
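The proof idea — drive the fixpoint iteration with a priority queue over pending atoms — can be sketched generically (a toy rendering only, not the paper's construction; the rule and priority representations are my own):

```python
import heapq

def saturate(facts, rules, priority):
    """Generic agenda loop: process pending atoms in priority order.

    rules: functions mapping (new atom, database) to newly derivable atoms.
    priority: computable from an atom in constant time, as the theorem requires.
    """
    db = set()
    agenda = [(priority(a), a) for a in facts]
    heapq.heapify(agenda)                       # log-time queue operations
    while agenda:
        _, atom = heapq.heappop(agenda)
        if atom in db:
            continue                            # already processed
        db.add(atom)
        for rule in rules:
            for derived in rule(atom, db):      # prefix firings of the rule
                if derived not in db:
                    heapq.heappush(agenda, (priority(derived), derived))
    return db

# Toy example: transitive closure of path/2, with all priorities equal.
def trans(atom, db):
    _, x, y = atom
    new = [("path", x, z) for (_, y2, z) in db if y2 == y]   # new o existing
    new += [("path", w, y) for (_, w, x2) in db if x2 == x]  # existing o new
    return new

facts = [("path", 1, 2), ("path", 2, 3)]
# saturate(facts, [trans], lambda a: 0) additionally derives ("path", 1, 3)
```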
slide-103
SLIDE 103

Further Issues and Questions

26

  • a concept for modules needed (cf. IJCAR paper)
  • deletion not always based on redundancy
  • “real equality” (on the meta-level)
  • how far do we get?
  • is deduction without deletion inherently less efficient?
  • implementation of instance-based priorities with schematic priorities?

  • bounds for memory consumption
  • improved meta-complexity theorems