

SLIDE 1

Implementing the Omega Test in HOL

Outline:
• Basic Fourier-Motzkin variable elimination
• Omega's extension to F-M variable elimination
• Implementing this in HOL
• On the need for efficiency in conversion to DNF

ARG lunch – p.1


SLIDE 3

Fourier-Motzkin Variable Elimination

The basis for Hodes's method (ARITH_CONV in HOL, and decision procedures in Isabelle, ACL2 and Coq). Fundamental theorem:

(∃x. a ≤ αx ∧ βx ≤ b) ≡ aβ ≤ αb

True over R (and Q)... but false over Z. E.g., (∃x. 3 ≤ 2x ∧ 2x ≤ 3) ≡ 6 ≤ 6

ARG lunch – p.2
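The failure over Z can be checked mechanically. Here is a small illustrative Python sketch (not part of the HOL development; the function name is invented) that brute-forces both sides of the core theorem over a finite slice of each domain, large enough to contain any witness for this example:

```python
from fractions import Fraction

def fm_core_holds(a, alpha, beta, b, domain):
    """Check (exists x. a <= alpha*x /\ beta*x <= b) == (a*beta <= alpha*b)
    by brute force over a finite slice of the given domain."""
    rhs = a * beta <= alpha * b
    lhs = any(a <= alpha * x and beta * x <= b for x in domain)
    return lhs == rhs

# Over the rationals the equivalence holds: x = 3/2 witnesses 3 <= 2x <= 3.
rationals = [Fraction(n, 2) for n in range(-20, 21)]
print(fm_core_holds(3, 2, 2, 3, rationals))   # True

# Over the integers it fails: 6 <= 6 holds, but no integer x has 3 <= 2x <= 3.
print(fm_core_holds(3, 2, 2, 3, range(-20, 21)))   # False
```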

SLIDE 4

FMVE—Multiple Constraints

Let L(x) be a conjunction of lower bounds on x, indexed by i, of the form ai ≤ αix (with αi > 0). Let U(x) be a conjunction of upper bounds on x, indexed by j, of the form βjx ≤ bj (with βj > 0).

Want to show:

(∃x. L(x) ∧ U(x)) ≡ ⋀i,j. aiβj ≤ αibj

on the assumption that the core theorem is true. (Similar "extension to n × m constraints" proofs are required for theorems over Z.)

ARG lunch – p.3
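As a sanity check on the n × m statement, here is a small Python rendering of the right-hand side (illustrative only; the pair encodings mirror the slide's (ai, αi) and (βj, bj) conventions, and the function name is invented):

```python
def real_shadow(lowers, uppers):
    """Each lower bound is a pair (a_i, alpha_i) meaning a_i <= alpha_i * x;
    each upper bound is (beta_j, b_j) meaning beta_j * x <= b_j.
    The shadow is the conjunction over all i, j of a_i*beta_j <= alpha_i*b_j."""
    return all(a * beta <= alpha * b
               for (a, alpha) in lowers
               for (beta, b) in uppers)

# 1 <= x, 2 <= 3x and x <= 4: shadow is 1*1 <= 1*4 and 2*1 <= 3*4.
print(real_shadow([(1, 1), (2, 3)], [(1, 4)]))   # True

# 5 <= x and x <= 2: shadow is 5 <= 2, correctly unsatisfiable.
print(real_shadow([(5, 1)], [(1, 2)]))           # False
```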


SLIDE 6

Multiple Constraints: Induction #1

Many upper bounds, one lower bound. Have:

(∃x. a ≤ αx ∧ U(x)) ≡ ⋀j. aβj ≤ αbj

Want:

(∃x. a ≤ αx ∧ βx ≤ b ∧ U(x)) ≡ (⋀j. aβj ≤ αbj) ∧ aβ ≤ αb

Left to right is easy: I.H. gives first conjunct; core theorem gives second.

ARG lunch – p.4


SLIDE 9

Multiple Constraints: Induction #1

Many upper bounds, one lower bound. Have:

(∃x. a ≤ αx ∧ U(x)) ≡ ⋀j. aβj ≤ αbj

Want:

(∃x. a ≤ αx ∧ βx ≤ b ∧ U(x)) ≡ (⋀j. aβj ≤ αbj) ∧ aβ ≤ αb

Right to left: I.H. gives us ∃y. a ≤ αy ∧ U(y). Core theorem gives ∃z. a ≤ αz ∧ βz ≤ b.

y and z both satisfy the (a, α)-constraint. The minimum of y and z will satisfy both upper-bound constraints.

ARG lunch – p.5


SLIDE 11

Multiple Constraints: Induction #2

n upper bounds, m lower bounds. Have:

(∃x. L(x) ∧ U(x)) ≡ ⋀i,j. aiβj ≤ αibj

Want:

(∃x. a ≤ αx ∧ L(x) ∧ U(x)) ≡ (⋀i,j. aiβj ≤ αibj) ∧ ⋀j. aβj ≤ αbj

Left to right: first conjunct by I.H.; second by appeal to induction #1.

ARG lunch – p.6


SLIDE 14

Multiple Constraints: Induction #2

n upper bounds, m lower bounds. Have:

(∃x. L(x) ∧ U(x)) ≡ ⋀i,j. aiβj ≤ αibj

Want:

(∃x. a ≤ αx ∧ L(x) ∧ U(x)) ≡ (⋀i,j. aiβj ≤ αibj) ∧ ⋀j. aβj ≤ αbj

Right to left: I.H. gives ∃y. L(y) ∧ U(y). Induction #1 gives ∃z. a ≤ αz ∧ U(z).

y and z both satisfy U. Take their maximum to satisfy L and the other lower-bound constraint.

ARG lunch – p.7


SLIDE 18

Exact Shadow Elimination

The formula

⋀i,j. aiβj ≤ αibj

is known as the real shadow. If all of the αi or all of the βj are equal to 1, then we can use it to eliminate quantifiers over Z. The core theorem

(∃x. a ≤ αx ∧ βx ≤ b) ≡ aβ ≤ αb

is true over Z because...
left to right: transitivity still holds
right to left: take x = b if β = 1, x = a if α = 1

ARG lunch – p.8

SLIDE 19

Shadows with Splinters

Pugh claims that exact shadow eliminations occur frequently.

Otherwise, the following theorem is required. Let m be the maximum of all the βj's. Then

(∃x. L(x) ∧ U(x)) ≡
  (⋀i,j. (αi − 1)(βj − 1) ≤ αibj − aiβj)
  ∨ ⋁i ⋁k=0..(mαi−αi−m)/m. ∃x. (αix = ai + k) ∧ L(x) ∧ U(x)

The first disjunct is known as the dark shadow. The other disjuncts are known as splinters.

ARG lunch – p.9
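The splintering theorem can itself be spot-checked by brute force. The sketch below (illustrative only, not the HOL formalisation; names invented) computes both sides over a finite range of integers, using the slide's pair conventions:

```python
def omega_equiv(lowers, uppers, xs=range(-50, 51)):
    """Pugh's splintering theorem over Z, checked by brute force.
    lowers: pairs (a_i, alpha_i) for a_i <= alpha_i*x;
    uppers: pairs (beta_j, b_j) for beta_j*x <= b_j.
    Returns (lhs, rhs), which the theorem says must agree."""
    def L(x): return all(a <= alpha * x for a, alpha in lowers)
    def U(x): return all(beta * x <= b for beta, b in uppers)

    lhs = any(L(x) and U(x) for x in xs)

    dark = all((alpha - 1) * (beta - 1) <= alpha * b - a * beta
               for a, alpha in lowers for beta, b in uppers)
    m = max(beta for beta, _ in uppers)
    splinters = any(alpha * x == a + k and L(x) and U(x)
                    for a, alpha in lowers
                    for k in range((m * alpha - alpha - m) // m + 1)
                    for x in xs)
    return lhs, dark or splinters

# The bad real-shadow example: 3 <= 2x /\ 2x <= 3 has no integer solution,
# and neither dark shadow nor splinters claim otherwise.
print(omega_equiv([(3, 2)], [(2, 3)]))   # (False, False)

# 6 <= 3x /\ 3x <= 6: the dark shadow fails, but the k = 0 splinter
# finds the solution x = 2.
print(omega_equiv([(6, 3)], [(3, 6)]))   # (True, True)
```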

SLIDE 20

Proof of Core Omega Theorem

Result is of the form

(∃x. L(x) ∧ U(x)) ≡ "dark shadow" ∨ "splinters"

Proof has three cases:
"dark shadow" ⇒ ∃x. L(x) ∧ U(x)
"splinters" ⇒ ∃x. L(x) ∧ U(x)
(∃x. L(x) ∧ U(x)) ∧ ¬"dark shadow" ⇒ "splinters"

ARG lunch – p.10


SLIDE 30

Core Omega Theorem—Case 1

⋀i,j. (αi − 1)(βj − 1) ≤ αibj − aiβj ⇒ ∃x. L(x) ∧ U(x)

Do the singleton case, extend by two inductions as before:

(α − 1)(β − 1) ≤ αb − aβ ⇒ ∃x. a ≤ αx ∧ βx ≤ b

Assume the opposite, so ¬∃x. aβ ≤ αβx ∧ αβx ≤ αb. No multiple of αβ lies between aβ and αb, so

∃i. αβi < aβ ≤ αb < αβ(i + 1)

Have 0 < αβ(i + 1) − αb, so 1 ≤ β(i + 1) − b, so α ≤ αβ(i + 1) − αb. Similarly, β ≤ aβ − αβi.

Thus, α + β ≤ αβ + aβ − αb. Rearrange to αb − aβ < αβ − α − β + 1 = (α − 1)(β − 1). Contradicts the assumption!

ARG lunch – p.11
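The singleton case can be checked exhaustively on a small grid. In the sketch below (illustrative, not the HOL proof; the function name is invented), the witness is x = ⌈a/α⌉, the least x satisfying the lower bound, which by the argument above must also satisfy the upper bound whenever the dark shadow holds:

```python
def dark_shadow_singleton_ok(a, alpha, beta, b):
    """If (alpha-1)*(beta-1) <= alpha*b - a*beta, then some integer x
    satisfies a <= alpha*x /\ beta*x <= b (singleton dark shadow case)."""
    if (alpha - 1) * (beta - 1) > alpha * b - a * beta:
        return True              # hypothesis fails, nothing to show
    lo = -(-a // alpha)          # ceil(a / alpha): least x with a <= alpha*x
    return beta * lo <= b        # that x also meets the upper bound

# Exhaustive check over a grid of bounds and positive coefficients.
print(all(dark_shadow_singleton_ok(a, al, be, b)
          for a in range(-6, 7) for b in range(-6, 7)
          for al in range(1, 5) for be in range(1, 5)))   # True
```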


SLIDE 32

Core Omega Theorem—Case 2

⋁i ⋁k=0..(mαi−αi−m)/m. ∃x. (αix = ai + k) ∧ L(x) ∧ U(x) ⇒ ∃x. L(x) ∧ U(x)

Trivial: each disjunct provides a witness that satisfies L and U.

ARG lunch – p.12

SLIDE 40

Core Omega Theorem—Case 3

(∃x. L(x) ∧ U(x)) ∧ ¬(⋀i,j. (αi − 1)(βj − 1) ≤ αibj − aiβj) ⇒
  ⋁i ⋁k=0..(mαi−αi−m)/m. ∃x. (αix = ai + k) ∧ L(x) ∧ U(x)

The negated second assumption means

αb − aβ ≤ αβ − β − α

(for some particular α, β, a and b from L, U). From assumption #1, βx ≤ b, so αβx ≤ αb. Combining,

αβx ≤ aβ + αβ − β − α ⇒ β(αx − a) ≤ αβ − β − α ⇒ αx − a ≤ (αβ − β − α)/β

and (αβ − β − α)/β ≤ (mα − α − m)/m since β ≤ m. Can now pick the appropriate splinter-disjunct (take k = αx − a) and we're done.

ARG lunch – p.13

SLIDE 41

Splinters' Equality Constraints

Splinters' ∃x. (cx = e) ∧ ... can be eliminated: multiply through the other terms so that they mention cx, then replace those sub-terms by e. Pull in the quantifier, and the formula becomes c|e.

ARG lunch – p.14
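For a concrete instance, here is a Python sketch of this elimination for a system of bounds (illustrative only, assuming c > 0; the function name is invented). Both sides are computed independently, so the example doubles as a check:

```python
def eliminate_equality(c, e, lowers, uppers, xs=range(-60, 61)):
    """exists x. c*x = e /\ L(x) /\ U(x)  is equivalent to
    c | e  together with the bounds scaled by c and c*x replaced by e.
    lowers: (a, al) for a <= al*x; uppers: (be, b) for be*x <= b.
    Returns (lhs, rhs); the elimination says they agree."""
    lhs = any(c * x == e and
              all(a <= al * x for a, al in lowers) and
              all(be * x <= b for be, b in uppers)
              for x in xs)
    # a <= al*x  becomes  c*a <= al*(c*x) = al*e ; similarly for uppers
    rhs = (e % c == 0 and
           all(c * a <= al * e for a, al in lowers) and
           all(be * e <= c * b for be, b in uppers))
    return lhs, rhs

# 3x = 6 with 1 <= 2x and x <= 5: the witness is x = 2, and 3 | 6.
print(eliminate_equality(3, 6, [(1, 2)], [(1, 5)]))   # (True, True)
# 3x = 7 has no integer solution, and 3 does not divide 7.
print(eliminate_equality(3, 7, [(1, 2)], [(1, 5)]))   # (False, False)
```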


SLIDE 43

Eliminating Divisibility Constraints

To eliminate c|(dx + e) with x existentially quantified:
Reduce all coefficients on the RHS by taking "mod c"
Introduce a fresh existential variable y: ∃y. cy = dx + e
d < c, so eliminate x, to get d|(e − cy)
Iterate...

To eliminate ¬(c|e), convert to ⋁i=1..c−1. c|(e + i)

ARG lunch – p.15
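The iteration terminates because the modulus shrinks at each step, much like the Euclidean algorithm. A minimal Python sketch of the descent (illustrative, assuming c > 0; the function name is invented):

```python
def divides_solvable(c, d, e):
    """Decide  exists x in Z. c | (d*x + e)  by the descent above:
    reduce the coefficient mod c, then introduce y with c*y = d*x + e
    and eliminate x, leaving a divisibility with a smaller modulus."""
    d = d % c
    if d == 0:
        return e % c == 0        # constraint no longer mentions x
    # exists x. c | (d*x + e)  iff  exists y. d | (c*y - e), with d < c
    return divides_solvable(d, c, -e)

# 4 | (6x + 2) is solvable (x = 1 gives 8); 4 | (6x + 1) is not.
print(divides_solvable(4, 6, 2))   # True
print(divides_solvable(4, 6, 1))   # False
```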

SLIDE 44

Implementation #1

Prove core theorems in HOL. The code for the decision procedure is a specialised rewriting engine: it transforms equivalent terms to equivalent terms. All the work is done in the logic.

ARG lunch – p.16

SLIDE 45

Implementation #1 (continued)

Hard to prove facts or define functions over formulas directly: formulas would have type :bool, or perhaps :int -> bool.

Instead, define special syntax and evaluation functions. I have two functions, evallower and evalupper, both of type :int -> (num # int) list -> bool.

These provide a concrete (or "shadow") syntax to manipulate.

ARG lunch – p.17

SLIDE 46

Evaluation Functions

Defining equations:

evallower x [] = T
evallower x ((c,y)::cs) = y <= &c * x /\ evallower x cs

evalupper x [] = T
evalupper x ((c,y)::cs) = &c * x <= y /\ evalupper x cs

ARG lunch – p.18
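The same evaluation functions can be mirrored in Python to see what they compute (a model for illustration, not the HOL definitions):

```python
def evallower(x, cs):
    """Mirror of HOL's evallower: cs is a list of (c, y) pairs,
    each encoding the lower-bound constraint  y <= c * x."""
    return all(y <= c * x for (c, y) in cs)

def evalupper(x, cs):
    """Mirror of HOL's evalupper: each (c, y) encodes  c * x <= y."""
    return all(c * x <= y for (c, y) in cs)

# x = 2 satisfies 1 <= 3x and 2x <= 5; x = 3 violates the upper bound.
print(evallower(2, [(3, 1)]) and evalupper(2, [(2, 5)]))   # True
print(evalupper(3, [(2, 5)]))                              # False
```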

SLIDE 47

Evaluation Functions

Now some arbitrary HOL formula

?x. 3 + y <= x ...

can be converted into a standard form

?x. evallower x lows /\ evalupper x ups

and, most importantly, it's a theorem:

|- <oldform> = ?x. evallower ...

Now appeal to the core theorem...

ARG lunch – p.19

SLIDE 48

Core Theorems in HOL

With splinters (the "nightmare scenario"):

|- !uppers lowers m.
     EVERY fst_nzero uppers /\ EVERY fst_nzero lowers /\
     EVERY (\p. FST p <= m) uppers ==>
     ((?x. evalupper x uppers /\ evallower x lowers) =
      dark_shadow uppers lowers \/
      ?x. nightmare x m uppers lowers lowers)

Or, if exact shadow elimination is possible, use

|- !uppers lowers.
     EVERY fst_nzero uppers /\ EVERY fst_nzero lowers ==>
     EVERY fst1 uppers \/ EVERY fst1 lowers ==>
     ((?x. evalupper x uppers /\ evallower x lowers) =
      real_shadow uppers lowers)

ARG lunch – p.20

SLIDE 49

Calculating Shadows in the Logic

Two characterisations of the real shadow. Logical:

|- real_shadow uppers lowers =
     !c d L R. MEM (c,L) uppers /\ MEM (d,R) lowers ==>
               &c * R <= &d * L

Made for rewriting:

|- (real_shadow [] lowers = T) /\
   (real_shadow (upper::us) lowers =
      rshadow_row upper lowers /\ real_shadow us lowers)

|- (rshadow_row (upc,upy) [] = T) /\
   (rshadow_row (upc,upy) ((lowc,lowy)::rs) =
      &upc * lowy <= &lowc * upy /\ rshadow_row (upc,upy) rs)

ARG lunch – p.21
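The rewriting characterisation is just structural recursion over the two lists, and can be mirrored directly (a Python model for illustration, not the HOL code; the pair conventions match the theorems above):

```python
def rshadow_row(upper, lowers):
    """upper = (upc, upy) encodes upc * x <= upy; each lower (lowc, lowy)
    encodes lowy <= lowc * x. One upper bound against every lower bound."""
    if not lowers:
        return True
    (upc, upy) = upper
    (lowc, lowy) = lowers[0]
    return upc * lowy <= lowc * upy and rshadow_row(upper, lowers[1:])

def real_shadow(uppers, lowers):
    """Fold rshadow_row over the upper bounds, as in the rewrite rules."""
    if not uppers:
        return True
    return rshadow_row(uppers[0], lowers) and real_shadow(uppers[1:], lowers)

# 2x <= 3 against 1 <= 3x: the shadow is 2*1 <= 3*3.
print(real_shadow([(2, 3)], [(3, 1)]))   # True
# x <= 2 against 5 <= x: the shadow is 5 <= 2.
print(real_shadow([(1, 2)], [(1, 5)]))   # False
```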

SLIDE 50

Implementation #2

Design philosophy, for purely existential or universal goals:
If the goal is unsatisfiable, find a proof of contradiction outside the logic.
If the goal is satisfiable, find a satisfying assignment outside the logic.

This approach can "win big" because every variable elimination turns n constraints into O(n²) new constraints. Having to manage this explosion in the logic is costly, particularly as most of the work is redundant.

ARG lunch – p.22

SLIDE 51

Implementation #2: Details

Calculate the real shadow for all of the variables. If this is false, so too is the original (existential) goal. If inexact, then calculate the dark shadow. If this provides a satisfying assignment, we're done. Otherwise, must resort to (symbolic) splinters.

Calculate new constraints outside the logic, but accompany each with a data structure that would allow it to be proved if necessary:

datatype derivation = ASM of term
                    | REAL_COMBIN of int * derivation * derivation
                    | GCD_CHECK of derivation
                    | DIRECT_CONTR of derivation * derivation

(Richard's ARITH_CONV uses closures here.)

ARG lunch – p.23


SLIDE 53

Implementation #2: More Details

If the system is reduced to one variable x, it will have at most two constraints: x ≤ c and d ≤ x. If d ≤ c, return a satisfying assignment with x ↦ c. Recurse back through the other variables. Otherwise, derive a contradiction.

If the system ever has a variable x with only upper (lower) bounds, return a satisfying assignment setting all other present variables to zero, and x to the minimum (maximum) of the resulting constants. Recurse back through the other variables.

ARG lunch – p.24
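The one-variable endgame is simple enough to render as a toy Python function (illustrative only; the name and interface are invented, and the real implementation recurses back through the eliminated variables):

```python
def endgame(upper=None, lower=None):
    """One-variable endgame: constraints are x <= upper and lower <= x.
    Return a witness for x, or None if the constraints contradict."""
    if upper is not None and lower is not None:
        return upper if lower <= upper else None   # take x = c when d <= c
    if upper is not None:
        return upper       # only an upper bound: take it
    if lower is not None:
        return lower       # only a lower bound: take it
    return 0               # unconstrained: anything works

print(endgame(upper=7, lower=3))   # 7
print(endgame(upper=2, lower=5))   # None
```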

SLIDE 54

Interpreting Results

If doing exact shadow elimination, both satisfying assignments and contradictions are valid. If doing a real (but inexact) shadow, only contradictions make sense. If doing a dark shadow, only satisfying assignments make sense. The above is summarised by the theorems:

(∃x. L(x) ∧ U(x)) ≡ "real shadow"   (if exact)
(∃x. L(x) ∧ U(x)) ⇒ "real shadow"
"dark shadow" ⇒ (∃x. L(x) ∧ U(x))

ARG lunch – p.25


SLIDE 57

The Trials of DNF

For solving "typical", interactive goals, the Achilles' heel of this algorithm is the requirement to convert to DNF. This happens in two places.

Initially. Goals involving natural number subtraction can be particularly bad. Cooper's algorithm (though generally slower) solves this faster than Omega:

!m n. 0 < m /\ 0 < n ==> ((PRE m = PRE n) = (m = n))

With alternating quantifiers. ∀x. ∃y. P(x, y) will be converted to ¬∃x. ¬∃y. P(x, y). When the innermost quantifier is eliminated, the negation must be pushed in over it, and everything converted back to DNF.

ARG lunch – p.26

SLIDE 58

The Importance of "gist"

Pugh and Wonnacott talk about using a gist operation to simplify terms of the form

P ∧ (∃x. ...)

where the ... gets rewritten while assuming P. This is contextual rewriting with the theorem

(P ⇒ (Q ≡ Q′)) ⇒ (P ∧ Q ≡ P ∧ Q′)

Implementing this well is vital to good performance. It made a difference of an order of magnitude on a VCG-generated goal of Peter Homeier's.

ARG lunch – p.27

SLIDE 59

Summary

The Omega Test is an extension of the well-known Fourier-Motzkin variable elimination technique. In all those cases where the existing, incomplete FMVE for integers works, the Omega Test will work just as well. Omega is also complete, so it can additionally solve goals with any arrangement of quantifiers. Implemented in the Kananaskis release of HOL.

ARG lunch – p.28