Iris: a framework for higher-order concurrent separation logic in Coq

Robbert Krebbers¹

Delft University of Technology, The Netherlands

January 15, 2017 @ TTT, Paris, France

¹Iris is joint work with: Ralf Jung, Jacques-Henri Jourdan, Aleš Bizjak, David Swasey, Filip Sieczkowski, Kasper Svendsen, Aaron Turon, Amin Timany, Derek Dreyer, and Lars Birkedal

What is Iris?

Language-independent higher-order separation logic with simple foundations for modular reasoning about fine-grained concurrency in Coq.

◮ Fine-grained concurrency: synchronization primitives and lock-free data structures are implemented
◮ Modular: reusable and composable specifications
◮ Language independent: parametrized by the language
◮ Simple foundations: small set of primitive rules
◮ Coq: provides practical support for doing proofs in Iris

The versatility of Iris

The scope of Iris goes beyond proving traditional program correctness using Hoare triples:

◮ The Rust type system (Jung, Jourdan, Dreyer, Krebbers)
◮ Logical relations (Krogh-Jespersen, Svendsen, Timany, Birkedal, Tassarotti, Jung, Krebbers)
◮ Weak memory concurrency (Kaiser, Dang, Dreyer, Lahav, Vafeiadis)
◮ Object calculi (Swasey, Dreyer, Garg)
◮ Logical atomicity (Krogh-Jespersen, Zhang, Jung)
◮ Defining Iris (Krebbers, Jung, Jourdan, Bizjak, Dreyer, Birkedal)

Most of these projects are formalized in Iris in Coq.

This talk

Show that ideas from concurrent separation logic can be used to encode concurrent separation logic itself.

Why?

◮ Smaller base logic
◮ Notions like Hoare triples {P} e {Q} are not primitives, but:
  ◮ are defined in the logic
  ◮ at a higher level of abstraction
  ◮ resulting in simpler proofs
◮ Gives a better intellectual understanding
◮ Eases the formalization in Coq


Preview of the rules of the Iris base logic

Laws of (affine) bunched implications:

  True ∗ P ⊣⊢ P
  P ∗ Q ⊢ Q ∗ P
  (P ∗ Q) ∗ R ⊢ P ∗ (Q ∗ R)
  P1 ⊢ Q1 and P2 ⊢ Q2 imply P1 ∗ P2 ⊢ Q1 ∗ Q2
  P ∗ Q ⊢ R iff P ⊢ Q −∗ R

Laws for resources and validity:

  Own(a) ∗ Own(b) ⊣⊢ Own(a · b)
  True ⊢ Own(ε)
  Own(a) ⊢ □ Own(|a|)
  Own(a) ⊢ V(a)
  V(a · b) ⊢ V(a)
  V(a) ⊢ □ V(a)

Laws for the basic update modality:

  P ⊢ Q implies |⇛P ⊢ |⇛Q
  P ⊢ |⇛P
  |⇛|⇛P ⊢ |⇛P
  Q ∗ |⇛P ⊢ |⇛(Q ∗ P)
  a ⇝ B implies Own(a) ⊢ |⇛∃b ∈ B. Own(b)

Laws for the always modality:

  P ⊢ Q implies □P ⊢ □Q
  □P ⊢ P
  True ⊢ □True
  □(P ∧ Q) ⊢ □(P ∗ Q)
  □P ∧ Q ⊢ □P ∗ Q
  □P ⊢ □□P
  ∀x. □P ⊢ □∀x. P
  □∃x. P ⊢ ∃x. □P

Laws for the later modality:

  P ⊢ Q implies ⊲P ⊢ ⊲Q
  (⊲P ⇒ P) ⊢ P
  ∀x. ⊲P ⊢ ⊲∀x. P
  ⊲∃x. P ⊢ ⊲False ∨ ∃x. ⊲P
  ⊲(P ∗ Q) ⊣⊢ ⊲P ∗ ⊲Q
  □⊲P ⊣⊢ ⊲□P

Laws for timeless assertions:

  ⊲P ⊢ ⊲False ∨ (⊲False ⇒ P)
  ⊲Own(a) ⊢ ∃b. Own(b) ∧ ⊲(a = b)


Part #1: brief introduction to concurrent separation logic (CSL)

Hoare triples

Hoare triples for partial program correctness: {P} e {w. Q}, with precondition P, binder w for the return value, and postcondition Q.

If the initial state satisfies P, then:

◮ e does not get stuck/crash
◮ if e terminates with value v, the final state satisfies Q[v/w]

Separation logic [O'Hearn, Reynolds, Yang]

The points-to connective x ↦ v:

◮ provides the knowledge that location x has value v, and
◮ provides exclusive ownership of x

Separating conjunction P ∗ Q: the state consists of disjoint parts satisfying P and Q.

Example:

  {x ↦ v1 ∗ y ↦ v2} swap(x, y) {w. w = () ∧ x ↦ v2 ∗ y ↦ v1}

The ∗ ensures that x and y are different.
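The role of the ∗ in the swap specification can be seen operationally: if x and y alias, the precondition x ↦ v1 ∗ y ↦ v2 is unsatisfiable, so the triple promises nothing about that case. A minimal Python sketch, modelling heap cells as one-element lists (the names are illustrative, not from the slides):

```python
# Model heap cells as one-element lists; swap exchanges their contents.
def swap(x, y):
    x[0], y[0] = y[0], x[0]

x = [1]
y = [2]
swap(x, y)
assert (x[0], y[0]) == (2, 1)  # disjoint cells: the postcondition holds

# With aliasing, x ↦ v1 ∗ y ↦ v2 is unsatisfiable; operationally,
# swapping a cell with itself is a no-op.
z = [7]
swap(z, z)
assert z[0] == 7
```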

Concurrent separation logic [O'Hearn]

The par rule:

  {P1} e1 {Q1}    {P2} e2 {Q2}
  ────────────────────────────
  {P1 ∗ P2} e1 || e2 {Q1 ∗ Q2}

For example:

  {x ↦ 4 ∗ y ↦ 6}
    {x ↦ 4}          {y ↦ 6}
    x := ! x + 2  ||  y := ! y + 2
    {x ↦ 6}          {y ↦ 8}
  {x ↦ 6 ∗ y ↦ 8}

Works great for concurrent programs without shared memory: concurrent quick sort, concurrent merge sort, . . .
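The disjoint-footprint reading of the par rule can be mimicked operationally: two threads that touch different locations compose without interference. A small Python sketch, with threading standing in for e1 || e2 (the setup is mine, not from the slides):

```python
import threading

# Two "locations" as one-element lists, like x ↦ 4 and y ↦ 6.
x, y = [4], [6]

def incr(r):
    # r := !r + 2 on this thread's own location
    r[0] = r[0] + 2

t1 = threading.Thread(target=incr, args=(x,))
t2 = threading.Thread(target=incr, args=(y,))
t1.start(); t2.start()
t1.join(); t2.join()

assert (x[0], y[0]) == (6, 8)  # matches the postcondition x ↦ 6 ∗ y ↦ 8
```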

What about shared state/racy programs?

A classic problem:

  {True}
  let x = ref(0) in
  {x ↦ 0}
    {??}                  {??}
    fetchandadd(x, 2)  ||  fetchandadd(x, 2)
    {??}                  {??}
  ! x
  {w. w = 4}

where fetchandadd(x, y) is the atomic version of x := ! x + y.

Problem: we can only give ownership of x to one thread.
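Operationally the program is unproblematic because fetch-and-add is atomic; the difficulty is purely in the logic. A Python sketch of the program, with a lock standing in for the atomicity of fetchandadd (the function and variable names are mine):

```python
import threading

lock = threading.Lock()
x = [0]  # let x = ref(0)

def fetch_and_add(r, k):
    # Atomic version of r := !r + k, modelled with a lock.
    with lock:
        old = r[0]
        r[0] = old + k
        return old

t1 = threading.Thread(target=fetch_and_add, args=(x, 2))
t2 = threading.Thread(target=fetch_and_add, args=(x, 2))
t1.start(); t2.start()
t1.join(); t2.join()

assert x[0] == 4  # the postcondition {w. w = 4} of ! x
```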

Invariants

The invariant assertion [R]^N expresses that R is maintained as an invariant on the state.

Invariant opening:

  {⊲R ∗ P} e {⊲R ∗ Q}_E    (e atomic)
  ───────────────────────────────────
  [R]^N ⊢ {P} e {Q}_{E ⊎ N}

Invariant allocation:

  [R]^N ⊢ {P} e {Q}_E
  ────────────────────
  {⊲R ∗ P} e {Q}_E

Technical detail: the names N are needed to avoid reentrancy, i.e., opening the same invariant twice.

Other technical detail: the later modality ⊲ is needed to support impredicative invariants, i.e., invariants that mention other invariants, as in [. . . [R]^{N2} . . .]^{N1}.

Invariants in action

Let us consider a simpler problem first:

  {True}
  let x = ref(0) in
  {x ↦ 0}
  allocate the invariant [∃n. x ↦ n ∧ even(n)]
    {True}                  {True}
    fetchandadd(x, 2)  ||   fetchandadd(x, 2)
    {True}                  {True}
  ! x
  {n. even(n)}

Each fetchandadd opens the invariant: {x ↦ n ∧ even(n)} fetchandadd(x, 2) {x ↦ n + 2 ∧ even(n + 2)}, and so does the final read: {x ↦ n ∧ even(n)} ! x {n. x ↦ n ∧ even(n)}.

Problem: we still cannot prove that it returns 4.

Ghost variables

Consider the invariant: ∃n1, n2. x ↦ (n1 + n2) ∗ γ1 ↪● n1 ∗ γ2 ↪● n2

How to relate the quantified value to the state of the threads? Solution: ghost variables.

Ghost variables are allocated in pairs:

  True ⊢ |⇛ ∃γ. γ ↪● n ∗ γ ↪○ n

where γ ↪● n goes in the invariant and γ ↪○ n in the Hoare triple.

When you own both parts you obtain that the values are equal, and you can update both parts:

  γ ↪● n ∗ γ ↪○ m ⇒ n = m
  γ ↪● n ∗ γ ↪○ m ⊢ |⇛ γ ↪● n′ ∗ γ ↪○ n′
Ghost variables in action

  {True}
  let x = ref(0) in
  {x ↦ 0}
  {x ↦ 0 ∗ γ1 ↪● 0 ∗ γ1 ↪○ 0 ∗ γ2 ↪● 0 ∗ γ2 ↪○ 0}
  allocate the invariant [∃n1, n2. x ↦ (n1 + n2) ∗ γ1 ↪● n1 ∗ γ2 ↪● n2]
  {γ1 ↪○ 0 ∗ γ2 ↪○ 0}
    {γ1 ↪○ 0}                                              {γ2 ↪○ 0}
    open: {γ1 ↪○ 0 ∗ x ↦ (n1+n2) ∗ γ1 ↪● n1 ∗ γ2 ↪● n2}
    agree (n1 = 0): {γ1 ↪○ 0 ∗ x ↦ n2 ∗ γ1 ↪● 0 ∗ γ2 ↪● n2}
    fetchandadd(x, 2)                                  ||  {. . .} fetchandadd(x, 2) {. . .}
    {γ1 ↪○ 0 ∗ x ↦ (2+n2) ∗ γ1 ↪● 0 ∗ γ2 ↪● n2}
    update: {γ1 ↪○ 2 ∗ x ↦ (2+n2) ∗ γ1 ↪● 2 ∗ γ2 ↪● n2}
    {γ1 ↪○ 2}                                              {γ2 ↪○ 2}
  {γ1 ↪○ 2 ∗ γ2 ↪○ 2}
  open: {γ1 ↪○ 2 ∗ γ2 ↪○ 2 ∗ x ↦ (n1+n2) ∗ γ1 ↪● n1 ∗ γ2 ↪● n2}
  agree (n1 = n2 = 2): {γ1 ↪○ 2 ∗ γ2 ↪○ 2 ∗ x ↦ 4 ∗ γ1 ↪● 2 ∗ γ2 ↪● 2}
  ! x
  {n. n = 4 ∧ γ1 ↪○ 2 ∗ γ2 ↪○ 2 ∗ x ↦ 4 ∗ γ1 ↪● 2 ∗ γ2 ↪● 2}
  {n. n = 4}

Ghost variables with fractional permissions [Boyland]

What if we have n threads? Using n different ghost variables results in a different proof for each thread. That is not modular.

Better way: ghost variables with a fractional permission π ∈ (0, 1] ∩ Q:

  γ ↪^{π1+π2} (n1 + n2) ⇔ γ ↪^{π1} n1 ∗ γ ↪^{π2} n2

You only get the equality when you have full ownership (π = 1):

  γ ↪● n ∗ γ ↪^1 m ⇒ n = m

Updating is possible with partial ownership (0 < π ≤ 1):

  γ ↪● n ∗ γ ↪^π m ⊢ |⇛ γ ↪● (n + i) ∗ γ ↪^π (m + i)

This keeps the invariant that all the fragments γ ↪^{πi} ni together sum up to the full part γ ↪● (Σi ni).
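The fraction laws can be prototyped as a tiny resource on pairs (π, n): composition adds both components, and a pair is valid only while π stays in (0, 1]. A Python sketch (all names are illustrative, not Iris API):

```python
from fractions import Fraction

def frag(pi, n):
    # γ ↪^π n as a pair (fraction, value contribution)
    return (Fraction(pi), n)

def compose(a, b):
    # γ ↪^{π1+π2} (n1+n2) ⇔ γ ↪^{π1} n1 ∗ γ ↪^{π2} n2
    return (a[0] + b[0], a[1] + b[1])

def valid(a):
    return 0 < a[0] <= 1

# Split full ownership among k = 4 threads; each thread adds 2.
k = 4
parts = [frag(Fraction(1, k), 2) for _ in range(k)]

total = parts[0]
for p in parts[1:]:
    total = compose(total, p)

assert valid(total)
assert total == (Fraction(1), 8)  # full fraction 1, sum 2k = 8

# Over-splitting is ruled out: going beyond fraction 1 is invalid.
assert not valid(compose(total, frag(Fraction(1, 2), 1)))
```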
Fractional ghost variables in action

With k threads, each adding 2:

  {True}
  let x = ref(0) in
  {x ↦ 0}
  {x ↦ 0 ∗ γ ↪● 0 ∗ γ ↪^1 0}
  allocate the invariant [∃n. x ↦ n ∗ γ ↪● n]
    {γ ↪^{1/k} 0}                                  . . .  {γ ↪^{1/k} 0}
    open: {γ ↪^{1/k} 0 ∗ x ↦ n ∗ γ ↪● n}
    fetchandadd(x, 2)                              . . .  {. . .} fetchandadd(x, 2) {. . .}
    update: {γ ↪^{1/k} 2 ∗ x ↦ (2+n) ∗ γ ↪● (2+n)}
    {γ ↪^{1/k} 2}                                  . . .  {γ ↪^{1/k} 2}
  combine the fractions and open: {γ ↪^1 2k ∗ x ↦ n ∗ γ ↪● n}
  ! x
  {n. n = 2k ∧ γ ↪^1 2k ∗ x ↦ 2k ∗ γ ↪● 2k}
  {n. n = 2k}


Part #2: generalizing ownership

[Ralf Jung, David Swasey, Filip Sieczkowski, Kasper Svendsen, Aaron Turon, Lars Birkedal, and Derek Dreyer. Iris: Monoids and Invariants as an Orthogonal Basis for Concurrent Reasoning. POPL 2015]
[Ralf Jung, Robbert Krebbers, Lars Birkedal, and Derek Dreyer. Higher-Order Ghost State. ICFP 2016]


Mechanisms for concurrent reasoning

We have seen so far:

◮ Invariants [R]^N
◮ Ghost variables γ ↪● n and γ ↪○ n
◮ Fractional ghost variables γ ↪● n and γ ↪^π n

Where do these mechanisms come from?

There are many CSLs with more powerful mechanisms. . .

[Figure: family tree of program logics for concurrency — Owicki-Gries (1976), Rely-Guarantee (1983), CSL (2004), Bornat et al. (2005), Gotsman et al. (2007), SAGL (2007), RGSep (2007), Hobor et al. (2008), Deny-Guarantee (2009), LRG (2009), CAP (2010), HLRG (2010), Bell et al. (2010), Jacobs-Piessens (2011), RGSim (2012), Liang-Feng (2013), SCSL (2013), HOCAP (2013), CaReSL (2013), RSL (2013), iCAP (2014), FCSL (2014), TaDA (2014), GPS (2014), Iris (2015), CoLoSL (2015), FTCSL (2015), Total-TaDA (2016), LiLi (2016), FSL (2016), Iris 2.0 (2016), Iris 3.0 (2016)]

Picture by Ilya Sergey


. . . and very complicated primitive rules


The Iris story

The Iris story: all of these mechanisms can be encoded using a simple mechanism of resource ownership

slide-82
SLIDE 82

22

Generalizing ownership

All forms of ownership have common properties:

◮ Ownership of different threads can be composed

For example: γ

π1+π2

֒ − − − →

  • (n1 + n2)

⇔ γ

π1

֒ − →

  • n1 ∗ γ

π2

֒ − →

  • n2
slide-83
SLIDE 83

22

Generalizing ownership

All forms of ownership have common properties:

◮ Ownership of different threads can be composed

For example: γ

π1+π2

֒ − − − →

  • (n1 + n2)

⇔ γ

π1

֒ − →

  • n1 ∗ γ

π2

֒ − →

  • n2

◮ Composition of ownership is associative and commutative

Mirroring that parallel composition and separating conjunction is associative and commutative

slide-84
SLIDE 84

22

Generalizing ownership

All forms of ownership have common properties:

◮ Ownership of different threads can be composed

For example: γ ֒→^{π1+π2} (n1 + n2) ⇔ γ ֒→^{π1} n1 ∗ γ ֒→^{π2} n2

◮ Composition of ownership is associative and commutative

Mirroring that parallel composition and separating conjunction are associative and commutative

◮ Combinations of ownership that do not make sense are ruled out

For example: γ ֒→ 5 ∗ γ ֒→^{1/2} 3 ∗ γ ֒→^{1/2} 4 ⇒ False (because 5 ≠ 3 + 4)

slide-85
SLIDE 85

23

Resource algebras

Resource algebra with carrier M:

◮ Composition (·) : M → M → M
◮ Validity predicate V ⊆ M

Satisfying:

a · b = b · a
a · (b · c) = (a · b) · c
(a · b) ∈ V ⇒ a ∈ V

slide-86
SLIDE 86

23

Resource algebras

Resource algebra with carrier M:

◮ Composition (·) : M → M → M
◮ Validity predicate V ⊆ M

Satisfying:

a · b = b · a
a · (b · c) = (a · b) · c
(a · b) ∈ V ⇒ a ∈ V

Iris has ghost assertions ⌈a⌉^γ (ownership of the ghost resource a : M at ghost name γ) for each resource algebra M:

a ∈ V  implies  True ≡−∗ ∃γ. ⌈a⌉^γ
⌈a⌉^γ ∗ ⌈b⌉^γ ⇔ ⌈a · b⌉^γ
⌈a⌉^γ ⇒ V(a)
(∀af. a · af ∈ V ⇒ b · af ∈ V)  implies  ⌈a⌉^γ ≡−∗ ⌈b⌉^γ
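The three resource-algebra laws can be checked concretely on a tiny instance. Below is a sketch (illustrative names, not Iris's code) of the agreement construction: composing equal elements yields the element back, composing different elements yields an invalid element, here modelled by `None` playing the role of ⊥.

```python
# Agreement resource algebra: x · x = x, and x · y = ⊥ when x ≠ y.
def compose(a, b):
    return a if a == b else None  # None plays the role of ⊥

def valid(a):
    return a is not None

# Check the three RA laws on a few sample elements.
elems = [1, 2, None]
for a in elems:
    for b in elems:
        assert compose(a, b) == compose(b, a)              # commutativity
        if valid(compose(a, b)):
            assert valid(a)                                # validity closure
        for c in elems:
            assert compose(a, compose(b, c)) == compose(compose(a, b), c)
```

Exhaustively checking the laws on sample elements is of course weaker than the Coq proofs Iris requires, but it shows what the laws demand of an instance.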

slide-87
SLIDE 87

24

Ghost variables revisited

Resource algebra for ghost variables:

M ∋ a ::= • n | ◦ n | •◦ n | ⊥

V ≜ { a ∈ M | a ≠ ⊥ }

• n · ◦ n′ = ◦ n′ · • n ≜ •◦ n  if n = n′, and ⊥ otherwise
(all other combinations are ⊥)

And define:

γ ֒→• n ≜ ⌈• n⌉^γ        γ ֒→◦ n ≜ ⌈◦ n⌉^γ
slide-88
SLIDE 88

24

Ghost variables revisited

Resource algebra for ghost variables:

M ∋ a ::= • n | ◦ n | •◦ n | ⊥

V ≜ { a ∈ M | a ≠ ⊥ }

• n · ◦ n′ = ◦ n′ · • n ≜ •◦ n  if n = n′, and ⊥ otherwise
(all other combinations are ⊥)

And define:

γ ֒→• n ≜ ⌈• n⌉^γ        γ ֒→◦ n ≜ ⌈◦ n⌉^γ

The ghost variable rules follow directly from the general rules:

True ≡−∗ ∃γ. γ ֒→• n ∗ γ ֒→◦ n
slide-89
SLIDE 89

24

Ghost variables revisited

Resource algebra for ghost variables:

M ∋ a ::= • n | ◦ n | •◦ n | ⊥

V ≜ { a ∈ M | a ≠ ⊥ }

• n · ◦ n′ = ◦ n′ · • n ≜ •◦ n  if n = n′, and ⊥ otherwise
(all other combinations are ⊥)

And define:

γ ֒→• n ≜ ⌈• n⌉^γ        γ ֒→◦ n ≜ ⌈◦ n⌉^γ

The ghost variable rules follow directly from the general rules:

True ≡−∗ ∃γ. ⌈•◦ n⌉^γ    and    ⌈•◦ n⌉^γ ⇔ γ ֒→• n ∗ γ ֒→◦ n  (since •◦ n = • n · ◦ n)
slide-90
SLIDE 90

24

Ghost variables revisited

Resource algebra for ghost variables:

M ∋ a ::= • n | ◦ n | •◦ n | ⊥

V ≜ { a ∈ M | a ≠ ⊥ }

• n · ◦ n′ = ◦ n′ · • n ≜ •◦ n  if n = n′, and ⊥ otherwise
(all other combinations are ⊥)

And define:

γ ֒→• n ≜ ⌈• n⌉^γ        γ ֒→◦ n ≜ ⌈◦ n⌉^γ

The ghost variable rules follow directly from the general rules:

True ≡−∗ ∃γ. ⌈•◦ n⌉^γ    and    ⌈•◦ n⌉^γ ⇔ γ ֒→• n ∗ γ ֒→◦ n  (since •◦ n = • n · ◦ n)

γ ֒→• n ∗ γ ֒→◦ m ⇒ n = m
slide-91
SLIDE 91

24

Ghost variables revisited

Resource algebra for ghost variables:

M ∋ a ::= • n | ◦ n | •◦ n | ⊥

V ≜ { a ∈ M | a ≠ ⊥ }

• n · ◦ n′ = ◦ n′ · • n ≜ •◦ n  if n = n′, and ⊥ otherwise
(all other combinations are ⊥)

And define:

γ ֒→• n ≜ ⌈• n⌉^γ        γ ֒→◦ n ≜ ⌈◦ n⌉^γ

The ghost variable rules follow directly from the general rules:

True ≡−∗ ∃γ. ⌈•◦ n⌉^γ    and    ⌈•◦ n⌉^γ ⇔ γ ֒→• n ∗ γ ֒→◦ n  (since •◦ n = • n · ◦ n)

γ ֒→• n ∗ γ ֒→◦ m ⇒ (• n · ◦ m) ∈ V ⇒ n = m
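This authoritative/fragment resource algebra is small enough to model executably. The sketch below uses assumed, illustrative encodings (`auth`, `frag`, `both`, `BOT`) rather than Iris's actual definitions: the only defined compositions are • n · ◦ n′ (in either order), which are valid exactly when the two values agree.

```python
# The ghost-variable RA: • n (authoritative), ◦ n (fragment),
# •◦ n (their valid combination), and ⊥ (invalid).
def auth(n): return ('auth', n)
def frag(n): return ('frag', n)
def both(n): return ('both', n)
BOT = ('bot', None)

def compose(a, b):
    if a[0] == 'auth' and b[0] == 'frag' and a[1] == b[1]:
        return both(a[1])
    if a[0] == 'frag' and b[0] == 'auth' and a[1] == b[1]:
        return both(a[1])
    return BOT  # all other combinations are ⊥

def valid(a):
    return a != BOT

# • n · ◦ n is valid, so the two halves can coexist...
assert compose(auth(3), frag(3)) == both(3)
# ...and agreement holds: • n · ◦ m with n ≠ m is invalid.
assert not valid(compose(auth(3), frag(4)))
# Two authoritative halves can never coexist.
assert not valid(compose(auth(3), auth(3)))
```

The last assertion is what makes γ ֒→• n exclusive: only one thread (or one invariant) can hold the authoritative half.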
slide-92
SLIDE 92

25

Updating resources

Resources can be updated using frame-preserving updates:

(∀af. a · af ∈ V ⇒ b · af ∈ V)  implies  ⌈a⌉^γ ≡−∗ ⌈b⌉^γ

Key idea: a resource can be updated if the update does not invalidate the resources of concurrently running threads:

Thread 1    Thread 2    . . .    Thread n
if a1 · a2 · . . . · an ∈ V then b1 · a2 · . . . · an ∈ V

slide-93
SLIDE 93

25

Updating resources

Resources can be updated using frame-preserving updates:

(∀af. a · af ∈ V ⇒ b · af ∈ V)  implies  ⌈a⌉^γ ≡−∗ ⌈b⌉^γ

Key idea: a resource can be updated if the update does not invalidate the resources of concurrently running threads:

Thread 1    Thread 2    . . .    Thread n
if a1 · a2 · . . . · an ∈ V then b1 · a2 · . . . · an ∈ V

The rule γ ֒→• n ∗ γ ֒→◦ m ≡−∗ γ ֒→• n′ ∗ γ ֒→◦ n′ follows directly
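Frame-preservation can be checked by brute force on the ghost-variable RA of the previous slides. This sketch (illustrative names, finite sample universe of frames) shows why updating both halves at once is allowed, while updating only the authoritative half is not:

```python
# Ghost-variable RA from before, with a brute-force frame check.
def compose(a, b):
    if {a[0], b[0]} == {'auth', 'frag'} and a[1] == b[1]:
        return ('both', a[1])
    return ('bot', None)

def valid(a):
    return a[0] != 'bot'

def frame_preserving(a, b, frames):
    # b may replace a if b is valid (the "no frame" case) and every
    # frame compatible with a stays compatible with b.
    return valid(b) and all(valid(compose(b, f))
                            for f in frames if valid(compose(a, f)))

universe = [('auth', n) for n in range(5)] + \
           [('frag', n) for n in range(5)] + \
           [('both', n) for n in range(5)] + [('bot', None)]

# Owning both halves of 3, we may update to both halves of 7:
# no concurrently running thread can hold a compatible frame.
assert frame_preserving(('both', 3), ('both', 7), universe)

# Updating only the authoritative half is NOT frame-preserving:
# a thread holding the fragment ◦ 3 would be invalidated.
assert not frame_preserving(('auth', 3), ('auth', 7), universe)
```

This is exactly the slide's key idea in miniature: the quantification over frames af stands in for whatever resources the other threads hold.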
slide-94
SLIDE 94

26

In the papers

◮ The full definition of a resource algebra (RA)
◮ Combinators (fractions, products, finite maps, agreement, etc.) to modularly build many RAs
◮ Encoding of state transition systems as RAs
◮ Encoding of ⌈a⌉^γ in terms of something even simpler
◮ Higher-order ghost state: RAs that circularly depend on iProp, the type of propositions

[POPL artifact evaluation badge]

[First page of: Iris: Monoids and Invariants as an Orthogonal Basis for Concurrent Reasoning, by Ralf Jung, David Swasey, Filip Sieczkowski, Kasper Svendsen, Aaron Turon, Lars Birkedal, and Derek Dreyer (POPL'15)]

[First page of: Higher-Order Ghost State, by Ralf Jung, Robbert Krebbers, Lars Birkedal, and Derek Dreyer (ICFP'16)]
slide-95
SLIDE 95

27

Part #3: encoding Hoare triples

[Robbert Krebbers, Ralf Jung, Aleš Bizjak, Jacques-Henri Jourdan, Derek Dreyer, and Lars Birkedal. The Essence of Higher-Order Concurrent Separation Logic. In ESOP'17]

slide-96
SLIDE 96

28

Encoding Hoare triples

Step 1: define Hoare triples in terms of weakest preconditions:

{P} e {w. Q} ≜ P −∗ wp e {w. Q}

where wp e {w. Q} gives the weakest precondition under which:

◮ all executions of e are safe
◮ all return values v of e satisfy the postcondition Q[v/w]

slide-97
SLIDE 97

28

Encoding Hoare triples

Step 1: define Hoare triples in terms of weakest preconditions:

{P} e {w. Q} ≜ P −∗ wp e {w. Q}

where wp e {w. Q} gives the weakest precondition under which:

◮ all executions of e are safe
◮ all return values v of e satisfy the postcondition Q[v/w]

Step 2: define the weakest precondition as a guarded fixpoint:

wp e {w. Q} ≜
  Q[e/w]                                                             if e ∈ Val
  ∀σ. red(e, σ) ∧ ⊲ (∀e2, σ2. (e, σ) → (e2, σ2, ε) −∗ wp e2 {w. Q})  if e ∉ Val

Recursive occurrence guarded by a later ⊲
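Ignoring the step-indexing (the later ⊲), the two wp clauses can be read as a recursive safety check over a small-step semantics. The following sketch uses an assumed toy language (integers and a division operator that is stuck on division by zero), not Iris's HeapLang:

```python
# wp(e, Q) holds iff every execution of e is safe (never gets stuck)
# and every resulting value satisfies Q.
def is_val(e):
    return isinstance(e, int)

def steps(e):
    # All single reduction steps of e (empty list = no step possible).
    if is_val(e):
        return []
    _, a, b = e                      # e = ('div', a, b)
    if not is_val(a):
        return [('div', a2, b) for a2 in steps(a)]
    if not is_val(b):
        return [('div', a, b2) for b2 in steps(b)]
    return [a // b] if b != 0 else []  # stuck when b == 0

def wp(e, Q):
    if is_val(e):
        return Q(e)                  # value case: Q[e/w]
    nexts = steps(e)
    # red(e): e must be reducible; then wp must hold of every successor.
    return bool(nexts) and all(wp(e2, Q) for e2 in nexts)

assert wp(('div', 6, 2), lambda v: v == 3)
assert not wp(('div', 6, 0), lambda v: True)   # unsafe: gets stuck
```

In the logic the recursion is justified by the guard ⊲ rather than by structural termination, and the universal quantification over successor states is demonic: all executions must be safe, just as in the check above.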

slide-98
SLIDE 98

29

Adding the points-to connective

How to connect the physical state σ to the points-to connective ℓ ↦ v?

wp e {w. Q} ≜
  Q[e/w]                                                             if e ∈ Val
  ∀σ. red(e, σ) ∧ ⊲ (∀e2, σ2. (e, σ) → (e2, σ2, ε) −∗ wp e2 {w. Q})  if e ∉ Val

ℓ ↦ v ≜ ???

slide-99
SLIDE 99

29

Adding the points-to connective

How to connect the physical state σ to the points-to connective ℓ ↦ v?

wp e {w. Q} ≜
  Q[e/w]                                                             if e ∈ Val
  ∀σ. red(e, σ) ∧ ⊲ (∀e2, σ2. (e, σ) → (e2, σ2, ε) −∗ wp e2 {w. Q})  if e ∉ Val

ℓ ↦ v ≜ ???

Solution: ghost variables

slide-100
SLIDE 100

29

Adding the points-to connective

How to connect the physical state σ to the points-to connective ℓ ↦ v?

wp e {w. Q} ≜
  |⇛ Q[e/w]                                                          if e ∈ Val
  ∀σ. ⌈• σ⌉^γ −∗ |⇛ (red(e, σ) ∧
    ⊲ (∀e2, σ2. (e, σ) → (e2, σ2, ε) −∗ |⇛ (⌈• σ2⌉^γ ∗ wp e2 {w. Q})))   if e ∉ Val

ℓ ↦ v ≜ ⌈◦ [ℓ := v]⌉^γ

Solution: ghost variables. Using an appropriate resource algebra we can obtain:

⌈• σ⌉^γ ∗ ⌈◦ [ℓ := w]⌉^γ ⇒ σ(ℓ) = w
⌈• σ⌉^γ ∗ ⌈◦ [ℓ := v]⌉^γ ≡−∗ ⌈• σ[ℓ := w]⌉^γ ∗ ⌈◦ [ℓ := w]⌉^γ
⌈• σ⌉^γ ≡−∗ ⌈• σ[ℓ := w]⌉^γ ∗ ⌈◦ [ℓ := w]⌉^γ     if ℓ ∉ dom(σ)
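The heap rules above generalize the ghost-variable RA from a single value to a finite map: the authoritative element carries the whole heap σ, fragments carry individual points-to facts. A sketch with assumed, illustrative names (not the Iris heap camera):

```python
# Authoritative heap vs. points-to fragments: composing them is valid
# exactly when every fragment entry agrees with the full heap.
def agrees(sigma, frag):
    return all(l in sigma and sigma[l] == v for l, v in frag.items())

sigma = {'x': 1, 'y': 2}

# First rule: valid composition forces sigma(l) = w.
assert agrees(sigma, {'x': 1})
assert not agrees(sigma, {'x': 5})

# Second rule: updating l in the authoritative heap and in the
# fragment simultaneously preserves agreement.
def update(sigma, frag, l, w):
    sigma2 = dict(sigma); sigma2[l] = w
    frag2 = dict(frag); frag2[l] = w
    return sigma2, frag2

s2, f2 = update(sigma, {'x': 1}, 'x', 5)
assert agrees(s2, f2)
```

Updating only one side would break agreement, which is why the second and third rules change the authoritative heap and hand back the matching fragment in one frame-preserving update.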

slide-101
SLIDE 101

30

The update modality

The update modality |⇛ internalizes frame-preserving updates:

(∀af. a · af ∈ V ⇒ b · af ∈ V)  implies  ⌈a⌉^γ −∗ |⇛ ⌈b⌉^γ

(We often write P ≡−∗ Q for P −∗ |⇛ Q)

It has the following set of primitive rules:

(P −∗ Q) ⊢ (|⇛ P −∗ |⇛ Q)
P ⊢ |⇛ P
|⇛ |⇛ P ⊢ |⇛ P
Q ∗ |⇛ P ⊢ |⇛ (Q ∗ P)

slide-102
SLIDE 102

30

The update modality

The update modality |⇛ internalizes frame-preserving updates:

(∀af. a · af ∈ V ⇒ b · af ∈ V)  implies  ⌈a⌉^γ −∗ |⇛ ⌈b⌉^γ

(We often write P ≡−∗ Q for P −∗ |⇛ Q)

It has the following set of primitive rules:

(P −∗ Q) ⊢ (|⇛ P −∗ |⇛ Q)
P ⊢ |⇛ P
|⇛ |⇛ P ⊢ |⇛ P
Q ∗ |⇛ P ⊢ |⇛ (Q ∗ P)

From the definition of weakest preconditions we can derive:

|⇛ wp e {w. |⇛ Q} ⊢ wp e {w. Q}

That lets us update resources around weakest preconditions

slide-103
SLIDE 103

31

Adding fork

wp e {w. Q} ≜
  |⇛ Q[e/w]                                                          if e ∈ Val
  ∀σ. ⌈• σ⌉^γ −∗ |⇛ (red(e, σ) ∧
    ⊲ (∀e2, σ2, ef. (e, σ) → (e2, σ2, ef) −∗ |⇛ (⌈• σ2⌉^γ ∗ wp e2 {w. Q} ∗ ∗_{e′ ∈ ef} wp e′ {w. True})))   if e ∉ Val

ℓ ↦ v ≜ ⌈◦ [ℓ := v]⌉^γ

Key point: separating conjunction ensures that resources are subdivided between threads without explicit disjointness

slide-104
SLIDE 104

32

In the paper

◮ Encoding of invariants ⌈P⌉^N (an invariant P with namespace N) using higher-order ghost state
◮ All about the modalities □, ⊲ and |⇛
◮ Adequacy of weakest preconditions
◮ Paradox showing that ⊲ is 'needed' for impredicative invariants

[First page of: The Essence of Higher-Order Concurrent Separation Logic, by Robbert Krebbers (Delft University of Technology), Ralf Jung (MPI-SWS), Aleš Bizjak (Aarhus University), Jacques-Henri Jourdan (MPI-SWS), Derek Dreyer (MPI-SWS), and Lars Birkedal (Aarhus University)]

slide-105
SLIDE 105

33

Part #4: Iris Proof Mode (IPM) in Coq

[Robbert Krebbers, Amin Timany, and Lars Birkedal. Interactive proofs in higher-order concurrent separation logic. In POPL’17]

slide-106
SLIDE 106

34

IPM: Iris proof mode

◮ Coq with (spatial and non-spatial) named proof contexts for Iris
◮ Tactics for introduction and elimination of all Iris connectives
◮ Entirely implemented using reflection, type classes and Ltac

slide-107
SLIDE 107

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  ============================ (1/1)
  P ∗ (∃ a : A, Ψ a) ∗ R −∗ ∃ a : A, Ψ a ∗ P

slide-108
SLIDE 108

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]".

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  ============================ (1/1)
  "HP" : P
  "HΨ" : ∃ a : A, Ψ a
  "HR" : R
  −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−∗
  ∃ a : A, Ψ a ∗ P

slide-109
SLIDE 109

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]". iDestruct "HΨ" as (x) "HΨ".

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  x : A
  ============================ (1/1)
  "HP" : P
  "HΨ" : Ψ x
  "HR" : R
  −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−∗
  ∃ a : A, Ψ a ∗ P

slide-110
SLIDE 110

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]". iDestruct "HΨ" as (x) "HΨ". iExists x.

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  x : A
  ============================ (1/1)
  "HP" : P
  "HΨ" : Ψ x
  "HR" : R
  −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−∗
  Ψ x ∗ P

slide-111
SLIDE 111

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]". iDestruct "HΨ" as (x) "HΨ". iExists x.
  iFrame "HΨ".

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  x : A
  ============================ (1/1)
  "HP" : P
  "HR" : R
  −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−∗
  P

slide-112
SLIDE 112

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]". iDestruct "HΨ" as (x) "HΨ". iExists x.
  iFrame "HΨ". done.

No more subgoals.

slide-113
SLIDE 113

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]". iDestruct "HΨ" as (x) "HΨ". iExists x.
  iFrame "HΨ". done.
Qed.

slide-114
SLIDE 114

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]".

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  ============================ (1/1)
  "HP" : P
  "HΨ" : ∃ a : A, Ψ a
  "HR" : R
  −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−∗
  ∃ a : A, Ψ a ∗ P

Logical notations overridden in scope for Iris

slide-115
SLIDE 115

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]".

1 subgoal

  M : ucmraT
  A : Type
  P, R : iProp
  Ψ : A → iProp
  ============================ (1/1)
  "HP" : P
  "HΨ" : ∃ a : A, Ψ a
  "HR" : R
  −−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−∗
  ∃ a : A, Ψ a ∗ P

Notation for deeply embedded context

slide-116
SLIDE 116

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]". Unset Printing Notations.

1 subgoal

  M : ucmraT
  A : Type@{Top.105}
  P, R : uPred M
  Ψ : forall _ : A, uPred M
  ============================ (1/1)
  @uPred_entails M
    (@coq_tactics.of_envs M
      (@coq_tactics.Envs M (@Enil (uPred M))
        (@Esnoc (uPred M)
          (@Esnoc (uPred M)
            (@Esnoc (uPred M) (@Enil (uPred M))
              (String (Ascii false false false true false false true false)
                (String (Ascii false false false false true false true false) EmptyString)) P)
            (String (Ascii false false false true false false true false)
              (String (Ascii false true true true false false true true)
                (String (Ascii false false false true false true false true) EmptyString)))

slide-117
SLIDE 117

35

IPM demo

Lemma and_exist_sep {A} P R (Ψ : A → iProp) :
  P ∗ (∃ a, Ψ a) ∗ R −∗ ∃ a, Ψ a ∗ P.
Proof.
  iIntros "[HP [HΨ HR]]".

Introduction patterns represented as strings

slide-118
SLIDE 118

36

The setup of IPM in a nutshell

◮ Deep embedding of contexts as association lists:

  Record envs := Envs {
    env_persistent : env iProp;
    env_spatial : env iProp
  }.

  Coercion of_envs (∆ : envs) : iProp :=
    (■ envs_wf ∆ ∗ □ [∗] env_persistent ∆ ∗ [∗] env_spatial ∆)%I.

slide-119
SLIDE 119

36

The setup of IPM in a nutshell

◮ Deep embedding of contexts as association lists:

  Record envs := Envs {
    env_persistent : env iProp;
    env_spatial : env iProp
  }.

  Coercion of_envs (∆ : envs) : iProp :=
    (■ envs_wf ∆ ∗ □ [∗] env_persistent ∆ ∗ [∗] env_spatial ∆)%I.

Propositions that enjoy P ⇔ P ∗ P

slide-120
SLIDE 120

36

The setup of IPM in a nutshell

◮ Deep embedding of contexts as association lists:

  Record envs := Envs {
    env_persistent : env iProp;
    env_spatial : env iProp
  }.

  Coercion of_envs (∆ : envs) : iProp :=
    (■ envs_wf ∆ ∗ □ [∗] env_persistent ∆ ∗ [∗] env_spatial ∆)%I.

Propositions that enjoy P ⇔ P ∗ P

◮ Tactics implemented by reflection:

  Lemma tac_sep_split ∆ ∆1 ∆2 lr js Q1 Q2 :
    envs_split lr js ∆ = Some (∆1, ∆2) →
    (∆1 ⊢ Q1) → (∆2 ⊢ Q2) → ∆ ⊢ Q1 ∗ Q2.

slide-121
SLIDE 121

36

The setup of IPM in a nutshell

◮ Deep embedding of contexts as association lists:

  Record envs := Envs {
    env_persistent : env iProp;
    env_spatial : env iProp
  }.

  Coercion of_envs (∆ : envs) : iProp :=
    (■ envs_wf ∆ ∗ □ [∗] env_persistent ∆ ∗ [∗] env_spatial ∆)%I.

Propositions that enjoy P ⇔ P ∗ P

◮ Tactics implemented by reflection:

  Lemma tac_sep_split ∆ ∆1 ∆2 lr js Q1 Q2 :
    envs_split lr js ∆ = Some (∆1, ∆2) →
    (∆1 ⊢ Q1) → (∆2 ⊢ Q2) → ∆ ⊢ Q1 ∗ Q2.

Context splitting implemented in Gallina

slide-122
SLIDE 122

36

The setup of IPM in a nutshell

◮ Deep embedding of contexts as association lists:

  Record envs := Envs {
    env_persistent : env iProp;
    env_spatial : env iProp
  }.

  Coercion of_envs (∆ : envs) : iProp :=
    (■ envs_wf ∆ ∗ □ [∗] env_persistent ∆ ∗ [∗] env_spatial ∆)%I.

Propositions that enjoy P ⇔ P ∗ P

◮ Tactics implemented by reflection:

  Lemma tac_sep_split ∆ ∆1 ∆2 lr js Q1 Q2 :
    envs_split lr js ∆ = Some (∆1, ∆2) →
    (∆1 ⊢ Q1) → (∆2 ⊢ Q2) → ∆ ⊢ Q1 ∗ Q2.

Context splitting implemented in Gallina

◮ Ltac wrappers around reflective tactics:

  Tactic Notation "iSplitL" constr(Hs) :=
    let Hs := words Hs in
    eapply tac_sep_split with false Hs;
      [env_cbv; reflexivity | (* goal 1 *) | (* goal 2 *)].
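The heart of this setup is the context-splitting function computed in Gallina (the premise of tac_sep_split). Its behavior can be sketched in a few lines with assumed, simplified names; the real envs_split also takes the lr flag and preserves hypothesis order:

```python
# Simplified model of envs_split: given named spatial hypotheses and
# the names the user wants on the left of a ∗, produce the two
# sub-contexts, or None if a name does not exist (a user typo).
def envs_split(names, ctx):
    if not set(names) <= set(ctx):
        return None  # unknown hypothesis name
    left, right = {}, {}
    for h, p in ctx.items():
        (left if h in names else right)[h] = p
    return left, right

ctx = {'HP': 'P', 'HQ': 'Q', 'HR': 'R'}
assert envs_split(['HP'], ctx) == ({'HP': 'P'}, {'HQ': 'Q', 'HR': 'R'})
assert envs_split(['Hoops'], ctx) is None
```

Because the split is a computation, the side condition envs_split lr js ∆ = Some (∆1, ∆2) is discharged by reflexivity after evaluation, which is exactly what the env_cbv; reflexivity branch of iSplitL does.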

slide-123
SLIDE 123

37

In the paper

◮ Framing, later stripping, . . . using type classes
◮ Modular IPM tactics using type classes
◮ Tactics for symbolic execution
◮ Verification of concurrent algorithms using IPM
◮ Formalization of unary and binary logical relations
◮ Proving logical refinements

[POPL artifact evaluation badge]

Interactive Proofs in Higher-Order Concurrent Separation Logic

Robbert Krebbers ∗

Delft University of Technology, The Netherlands mail@robbertkrebbers.nl

Amin Timany

imec-Distrinet, KU Leuven, Belgium amin.timany@cs.kuleuven.be

Lars Birkedal

Aarhus University, Denmark birkedal@cs.au.dk

Abstract

When using a proof assistant to reason in an embedded logic – like separation logic – one cannot benefit from the proof contexts and basic tactics of the proof assistant. This results in proofs that are at a too low level of abstraction because they are cluttered with bookkeeping code related to manipulating the object logic. In this paper, we introduce a so-called proof mode that extends [...]

slide-124
SLIDE 124

38

Thank you!

Download Iris at http://iris-project.org/

Talks about Iris this week:

◮ Wed 15:10 @ POPL: Krebbers, Timany and Birkedal

Interactive Proofs in Higher-Order Concurrent Separation Logic

◮ Wed 15:35 @ POPL: Krogh-Jespersen, Svendsen and Birkedal

A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic

◮ Sat 9:00 @ CoqPL: Krebbers

Demo and implementation of Iris in Coq

◮ Sat 10:30 @ CoqPL: Timany, Krebbers and Birkedal

Logical Relations in Iris