slide-1
SLIDE 1

Concurrency Theory vs Concurrent Languages

Silvia Crafa

Università di Padova. Bertinoro, OPCT 2014

slide-2
SLIDE 2

Concurrency Theory vs Concurrent Languages

Silvia Crafa

Università di Padova. Bertinoro, OPCT 2014

Bisimulation inside

slide-9
SLIDE 9

The Quest for good Abstractions

✤ When was a language invented VS when did it become popular? ✤ Why was it invented VS why did it become popular?

Fortran Lisp Cobol Pascal C

ML

C++ Haskell

Java JavaScript Ruby Python X10

Scala Go C#

PHP

Add structure to the code OOP to handle industrial complex software systems

Encapsulation/Modularity Interfaces/Code Reuse

INTERNET

less Efficiency, more Portability, Security (Types), GUIs and IDEs

CONCURRENCY

Productivity; Types are burdensome

slide-12
SLIDE 12

The Quest for good Abstractions

✤ Changes need a catalyser Fortran Lisp Cobol Pascal C

ML

C++ Haskell

Java JavaScript Ruby Python X10

Scala Go C#

PHP

Popular Parallel Programming Grand Challenge

CONCURRENCY INTERNET

✤ new hardware can only be parallel ✤ new software must be concurrent

slide-16
SLIDE 16

How hard is Concurrent Programming?

✤ (correct) concurrent programming is difficult ✤ Adding concurrency to sequential code is even harder

Intrinsic reasons

nondeterminism

Accidental reasons

improper programming model

High-level Concurrency Abstraction

Think concurrently (Concurrent Algorithm) Translate into a concurrent code

DESIGN of concurrent language

slide-21
SLIDE 21

✤ OOP

encapsulation

memory management

multiple inheritance

Expressiveness Performance Easy to think Easy to reason about

C++ —> Java —> Scala

✤ Types

documentation vs verbosity

C++ —> Java —> Ruby —>Scala

✤ Functional Programming

composing and passing behaviours

sometimes imperative style is easier to reason about

C#—> Scala C++11, Java8

The Quest for good Abstractions

slide-22
SLIDE 22

✤ OOP

Expressiveness Performance Easy to think Easy to reason about

✤ Types ✤ Functional Programming

The Quest for good Abstractions

which abstractions interoperate productively?

slide-27
SLIDE 27

Concurrency Abstractions?

Many Concurrency Models…

✤ Shared Memory Model and “Java Threads”

new Thread().start(): a JVM thread

Lightweight threads in the program; pool of executors in the runtime

synchronized(lock) lock.wait() lock.notify() atomic {…} when(cond){…} async{} finish{}

Java STM X10

The Quest for good Abstractions

logical threads distinguished from executors

Scalability!

(activities/tasks) (pool of thread workers)
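The decoupling of logical activities from executor threads can be sketched in plain Java (a minimal sketch; the pool size of 4 and the 100 tasks are illustrative choices, not from the talk):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TasksVsWorkers {
    public static void main(String[] args) throws InterruptedException {
        // 100 logical tasks are served by a fixed pool of 4 worker threads:
        // the number of activities scales independently of the executors.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            workers.submit(completed::incrementAndGet); // lightweight logical task
        }
        workers.shutdown();                             // accept no new tasks
        workers.awaitTermination(10, TimeUnit.SECONDS); // wait for all tasks
        System.out.println(completed.get());            // prints 100
    }
}
```

Creating 100 JVM threads directly would work too, but the task/worker split is what makes the model scale.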

slide-28
SLIDE 28

Many Concurrency Models

✤ GPU Concurrency Model

Massive data parallelism

integration with high-level concurrent language (X10, Nova, Scala heterogeneous compiler)

✤ Shared Memory

is very natural for “centralised algorithms” and components operating on shared data

is error-prone when the sole purpose of SM is thread communication

✤ Message Passing Model

It is the message that carries the state!

Channel-based: Google’s Go

Actor Model: Erlang, Scala. It fits both OOP and FP well

Sessions
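The channel-based style can be approximated in Java with a BlockingQueue standing in for a Go-style channel (a sketch of the idea only; the capacity 1 and the value 41 are arbitrary illustrations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChannelStyle {
    public static void main(String[] args) throws InterruptedException {
        // A bounded BlockingQueue plays the role of a channel: the message
        // itself carries the state, so no memory is shared between threads.
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1);
        Thread producer = new Thread(() -> {
            try {
                channel.put(41); // send: blocks if the channel is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int received = channel.take(); // receive: blocks until a message arrives
        producer.join();
        System.out.println(received + 1); // prints 42
    }
}
```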

slide-29
SLIDE 29

Many Concurrency Models

✤ GPU Concurrency Model ✤ Shared Memory ✤ Message Passing Model

which abstractions interoperate productively?

slide-32
SLIDE 32

The Quest for good Abstractions

Fortran Lisp Cobol Pascal C

ML

C++ Haskell

Java JavaScript Ruby Python X10

Scala Go C#

PHP

Reactive Programming

CONCURRENCY INTERNET

✤ multicore —> concurrent programming ✤ cloud computing —> distributed programming

DISTRIBUTION

✤ New catalyser:

slide-36
SLIDE 36

Reactive Programming

✤ react to events ✤ react to load ✤ react to failures

✤ futures ✤ push data to consumers when available rather than polling

instead of issuing a command that asks for a change, react to an event that indicates that something has changed

✤ event-driven ✤ asynchronous ✤ resiliency ✤ scalability

up/down +/- CPU nodes

in/out +/- server
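The push-rather-than-poll point can be sketched with a Java future (a minimal illustration, assuming CompletableFuture as the future implementation; the values are made up):

```java
import java.util.concurrent.CompletableFuture;

public class PushNotPoll {
    public static void main(String[] args) {
        // The producer pushes its result to the registered continuation
        // (thenApply) when it becomes available; the consumer never polls.
        CompletableFuture<Integer> answer = CompletableFuture
            .supplyAsync(() -> 40)   // asynchronous producer
            .thenApply(n -> n + 2);  // reaction, run when the value arrives
        System.out.println(answer.join()); // prints 42
    }
}
```

The `join()` at the end only keeps the sketch deterministic; in a fully reactive pipeline the continuation would itself push onward.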

slide-38
SLIDE 38

The Quest for good Abstractions

Fortran Lisp Cobol Pascal C

ML

C++ Haskell

Java JavaScript Ruby Python X10

Scala Go C#

PHP

CONCURRENCY INTERNET

✤ multicore —> concurrent programming ✤ cloud computing —> distributed programming

DISTRIBUTION

✤ New catalyser:

✤ big data applications —> High Performance Computing

BIG DATA

slide-41
SLIDE 41

✤ Big Data Application Frameworks ✤ Map-Reduce Model ✤ Bulk Synchronous Parallel Model

High Performance Computing

✤ scale-out on massively parallel hardware

high-performance computing on supercomputers

analytic computations on big data

✤ a single program

runs on a collection of places on a cluster of computers

can create global data-structures spanning multiple places

can spawn tasks at remote places, detecting termination of arbitrary trees of spawned tasks

“Concurrent Patterns” with their distinctive abstractions

slide-42
SLIDE 42

What about Theory ?

The X10 experience

slide-43
SLIDE 43

The X10 programming language

✤ open-source language for HPC programming

✤ key design features:

✤ scaling: code running on 100-10,000 multicore nodes (up to 50 million cores)

✤ productivity: high-level abstractions (Java-like, Scala-like) + typing (constrained dependent types as contracts)

✤ performance on heterogeneous hardware: it compiles to Java, to C++, to CUDA. Resilient extension

✤ concurrent abstractions: place-centric, asynchronous computing

slide-45
SLIDE 45

The X10 programming language

// double in parallel all the array elements
val a:Array[Int] = …
finish for(i in 0..(a.size-1)) async { a(i) *= 2 }
println(“The End”)

async spawns an asynchronous lightweight activity running in parallel

finish waits for the termination of all the spawned activities

slide-46
SLIDE 46

The X10 programming language

// double in parallel all the array elements
val a:Array[Int] = …
var b = 0
finish for(i in 0..(a.size-1)) async {
  a(i) *= 2
  atomic { b = b + a(i) }
}
println(“The End”)

STM / when(cond) s / clocks
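A rough Java analog of the X10 fragment above, under the assumed correspondence submit ~ async, awaitTermination ~ finish, AtomicInteger.addAndGet ~ atomic (a sketch of the correspondence, not how the X10 runtime actually works):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FinishAsyncAtomic {
    public static void main(String[] args) throws InterruptedException {
        int[] a = {1, 2, 3, 4};
        AtomicInteger b = new AtomicInteger(0);   // stands in for atomic { b = b + a(i) }
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < a.length; i++) {
            final int idx = i;
            pool.submit(() -> {                   // one "async" per element
                a[idx] *= 2;
                b.addAndGet(a[idx]);              // atomic accumulation
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS); // "finish": wait for all asyncs
        System.out.println(b.get());              // prints 20 (= 2 + 4 + 6 + 8)
        System.out.println("The End");
    }
}
```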

slide-49
SLIDE 49

The X10 programming language

class HelloWholeWorld {
  public static def main(args:Rail[String]) {
    finish for (p in Place.places()) async at(p)
      Console.OUT.println(“Hello from place “ + p);
    Console.OUT.println(“Hello from everywhere”);
  }
}

% X10_NPLACES=4
Hello from place 1
Hello from place 2
Hello from place 0
Hello from place 3
Hello from everywhere

@CUDA

slide-50
SLIDE 50

Async Partitioned Global Address Space

✤ A global address space is divided into multiple places (=computing nodes) ✤ Each place can contain activities and objects ✤ An object belongs to a specific place, but can be remotely referenced ✤ DistArray is a data structure whose elements are scattered over multiple places

[Figure: a global address space split into Place 0 … Place MAX_PLACES-1; each place holds activities and objects; DistArrays span multiple places; immutable data (class, struct, function); async/at edges move computation between places; remote references point across places]

slide-52
SLIDE 52

Resilient X10: if a node fails….

✤ it is relatively easy to localize the impact of place death ✤ Objects in other places are still alive, but remote references become inaccessible ✤ Execution continues using the remaining nodes ✤ Happens Before Relation between remaining statements is preserved (HB Invariance): no new race conditions, or sequentialization, induced by failure


✤ finish / async / at / atomic / clock
✤ local/global references, place failures
✤ can be mixed in any way

SEMANTICS !!

slide-53
SLIDE 53

TX10

✤ object id / global object id
✤ exception error propagation and handling

Semantics of (Resilient) X10 [ECOOP 2014] S. Crafa, D. Cunningham, V. Saraswat, A. Shinnar, O. Tardieu

Values          v ::= o | o$p | E | DPE
Expressions     e ::= v | x | e.f | {f:e, . . . , f:e} | globalref e | valof e
Statements      s ::= skip; | throw v | val x = e; s | e.f = e; | {s t} | at(p) s | async s | finish s | try s catch t
                      | at(p) val x = e in s | finishµ s   (dynamic syntax)
Configurations  k ::= ⟨s, g⟩ | g
Local heap      h ::= ∅ | h · [o ↦ (f̃i : ṽi)]
Global heap     g ::= ∅ | g · [p ↦ h]

slide-57
SLIDE 57

Semantics of (Resilient) X10

Small-step transition system, mechanised in Coq

not in ChemicalAM style (it better fits the centralised view of the distributed program)

⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′   with λ ∈ {ε, E⊘ (sync exception), E⊗ (async exception)}

Async failures arise in parallel threads and are caught by the inner finish waiting for their termination (Proved in Coq):
finish {async throw E async s2}

Synch failures lead to the failure of any sync continuation, leaving async (remote) running code free to terminate (Proved in Coq):
{async at(p) s1 throw E s2}

slide-58
SLIDE 58

Semantics of (Resilient) X10

Small-step transition system, mechanised in Coq

not in ChemicalAM style (it better fits the centralised view of the distributed program)

⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′   with λ ∈ {ε, E⊘, E⊗}

Proved in Coq: absence of stuck states (the proof can be run, yielding an interpreter for TX10)

slide-59
SLIDE 59

Semantics of Resilient X10

smoothly scales to node failure, with

global heap is a partial map: dom(g) collects non-failed places

executing a statement at a failed place results in a DPE

place shift at a failed place results in a DPE

remote exceptions flow back to the remaining finish masked as DPE

(Place Failure)   p ∈ dom(g):   ⟨s, g⟩ ->p ⟨s, g \ {(p, g(p))}⟩

p ∉ dom(g):
⟨skip, g⟩ -DPE⊗->p g
⟨at(p) s, g⟩ -DPE⊗->q g
⟨async s, g⟩ -DPE⊗->p g

contextual rules modified accordingly

slide-62
SLIDE 62

Semantics of Resilient X10

✤ Happens Before Invariance ✤ failure of place q does not alter the happens-before relationship between statement instances at places other than q

at(0) { at(p) finish at(q) async s1 s2}: s2 runs at 0 after s1
at(0) finish { at(p){at(q) async s1} s2}: s2 runs at 0 in parallel with s1

p fails while s1 is running at q: same behaviour!

slide-63
SLIDE 63

Semantics of Resilient X10

✤ Happens Before Invariance ✤ failure of place q does not alter the happens-before relationship between statement instances at places other than q

at(0) { at(p) finish at(q) async s1 s2} throws v: it flows at place 0 while s2 is running
at(0) finish { at(p){at(q) async s1} s2} throws v: it flows at place 0 discarding s1 (masked as DPE⊗)

slide-67
SLIDE 67

Equational theory for (Resilient) X10

equivalent configurations ⟨s, g⟩ ≅ ⟨t, g⟩ when

✤ transition steps are weakly bi-simulated ✤ under any modification of the shared heap by concurrent activities (object field update, object creation, place failure)

⟨s, g⟩ R ⟨t, g⟩ whenever
1. ⊢isSync s iff ⊢isSync t
2. ∀p, ∀Φ environment move: if ⟨s, Φ(g)⟩ -λ->p ⟨s′, g′⟩ then ∃t′. ⟨t, Φ(g)⟩ =λ=>p ⟨t′, g′⟩ with ⟨s′, g′⟩ R ⟨t′, g′⟩, and vice versa

Bisimulation whose bisimilarity is a congruence

Φ models the update of g: dom(Φ(g)) = dom(g) and ∀p ∈ dom(g), dom(g(p)) ⊆ dom(Φ(g)(p))

slide-68
SLIDE 68

Equational theory for (Resilient) X10

{{s t} u} ≅ {s {t u}}
try {s t} catch u ≅ {try s catch u try t catch u}   (if ⊢isAsync s)
at(p){s t} ≅ {at(p) s at(p) t}
at(p) at(q) s ≅ at(q) s
async at(p) s ≅ at(p) async s
finish {s t} ≅ {finish s finish t}
finish {s async t} ≅ finish {s t}
finish at(p) s ≅ at(p) finish s   (R)

(R) in Resilient X10: if s throws a sync exception and home is failed, then the l.h.s. throws a masked DPE while the r.h.s. re-throws v, since sync exceptions are not masked by DPE

slide-69
SLIDE 69

Conclusions

✤ Concurrency is critical for Programming Languages ✤ heterogeneous concurrency models (Distribution)

✤ What is the right level of abstraction?

✤ What are good abstractions? Expressive, flexible, easy to reason about, easy to implement in a scalable/resilient way

✤ Formal methods to experiment!

✤ test new primitives, new mixes of primitives ✤ tools to reason about programs

slide-70
SLIDE 70

(Par Left)
λ = ε, v⊗:  ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨{s t}, g⟩ -λ->p ⟨{s′ t}, g′⟩ | ⟨t, g′⟩
λ = v⊘:    ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨{s t}, g⟩ -λ->p ⟨s′, g′⟩ | g′

(Par Right)
⊢isAsync t,  ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨{t s}, g⟩ -λ->p ⟨{t s′}, g′⟩ | ⟨t, g′⟩

(Place Shift)
(v′, g′) = copy(v, q, g)  implies  ⟨at(q) val x = v in s, g⟩ ->p ⟨at(q){s[v′/x] skip}, g′⟩

(At)
⟨s, g⟩ -λ->q ⟨s′, g′⟩ | g′  implies  ⟨at(q) s, g⟩ -λ->p ⟨at(q) s′, g′⟩ | g′

slide-71
SLIDE 71

(Spawn)
⟨async s, g⟩ ->p ⟨async s, g⟩

(Async)
λ = ε:       ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨async s, g⟩ -λ->p ⟨async s′, g′⟩ | g′
λ = v⊗, v⊘:  ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨async s, g⟩ -v⊗->p ⟨async s′, g′⟩ | g′

(Finish)
⟨s, g⟩ -λ->p ⟨s′, g′⟩  implies  ⟨finishµ s, g⟩ ->p ⟨finishµ∪λ s′, g′⟩

(End Finish)
⟨s, g⟩ -λ->p g′  implies  ⟨finishµ s, g⟩ -λ′->p g′   where λ′ = E⊘ if λ∪µ ≠ ∅, else ε

slide-72
SLIDE 72

(Exception)
⟨throw v, g⟩ -v⊘->p g

(Skip)
⟨skip, g⟩ ->p g

(Try)
λ = ε, v⊗:  ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨try s catch t, g⟩ -λ->p ⟨try s′ catch t, g′⟩ | g′
λ = v⊘:     ⟨s, g⟩ -λ->p ⟨s′, g′⟩ | g′  implies  ⟨try s catch t, g⟩ ->p ⟨{s′ t}, g′⟩ | ⟨t, g′⟩

Plus rules for expression evaluation