Logic and Computation
Lecture 2
Zena M. Ariola
University of Oregon
24th Estonian Winter School in Computer Science, EWSCS ’19
Curry-Howard isomorphism

A correspondence between minimal propositional logic and the simply typed lambda-calculus:

Types (→, ×, +) are Propositions (→, ∧, ∨)
Terms are Proofs
Computation is Elimination of detours
Extensionality is Expansion

A system is both a programming language and a logic (Coq, Agda, Idris)
Extend the isomorphism to more expressive systems:

Logic                              Type Theory
Second-order propositional logic   Polymorphism
Intuitionistic logic               λ-calculus + Abort
Classical logic                    λ-calculus + Jumps

Compilation ≈ logical embeddings
A ⊢ A (Ax), hence ⊢ A → A (→I)
A → B ⊢ A → B (Ax), hence ⊢ (A → B) → (A → B) (→I)
A ∧ B ⊢ A ∧ B (Ax), hence ⊢ (A ∧ B) → (A ∧ B) (→I)

How do we express the fact that they are the same proof?

X ⊢ X (Ax), ⊢ X → X (→I), ⊢ ∀X.X → X (∀I)

What about this proof?
X ⊢ X (Ax), X ⊢ ∀X.X (∀I)
and this one?
X ⊢ X (Ax), X ⊢ ∀X.X (∀I), X ⊢ B (∀E), ⊢ X → B (→I)

∀I: from Γ ⊢ A, where X does not occur free in Γ, conclude Γ ⊢ ∀X.A
∀E: from Γ ⊢ ∀X.A conclude Γ ⊢ A[B/X], provided no variable capture occurs

Exercises: ⊢ (∀X.X) → A    ⊢ A → ∀X.((A → X) → X)    ⊢ A → ∀X.X
Girard, following Howard's view that proofs are mathematical objects, introduced System F as a representation of proofs in second-order propositional logic.

A ∧ B = ∀X.(A → B → X) → X
A ∨ B = ∀X.(A → X) → (B → X) → X
⊥ = ∀X.X
nat = ∀X.X → (X → X) → X
bool = ∀X.X → X → X
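These impredicative encodings can be tried out directly with untyped lambdas. A small Python sketch of the encodings of bool, ∧ (pairs) and nat (the names true, pair, succ, to_int are mine, not from the lecture):

```python
# Church booleans: bool = ∀X. X → X → X
true  = lambda t: lambda f: t
false = lambda t: lambda f: f

# Church pairs: A ∧ B = ∀X. (A → B → X) → X
pair = lambda a: lambda b: (lambda p: p(a)(b))
fst  = lambda pr: pr(lambda a: lambda b: a)
snd  = lambda pr: pr(lambda a: lambda b: b)

# Church numerals: nat = ∀X. X → (X → X) → X
# a numeral takes a zero and a successor and iterates the successor
zero = lambda z: lambda s: z
succ = lambda n: lambda z: lambda s: s(n(z)(s))

def to_int(n):
    return n(0)(lambda k: k + 1)

two = succ(succ(zero))
print(to_int(two))        # 2
print(fst(pair(1)(2)))    # 1
print(true("yes")("no"))  # yes
```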
If Γ ⊢ M : A then there is no infinite reduction sequence starting from M (strong normalization).
The simple type system we have seen so far forces us to duplicate code:

sortI: int list -> (int -> int -> bool) -> int list
sortR: real list -> (real -> real -> bool) -> real list

Two ways out:

Weaken the type system by introducing a universal type:
void qsort(void* base, int num, int size, int (*comparator)(const void*, const void*));

Enrich the type system so that it can express that the function's behavior is uniform across different type instantiations.
Given the expressions M = (2+2)+(2+2) and N = (3+3)+(3+3), we are accustomed to abstract over the subexpressions 2+2 and 3+3, giving the function λx.x + x, so that

M = (λx.x + x) (2+2)
N = (λx.x + x) (3+3)

Given the types

τ = int list -> (int -> int -> bool) -> int list
σ = real list -> (real -> real -> bool) -> real list

why not abstract over the types int and real, giving the function type

forall α. α list -> (α -> α -> bool) -> α list

so that

τ = (forall α. α list -> (α -> α -> bool) -> α list) int
σ = (forall α. α list -> (α -> α -> bool) -> α list) real

The same idea used to avoid duplication of code applies to avoid replication at the type level.
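The abstraction step can be sketched in Python, with a type variable annotation standing in for forall α (the names sort and le are mine, and the insertion sort is only a stand-in algorithm):

```python
from typing import Callable, List, TypeVar

A = TypeVar("A")  # plays the role of the quantified type variable α

def sort(xs: List[A], le: Callable[[A, A], bool]) -> List[A]:
    """One sort for every instantiation of A (insertion sort as a stand-in)."""
    out: List[A] = []
    for x in xs:
        i = 0
        while i < len(out) and le(out[i], x):
            i += 1
        out.insert(i, x)
    return out

# The monomorphic sortI and sortR are recovered by instantiating A:
print(sort([3, 1, 2], lambda a, b: a <= b))        # A = int
print(sort([2.5, 0.5, 1.5], lambda a, b: a <= b))  # A = real
```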
Terms    M ::= λx : σ.M | M M | x | Λα.M | M [σ]
Types    σ ::= α | σ → σ | ∀α.σ

Type system:

Γ, x : σ ⊢ x : σ

Γ ⊢ M : σ → τ    Γ ⊢ N : σ
──────────────────────────
Γ ⊢ M N : τ

Γ, x : σ ⊢ M : τ
────────────────────
Γ ⊢ λx : σ.M : σ → τ

Γ ⊢ M : σ    α not free in Γ
────────────────────────────
Γ ⊢ Λα.M : ∀α.σ

Γ ⊢ M : ∀α.σ
─────────────────
Γ ⊢ M[τ] : σ[τ/α]

Reduction:
(λx.M) N → M[N/x]
(Λα.M) [σ] → M[σ/α]

Expansion:
λx.M x → M
Λα.M [α] → M
Recall: ⊢ (∀X.X) → A    ⊢ A → ∀X.((A → X) → X)    ⊢ A → ∀X.X

⊢ (∀α.α) → σ:
z : ∀α.α ⊢ z : ∀α.α (Ax)
z : ∀α.α ⊢ z[σ] : σ (∀E)
⊢ λz : (∀α.α). z[σ] : (∀α.α) → σ (→I)

⊢ σ → ∀α.((σ → α) → α):
z : σ, y : σ → α ⊢ y : σ → α (Ax)    z : σ, y : σ → α ⊢ z : σ (Ax)
z : σ, y : σ → α ⊢ y z : α (→E)
z : σ ⊢ λy : σ → α. y z : (σ → α) → α (→I)
z : σ ⊢ Λα.λy : σ → α. y z : ∀α.((σ → α) → α) (∀I)
⊢ λz : σ. Λα.λy : σ → α. y z : σ → ∀α.((σ → α) → α) (→I)
(Figure: Barendregt's lambda cube, with corners λ→, λ2, λω, λω, λP, λP2, λPω, λC.)
Intuitionistic Logic = Minimal Logic + Ex Falso Quodlibet

Formulae: A, B ::= X | A → B | A ∧ B | A ∨ B | ⊥        ¬A = A → ⊥

No introduction rule for ⊥.
One elimination rule for ⊥ (Ex Falso Quodlibet):

Γ ⊢ ⊥
────── EFQ
Γ ⊢ A

Local reduction: a derivation E of ⊥, followed by EFQ concluding A, followed by a derivation D of B, reduces to E followed by EFQ concluding B directly:

  E                  E
  ⊥                  ⊥
 ─── EFQ     ⟹     ─── EFQ
  A                  B
  D
  B
What are terms of type σ → ⊥?
They are special functions: they never return. We call these functions continuations.
One predefined continuation is the top-level, also called the prompt: tp.
Invoking the top-level means aborting the program.
M ::= x | λx.M | M M | Abort M

fun product nil = 1
  | product (x :: xs) = if x = 0 then Abort 0 else x * (product xs)
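Abort is not ML syntax; as a rough model, an exception that only the top level catches plays its role. A Python sketch (the names Abort and top_level are mine):

```python
class Abort(Exception):
    """Models the Abort operator: carries the value thrown to the top level."""
    def __init__(self, value):
        self.value = value

def product(xs):
    if not xs:
        return 1
    if xs[0] == 0:
        raise Abort(0)  # jump straight to the top level, skipping all pending *'s
    return xs[0] * product(xs[1:])

def top_level(thunk):
    """The predefined top-level continuation tp: catches any Abort."""
    try:
        return thunk()
    except Abort as a:
        return a.value

print(top_level(lambda: product([2, 3, 0, 9, 8])))  # 0
print(top_level(lambda: product([2, 3, 4])))        # 24
```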
What is the type of Abort 0?

Abort 0 + 2
(Abort 0) 9
if Abort 0 then ... else ...

It seems that Abort 0 can have any type. So we have:

Γ ⊢ M : ?
───────────────
Γ ⊢ Abort M : σ

What is the restriction on M? Whatever the top-level is expecting!

Γ, tp : ¬τ ⊢ M : τ
────────────────────────
Γ, tp : ¬τ ⊢ Abort M : σ

Example:
tp : ¬int ⊢ Abort 6 : σ
tp : ¬bool ⊢ Abort true : σ
tp : ¬bool ⊢ Abort 5 : σ   (ill-typed: 5 is not what the top-level expects)
Abort M should be read as "throw M to the top-level": throw tp M
Γ, tp : ¬τ ⊢ tp : ¬τ (Ax)    Γ, tp : ¬τ ⊢ M : τ
Γ, tp : ¬τ ⊢ tp M : ⊥ (→E)
Γ, tp : ¬τ ⊢ throw (tp M) : σ (EFQ)
The local reduction for EFQ corresponds to Abort propagation:

product([2,3,0,9,8]) = 2*product([3,0,9,8]) = 2*3*product([0,9,8])
                     = 2*3*Abort 0 = 2*Abort 0 = Abort 0
Call-by-name evaluation contexts and values:
E ::= □ | E M        V ::= M

Call-by-value evaluation contexts and values:
E ::= □ | E M | V E        V ::= x | λx.M

Reduction semantics:
(λx.M) V → M[V/x]
E[Abort M] → Abort M

Example: (λx.Abort 0)(Abort 1) evaluates to 1 in CBV and to 0 in CBN.
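The CBV/CBN difference can be simulated by choosing whether the argument is evaluated before the call or passed as an unevaluated thunk, again modelling Abort with an exception (all names are mine):

```python
class Abort(Exception):
    def __init__(self, value):
        self.value = value

def run(thunk):
    # Top level: the payload of an Abort becomes the whole program's result.
    try:
        return thunk()
    except Abort as a:
        return a.value

def abort(v):
    raise Abort(v)

# CBV: the argument Abort 1 is evaluated before the call, so 1 escapes first.
cbv = lambda: (lambda x: abort(0))(abort(1))

# CBN: the argument is passed unevaluated; the body aborts with 0
# without ever forcing it.
cbn = lambda: (lambda x_thunk: abort(0))(lambda: abort(1))

print(run(cbv))  # 1
print(run(cbn))  # 0
```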
Classical Logic is obtained by adding one of the following axioms to Intuitionistic Logic:

A ∨ ¬A               Tertium non datur - Law of Excluded Middle (EM)
¬¬A → A              Law of Double Negation (DN)
(¬A → ⊥) → A         Reductio ad absurdum - Proof by Contradiction (PBC)
(¬A → A) → A         Consequentia mirabilis - Weak Peirce Law (PL⊥)
((A → B) → A) → A    Peirce's Law (PL)
A  B  |  A∨¬A  |  ¬¬A→A  |  (¬A→⊥)→A  |  (¬A→A)→A  |  ((A→B)→A)→A
1  1  |   1    |    1    |     1      |     1      |      1
1  0  |   1    |    1    |     1      |     1      |      1
0  1  |   1    |    1    |     1      |     1      |      1
0  0  |   1    |    1    |     1      |     1      |      1

Each axiom is a propositional (truth-table) tautology.
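That the table contains only 1s can be checked mechanically. A short Python verification (the helper implies is mine):

```python
from itertools import product as cartesian

def implies(p, q):
    return (not p) or q

axioms = {
    "A ∨ ¬A":            lambda A, B: A or (not A),
    "¬¬A → A":           lambda A, B: implies(not (not A), A),
    "(¬A → ⊥) → A":      lambda A, B: implies(implies(not A, False), A),
    "(¬A → A) → A":      lambda A, B: implies(implies(not A, A), A),
    "((A → B) → A) → A": lambda A, B: implies(implies(implies(A, B), A), A),
}

for name, f in axioms.items():
    # every row of the truth table must be 1
    assert all(f(A, B) for A, B in cartesian([True, False], repeat=2))
    print(name, "is a tautology")
```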
Proving that something is true is the same as proving that it cannot be false: since ¬¬(A ∨ ¬A) is true, then A ∨ ¬A holds.

Intuitionistically, however, proving that A ∨ ¬A holds means providing evidence of either A or ¬A.

David Hilbert's words from 1927: Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists. To prohibit the principle of excluded middle is tantamount to relinquishing the science of mathematics altogether.

⊢ ∃x.(Drink(x) → ∀x.Drink(x))
Weak Peirce Law ((¬A → A) → A) and Excluded Middle (A ∨ ¬A) are equivalent.
Double Negation (¬¬A → A) implies Peirce's Law (((A → B) → A) → A) but not conversely.
Double Negation, Excluded Middle + EFQ, Weak Peirce Law + EFQ, and Peirce's Law + EFQ are all equivalent.

Intuitionistic Logic = Minimal Logic + EFQ
Minimal Classical Logic = Minimal Logic + Peirce's Law
Classical Logic = Minimal Logic + Peirce's Law + EFQ
Given a program M and a subexpression e of M, the continuation of e is what remains to be done after the execution of e has delivered a value. The continuation can be seen as a function taking the value of e and delivering the value of the program M.

Given the expression (2 + 3) + (7 + 8), and assuming the evaluation of arithmetic expressions is left-to-right:

The continuation of (2 + 3) is the function λv.v + (7 + 8).
The continuation of (7 + 8) is the function λv.5 + v. Note that the continuation is not λv.(7 + 8) + v, since by the time evaluation gets to (7 + 8), the expression (2 + 3) has already been evaluated.

What will happen if we now assume a right-to-left evaluation?
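The two continuations can be written down as ordinary Python functions (k1 and k2 are my names):

```python
# Left-to-right evaluation of (2 + 3) + (7 + 8):

# continuation of (2 + 3): the rest of the work still evaluates (7 + 8)
k1 = lambda v: v + (7 + 8)

# continuation of (7 + 8): by now (2 + 3) has already been evaluated to 5
k2 = lambda v: 5 + v

print(k1(2 + 3))  # 20, the value of the whole program
print(k2(7 + 8))  # 20 as well
```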
Let's give the programmer the possibility to grab the continuation. The first extension of functional programming with first-class control was done by Peter Landin (1965), with the J operator:

Example (Code)

f = fn x. let g1 = fn y. N1
              g2 = J (fn z. N2)
          in M

When g2 is called, it does not return to where it was called, but to where f was called.

callcc (call with current continuation) in Scheme.
Given callcc(λk.M):

Variable k is bound to the continuation of the callcc expression.
M is then evaluated.
If continuation k is never invoked during the evaluation of M, then the value of M is the result of the entire callcc expression.
If continuation k is invoked during the evaluation of M with, say, value v, evaluation of M is aborted and control returns to k with value v.

E[callcc(λk.M)] → E[M[λx.E[x]/k]]
E[throw k M] → throw k M

Example:
callcc(λk.1 + throw k 0 + fib 100) + 4
→ (1 + throw (λx.(x + 4)) 0 + fib 100) + 4
→ throw (λx.(x + 4)) 0
→ 0 + 4
→ 4
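Python has no callcc, but an escape-only (one-shot, outward-jumping) approximation can be built with exceptions, which is enough to run the example above. This is a sketch, not a full call/cc: the names callcc, _Invoke and the fib stub are my constructions.

```python
class _Invoke(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def callcc(f):
    # Escape-only call/cc: k may only be used to jump out of f, never re-entered.
    tag = object()
    def k(v):
        raise _Invoke(tag, v)
    try:
        return f(k)
    except _Invoke as e:
        if e.tag is tag:
            return e.value
        raise

def fib(n):
    raise RuntimeError("never reached: k(0) escapes before fib is called")

# callcc(λk. 1 + throw k 0 + fib 100) + 4
print(callcc(lambda k: 1 + k(0) + fib(100)) + 4)  # 4
```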
Given C(λk.M):

Variable k is bound to the continuation of the C expression.
M is then evaluated.
If continuation k is never invoked during the evaluation of M, then the value of M, say v, is the result of the entire program containing the C-expression. In contrast to callcc, the surrounding context is discarded.
If continuation k is invoked during the evaluation of M with value v, evaluation of M is aborted and control returns to k with value v.

E[C(λk.M)] → M[λx.(E[x])/k]

Example: C(λk.99) + 1 → 99, whereas callcc(λk.99) + 1 → 100.
Summarizing, we have three control operators: Abort, callcc and C:

C encodes Abort: Abort M = C(λ_.M)
C encodes callcc: callcc M = C(λk.k (M k))
Γ ⊢ M : ⊥
───────────────
Γ ⊢ Abort M : σ

Γ, k : ¬A ⊢ M : A
────────────────────
Γ ⊢ callcc(λk.M) : A

Γ, k : ¬A ⊢ M : ⊥
─────────────────
Γ ⊢ C(λk.M) : A
Intuitionistic Logic = Minimal Logic + EFQ
Minimal Classical Logic = Minimal Logic + Peirce's Law
Classical Logic = Intuitionistic Logic + Peirce's Law = Minimal Logic + EFQ + Peirce's Law

Logic                    Type Theory
Minimal Logic            λ-calculus
Intuitionistic Logic     λ-calculus + Abort
Minimal Classical Logic  λ-calculus + callcc + throw
Classical Logic          λ-calculus + callcc + throw + tp
In Proofs and Types, Girard says: Intuitionistic logic is called constructive because of the correspondence between proofs and algorithms. So, for example, if we prove a formula ∃n.P(n), we can exhibit an integer n which satisfies the property P. Such an interpretation is not possible with classical logic: there is no sensible way of considering proofs as algorithms. In fact, classical logic has no denotational semantics, except the trivial one which identifies all the proofs of the same type. This is related to the nondeterministic behaviour of cut elimination.
How do we compile programs with control operators? Embed the evaluation order directly in the program.

Call-by-name CPS translation:

[[c]] = λk.k c
[[x]] = λk.x k
[[λx.M]] = λk.k (λx.[[M]])
[[M N]] = λk.[[M]] (λf.f [[N]] k)

Example:
⊢ 5 : int and [[5]] = λk.k 5 : (int → ⊥) → ⊥.
λx.x : int → int and [[λx.x]] = λk.k (λx.λq.x q) : ¬[[int → int]] → ⊥, where [[int → int]] = (¬int → ⊥) → ¬int → ⊥.

[[callcc M]] = λk.[[M]] (λf.f (λx.λk′.x k) k)
[[C M]] = λk.[[M]] (λf.f (λx.λk′.x k) (λx.x))
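The CBN translation can be followed by hand for the term (λx.x) 5 and executed in Python (the cps_* names are mine; the translated function takes its argument and continuation together, for readability):

```python
# [[5]]        = λk. k 5
cps_five = lambda k: k(5)

# [[λx.x]]     = λk. k (λx.[[x]]), with [[x]] = λk. x k
# (the bound x is itself a suspended computation awaiting a continuation)
cps_id = lambda k: k(lambda x, k2: x(k2))

# [[(λx.x) 5]] = λk. [[λx.x]] (λf. f [[5]] k)
cps_app = lambda k: cps_id(lambda f: f(cps_five, k))

# Running the program means passing the top-level continuation:
print(cps_app(lambda v: v))  # 5
```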
Theorem. If ⊢ M : σ then [[M]] : ¬¬σ*, where (σ → τ)* = ¬¬σ* → ¬¬τ* and b* = b.

The continuation-passing style transformation is related to proof translations of classical mathematics into intuitionistic mathematics. These are referred to as negative translations. The best known are the translations due to Kolmogorov, Gödel, Gentzen, Kuroda and Krivine.

Theorem. If a formula A is provable in classical logic, then [[A]] is provable in intuitionistic logic.