The Core Method

Steffen Hölldobler
International Center for Computational Logic
Technische Universität Dresden, Germany

◮ The Very Idea
◮ The Propositional CORE Method
◮ Human Reasoning

slide-2
SLIDE 2

The Very Idea

◮ Various semantics for logic programs coincide with fixed points of associated
  immediate consequence operators
  (Apt, van Emden: Contributions to the Theory of Logic Programming.
  Journal of the ACM 29, 841-862: 1982).

◮ Banach Contraction Mapping Theorem A contraction mapping f defined on
  a complete metric space (X, d) has a unique fixed point. The sequence
  y, f(y), f(f(y)), . . . converges to this fixed point for any y ∈ X.
  ⊲ Consider programs whose immediate consequence operator is a contraction.
    (Fitting: Metric Methods – Three Examples and a Theorem.
    Journal of Logic Programming 21, 113-127: 1994).

◮ Every continuous function on the reals can be uniformly approximated by
  feed-forward connectionist networks
  (Funahashi: On the Approximate Realization of Continuous Mappings by
  Neural Networks. Neural Networks 2, 183-192: 1989).
  ⊲ Consider programs whose immediate consequence operator is continuous.
    (H., Kalinke, Störr: Approximating the Semantics of Logic Programs
    by Recurrent Neural Networks. Applied Intelligence 11, 45-59: 1999).


First Ideas

◮ H., Kalinke: Towards a New Massively Parallel Computational Model for Logic
  Programming. In: Proceedings of the ECAI94 Workshop on Combining Symbolic
  and Connectionist Processing, 68-77: 1994.


Interpretations

◮ Let L be a propositional language and {⊤, ⊥} the set of truth values.

◮ An interpretation I is a mapping L → {⊤, ⊥}.

◮ For a given program P, an interpretation I can be represented by the set of
  atoms occurring in P which are mapped to ⊤ under I, i.e., I ⊆ R_P.

◮ 2^{R_P} is the set of all interpretations for P.

◮ (2^{R_P}, ⊆) is a complete lattice.

◮ An interpretation I for P is a model for P iff I(P) = ⊤.


Immediate Consequence Operator

◮ Immediate consequence operator T_P : 2^{R_P} → 2^{R_P}:

  T_P(I) = {A | there is a clause A ← L1 ∧ . . . ∧ Ln ∈ P
                such that I |= L1 ∧ . . . ∧ Ln}.

◮ I is a supported model iff T_P(I) = I.

◮ Let lfp T_P be the least fixed point of T_P if it exists.
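As a concrete illustration, T_P can be written out in a few lines of Python. This is a minimal sketch; the clause representation, the name tp, and the spelling "~a" for ∼a are our own choices, not from the slides.

  # A program is a list of clauses (head, body); a body is a list of
  # literals, where the string "~a" stands for the negative literal ∼a.
  def tp(program, interp):
      """One application of the immediate consequence operator T_P."""
      def holds(lit):                        # does interp |= lit ?
          if lit.startswith("~"):
              return lit[1:] not in interp   # ∼a holds iff a is mapped to ⊥
          return lit in interp
      return {head for head, body in program
              if all(holds(lit) for lit in body)}

  P = [("p", []), ("r", ["p", "~q"]), ("r", ["~p", "q"])]
  print(sorted(tp(P, {"p"})))               # ['p', 'r']
  print(tp(P, {"p", "r"}) == {"p", "r"})    # True: {p, r} is a supported model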


The Propositional CORE Method

◮ Let L be a propositional logic language.

◮ Given a logic program P together with its immediate consequence operator T_P.

◮ Let |R_P| = m and 2^{R_P} be the set of interpretations for P.

◮ Find a mapping rep : 2^{R_P} → R^m.

◮ Construct a feed-forward network computing f_P : R^m → R^m, called the core,
  such that the following holds:
  ⊲ If T_P(I) = J then f_P(rep(I)) = rep(J), where I, J ∈ 2^{R_P}.
  ⊲ If f_P(s) = t then T_P(rep⁻¹(s)) = rep⁻¹(t), where s, t ∈ R^m.

◮ Connect the units in the output layer recursively to the units
  in the input layer.

◮ Show that the following holds:
  ⊲ I = lfp T_P iff the recurrent network converges to rep(I),
    i.e., it reaches a stable state with input and output layer
    representing rep(I).

◮ Connectionist model generation using recurrent networks with
  feed-forward core.


3-Layer Recurrent Networks

[Figure: a 3-layer recurrent network; input, hidden, and output layer form the
feed-forward core, with recurrent connections of weight 1 from the output
units back to the corresponding input units.]

◮ At each point in time all units do:
  ⊲ apply the activation function to obtain their potential,
  ⊲ apply the output function to obtain their output.


Propositional CORE Method using Binary Threshold Units

◮ Let L be the language of propositional logic.

◮ Let P be a propositional logic program, e.g.,

  P = {p, r ← p ∧ ∼q, r ← ∼p ∧ q}.

◮ T_P(I) = {A | A ← L1 ∧ . . . ∧ Lm ∈ P such that I |= L1 ∧ . . . ∧ Lm}.

  T_P(∅)      = {p}
  T_P({p})    = {p, r}
  T_P({p, r}) = {p, r} = lfp T_P
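The same iteration as a runnable sketch (tp is restated from the snippet above so that this block is self-contained):

  def tp(program, interp):
      def holds(lit):
          return (lit[1:] not in interp) if lit.startswith("~") else (lit in interp)
      return {h for h, body in program if all(holds(l) for l in body)}

  P = [("p", []), ("r", ["p", "~q"]), ("r", ["~p", "q"])]
  I, trace = set(), []
  while True:
      trace.append(sorted(I))
      J = tp(P, I)
      if J == I:
          break
      I = J
  print(trace)       # [[], ['p'], ['p', 'r']]  -- the iterates of T_P
  print(sorted(I))   # ['p', 'r'] = lfp T_P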


Representing Interpretations

◮ Consider the set 2^{R_P} of interpretations for P.

◮ Let m = |R_P| and identify R_P with {1, . . . , m}.

◮ Define rep : 2^{R_P} → R^m such that for all 1 ≤ j ≤ m we find:

  rep(I)[j] = 1 if j ∈ I, and 0 if j ∉ I.

  E.g., if R_P = {p, q, r} = {1, 2, 3} and I = {p, r} then rep(I) = (1, 0, 1).

◮ Other encodings are possible, e.g.,

  rep′(I)[j] = 1 if j ∈ I, and −1 if j ∉ I.

◮ We can represent interpretations by arrays of binary or bipolar threshold
  units; both encodings are sketched below.
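In code, the two encodings look as follows (a small sketch; the names are ours):

  atoms = ["p", "q", "r"]                    # identifies R_P with {1, 2, 3}

  def rep(I):        # rep(I)[j] = 1 if j ∈ I, 0 otherwise
      return [1 if a in I else 0 for a in atoms]

  def rep_prime(I):  # rep′(I)[j] = 1 if j ∈ I, -1 otherwise
      return [1 if a in I else -1 for a in atoms]

  print(rep({"p", "r"}))        # [1, 0, 1], as in the example above
  print(rep_prime({"p", "r"}))  # [1, -1, 1]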


Computing the Core

◮ Theorem For each program P,
  there exists a core of logical threshold units computing T_P.

◮ Proof Let P be a program, m = |R_P|, and ω ∈ R+.
  Wlog we assume that all occurrences of ⊤ in P have been eliminated.

  ⊲ Translation Algorithm (a code sketch follows below)
    1 Input and output layer: vectors of length m of binary threshold units,
      with threshold 0.5 in the input layer and ω/2 in the output layer.
    2 For each clause of the form A ← L1 ∧ . . . ∧ Lk ∈ P, k ≥ 0, do:
      2.1 Add a binary threshold unit uh to the hidden layer.
      2.2 Connect uh to the unit representing A in the output layer
          with weight ω.
      2.3 For each literal Lj, 1 ≤ j ≤ k, connect the unit representing Lj
          in the input layer to uh; if Lj is an atom, set the weight to ω,
          otherwise set it to −ω.
      2.4 Set the threshold of uh to (p − 0.5)ω, where p is the number of
          positive literals occurring in L1 ∧ . . . ∧ Lk.
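The algorithm is easy to state in code. The following NumPy sketch is ours (matrix layout, function names, and clause representation); a unit is taken to output 1 iff its potential reaches its threshold.

  import numpy as np

  def build_core(program, atoms, omega=1.0):
      """Translation algorithm: weights and thresholds of a feed-forward
      core of binary threshold units, one hidden unit per clause."""
      m, n = len(atoms), len(program)
      idx = {a: i for i, a in enumerate(atoms)}
      w_ih = np.zeros((n, m)); th_h = np.zeros(n)            # input -> hidden
      w_ho = np.zeros((m, n)); th_o = np.full(m, omega / 2)  # hidden -> output
      for h, (head, body) in enumerate(program):
          w_ho[idx[head], h] = omega                         # step 2.2
          p = 0                                              # positive literals
          for lit in body:                                   # step 2.3
              if lit.startswith("~"):
                  w_ih[h, idx[lit[1:]]] = -omega
              else:
                  w_ih[h, idx[lit]] = omega
                  p += 1
          th_h[h] = (p - 0.5) * omega                        # step 2.4
      return w_ih, th_h, w_ho, th_o

  def core_step(x, core):
      """One pass through the core: hidden layer, then output layer."""
      w_ih, th_h, w_ho, th_o = core
      hidden = (w_ih @ x >= th_h).astype(float)
      return (w_ho @ hidden >= th_o).astype(float)

  atoms = ["p", "q", "r"]
  P = [("p", []), ("r", ["p", "~q"]), ("r", ["~p", "q"])]
  core = build_core(P, atoms)
  x = np.zeros(3)            # rep(∅); the recurrence feeds the output back
  for _ in range(3):
      x = core_step(x, core)
  print(x)                   # [1. 0. 1.] = rep({p, r}) = rep(lfp T_P)

Note that with 0/1 outputs, feeding the output back through input units with threshold 0.5 over weight-1 recurrent connections acts as the identity.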


Computing the Core (Continued)

◮ Theorem For each program P,
  there exists a core of logical threshold units computing T_P.

◮ Proof (Continued) Some observations:
  ⊲ uh becomes active at time t + 1 iff L1 ∧ . . . ∧ Lk is mapped to ⊤
    by the interpretation represented by the state of the input layer
    at time t.
  ⊲ The unit representing A in the output layer becomes active at time t + 2
    iff there is a rule of the form A ← L1 ∧ . . . ∧ Lk ∈ P and the unit uh
    in the hidden layer corresponding to this rule is active at time t + 1.
  ⊲ The result follows immediately from these observations. qed


Computing the Core (Example)

◮ Consider again P = {p, r ← p ∧ ∼q, r ← ∼p ∧ q}.

◮ The translation algorithm yields:

  [Figure: the core for P; input units p, q, r with threshold 0.5, one hidden
  unit per clause, output units p, q, r with threshold ω/2, and connection
  weights ω and −ω as prescribed by the translation algorithm.]


Hidden Layers are Needed

◮ The XOR can be represented by the program {r ← p ∧ ∼q, r ← ∼p ∧ q}.

◮ Proposition 2-layer networks cannot compute T_P for definite P.

◮ Proof Suppose there exist 2-layer networks computing T_P for definite P.

  ⊲ Consider P = {p ← q, p ← r ∧ s, p ← t ∧ u}.

  [Figure: a 2-layer network; input units 1-6 for p, q, r, s, t, u with
  threshold 0.5, output units 7-12 for p, q, r, s, t, u, where unit 7 (p)
  has threshold θ7 and incoming weights w72, . . . , w76.]

  ⊲ Let v be the state of the input layer; v is an interpretation.


Hidden Layers are Needed (Continued)

◮ Proposition 2-layer networks cannot compute T_P for definite P.

◮ Proof (Continued) Consider P = {p ← q, p ← r ∧ s, p ← t ∧ u}.

  ⊲ We have to find θ7 and w7j, 2 ≤ j ≤ 6, such that
    p ∈ T_P(v) iff w72 v2 + w73 v3 + w74 v4 + w75 v5 + w76 v6 ≥ θ7.

  ⊲ Because conjunction is commutative we find
    p ∈ T_P(v) iff w72 v2 + w74 v3 + w73 v4 + w76 v5 + w75 v6 ≥ θ7.

  ⊲ Consequently, p ∈ T_P(v) iff w72 v2 + w1 (v3 + v4) + w2 (v5 + v6) ≥ θ7,
    where w1 = (w73 + w74)/2 and w2 = (w75 + w76)/2.


Hidden Layers are Needed (Continued)

◮ Proposition 2-layer networks cannot compute T_P for definite P.

◮ Proof (Continued) Consider P = {p ← q, p ← r ∧ s, p ← t ∧ u}.

  ⊲ Likewise, because disjunction is commutative we find
    p ∈ T_P(v) iff w · x ≥ θ7,
    where w = (w72 + w1 + w2)/3 and x = v2 + v3 + v4 + v5 + v6.

  ⊲ For the network to compute T_P the following must hold:
    · If x = 0 (v2 = . . . = v6 = 0) then w · x − θ7 < 0.
    · If x = 1 (v2 = 1, v3 = . . . = v6 = 0) then w · x − θ7 ≥ 0.
    · If x = 2 (v2 = v4 = v6 = 0, v3 = v5 = 1) then w · x − θ7 < 0.

  ⊲ However, d(w · x − θ7)/dx = w cannot change its sign; contradiction.

  ⊲ Consequently, 2-layer feed-forward networks cannot compute T_P. qed


Adding Recurrent Connections

◮ Recall P = {p, r ← p ∧ ∼q, r ← ∼p ∧ q}.

[Figure: the core for P as on the previous slides, extended with recurrent
connections of weight 1 from each output unit back to the corresponding
input unit.]

On the Existence of Least Fixed Points

◮ Theorem For definite programs P, T_P has a least fixed point, which can be
  obtained by iterating T_P starting with the empty interpretation.
  (Apt, van Emden: Contributions to the Theory of Logic Programming.
  Journal of the ACM 29, 841-862: 1982.)

◮ In general, however, least fixed points do not always exist.

  ⊲ Consider P = {p ← ∼q, q ← ∼p}.

  [Figure: the recurrent network for P, with input and output units for p
  and q, one hidden unit per clause, and crossing negative weights −ω.]

  ⊲ The corresponding recurrent network does not reach a stable state
    if initialized by the empty interpretation.
  ⊲ It has two stable states, corresponding to the interpretations {p} and {q}.
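The oscillation can be checked directly with the T_P sketch from above (restated here):

  # P = {p ← ∼q, q ← ∼p}: iterating T_P from ∅ never converges;
  # {p} and {q} are the two fixed points (the stable network states).
  def tp(program, interp):
      def holds(lit):
          return (lit[1:] not in interp) if lit.startswith("~") else (lit in interp)
      return {h for h, body in program if all(holds(l) for l in body)}

  P = [("p", ["~q"]), ("q", ["~p"])]
  I = set()
  for t in range(4):
      print(t, sorted(I))    # [], ['p', 'q'], [], ['p', 'q'], ...
      I = tp(P, I)
  print(tp(P, {"p"}) == {"p"}, tp(P, {"q"}) == {"q"})   # True True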


Metrics for Logic Programs

◮ Let P be a program and l a level mapping for P.

◮ For interpretations I, J ⊆ R_P we define

  d_P(I, J) = 0 if I = J, and
  d_P(I, J) = 2^{−n} if n is the smallest level on which I and J differ.

◮ Proposition 1 (2^{R_P}, d_P) is a complete metric space.

◮ Proposition 2 If P is acceptable,
  then there exists a metric space such that T_P is a contraction on it.

◮ For proofs of both propositions see
  Fitting: Metric Methods – Three Examples and a Theorem.
  Journal of Logic Programming 21, 113-127: 1994.
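A small sketch of d_P, assuming the level mapping is given as a dict from atoms to natural numbers (the example mapping is hypothetical; I ^ J is Python's symmetric difference):

  def d_P(I, J, level):
      if I == J:
          return 0.0
      n = min(level[a] for a in I ^ J)   # smallest level on which I, J differ
      return 2.0 ** (-n)

  level = {"p": 1, "q": 2, "r": 3}
  print(d_P({"p", "q"}, {"p"}, level))   # 0.25 = 2^{-2}: they differ on level 2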

◮ Corollary If P is acceptable, then there exists a 3-layer recurrent network
  of logical threshold units such that the computation starting with an
  arbitrary initial input converges and yields the unique fixed point of T_P.


Time and Space Complexity

◮ Let n = |P| be the number of clauses
  and m = |R_P| be the number of propositional variables occurring in P.

  ⊲ The core has 2m + n units and 2mn connections.
  ⊲ T_P(I) is computed in 2 steps.
  ⊲ The parallel computational model to compute T_P(I) is optimal.
    (A parallel computational model requiring p(n) processors and t(n) time
    to solve a problem of size n is optimal if p(n) · t(n) = O(T(n)), where
    T(n) is the sequential time to solve this problem; see: Karp, Ramachandran:
    Parallel Algorithms for Shared-Memory Machines. In: Handbook of Theoretical
    Computer Science, Elsevier, 869-941: 1990.)
  ⊲ The recurrent network settles down in 3n steps in the worst case.
    (Dowling, Gallier: Linear-time Algorithms for Testing the Satisfiability
    of Propositional Horn Formulae. J. of Logic Programming 1, 267-284: 1984;
    Scutella: A Note on Dowling and Gallier's Top-Down Algorithm for
    Propositional Horn Satisfiability. J. of Logic Programming 8, 265-272: 1990.)

◮ Exercise Give an example of a program with worst case time behavior.


Reasoning wrt Least Fixed Points

◮ Let P be a program and assume that T_P admits a least fixed point.

◮ Let lfp T_P denote the least fixed point of T_P.

◮ It can be shown that lfp T_P is the least model of P.

◮ We define P |=lm F iff (lfp T_P)(F) = ⊤.

◮ Observe that |=lm ≠ |=.

◮ Consider P = {p, q ← r}. Then,
  ⊲ lfp T_P = {p} and
  ⊲ P |=lm p ∧ ∼q ∧ ∼r, but
  ⊲ P ⊭ ∼q and P ⊭ ∼r.

◮ If we consider |=lm, then negation is not classical negation.
  ⊲ This is the reason for using ∼ instead of ¬.
  ⊲ ∼ is often called negation by failure.


Extensions

◮ The approach has been extended to
  ⊲ many-valued logic programs (Kalinke 1994; Seda, Lane 2004),
  ⊲ extended logic programs (d’Avila Garcez, Broda, Gabbay 2002),
  ⊲ modal logic programs (d’Avila Garcez, Lamb, Gabbay 2002),
  ⊲ intuitionistic logic programs (d’Avila Garcez, Lamb, Gabbay 2003),
  ⊲ first-order logic programs (H., Kalinke, Störr 1999;
    Bader, Hitzler, H., Witzel 2007).


KBANN – Knowledge Based Artificial Neural Networks

◮ Towell, Shavlik: Extracting Refined Rules from Knowledge-Based
  Neural Networks. Machine Learning 13, 71-101: 1993.
  Can we do better than empirical learning?

◮ Consider acyclic logic programs, e.g.,

  P = {a ← b ∧ c ∧ ∼d, a ← d ∧ ∼e, h ← f ∧ g, k ← a ∧ ∼h}.

  [Figure: the network obtained from P; input units b, c, d, e, f, g with
  threshold 0.5, intermediate units a and h, output unit k, with thresholds
  and weights ±ω determined by the rules.]


KBANN – Learning

◮ Given hierarchical sets of propositional rules as background knowledge.

◮ Map rules into multi-layer feed-forward networks with sigmoidal units.

◮ Add hidden units (optional).

◮ Add units for known input features that are not referenced in the rules.

◮ Fully connect layers.

◮ Add near-zero random numbers to all links and thresholds.

◮ Apply backpropagation.

  ⊲ Empirical evaluation: the system performs better than purely empirical
    and purely hand-built classifiers.


KBANN – A Problem

◮ Towell, Shavlik 1993: “Works if rules have few conditions and
  there are few rules with the same head.”

  [Figure: a network with input units q1, . . . , q10 and r1, . . . , r10,
  hidden units q and r with threshold 19ω/2, and an output unit s with
  threshold ω/2.]

◮ pq = pr = 9ω and vq = vr = 1/(1 + e^(−β(9ω − 9.5ω))) ≈ 0.46 with β = 1.

◮ ps = 0.92ω and vs = 1/(1 + e^(−β(0.92ω − 0.5ω))) ≈ 0.6 with β = 1.


Solving the Problem

◮ d’Avila Garcez, Zaverucha, Carvalho:
  Logic Programming and Inductive Learning in Artificial Neural Networks.
  In: Knowledge Representation in Neural Networks (Herrmann, Strohmaier, eds.),
  Logos, Berlin, 33-46: 1997.
  Can we combine the ideas of the propositional CORE method and KBANN
  while avoiding the above-mentioned problem?

◮ The approach has been generalized in
  Bader: Neural-Symbolic Integration. PhD thesis, TU Dresden, Informatik: 2009.


Propositional CORE Method using Squashing Units

◮ Let u be a squashing unit with squashing function Ψ; let v denote its output.
  ⊲ Let Ψ− = lim_{p→−∞} Ψ(p) and Ψ+ = lim_{p→+∞} Ψ(p).
  ⊲ Let a−, a0, a+ ∈ R such that Ψ− < a− < a0 < a+ < Ψ+.
  ⊲ u is active iff v ≥ a+. u is passive iff v ≤ a−.
    u is in the null-state iff v = a0.
    u is undecided iff a− < v < a+ and v ≠ a0.
  ⊲ p+ = Ψ⁻¹(a+) is called the minimal activation potential.
    p− = Ψ⁻¹(a−) is called the maximal inactivation potential.
    p0 = Ψ⁻¹(a0) is called the null-state potential.


The Task

◮ How can we guarantee that a unit is either active, passive or in
  the null-state?

◮ Suppose the input layer units output only finitely many values.

◮ Let u be a hidden layer unit.

◮ If the input layer is finite,
  then the potential of u may only take finitely many different values.

◮ Let P = {p1, . . . , pn} be the set of possible values for the potential of u.

◮ Let P+ = {p ∈ P | p > p0} and P− = {p ∈ P | p < p0}.

◮ Let m = max(m−, m+) (see the sketch below), where

  m+ = |p+ / (min P+ − p0)| if P+ ≠ ∅, and
  m− = |p− / (p0 − max P−)| if P− ≠ ∅

  (cf. the example on the next slide).

◮ Observations
  ⊲ If the weights on the connections to u and the threshold of u are
    multiplied by m, then u is either active, passive or in the null-state.
  ⊲ u produces only finitely many different output values.
  ⊲ The transformation can be applied to output layer units as well.
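A sketch of the transformation for Ψ = tanh; the value returned when P+ or P− is empty is our assumption (that case imposes no constraint, which the slide leaves open):

  import math

  def scaling_factor(potentials, a_minus, a_zero, a_plus, psi_inv=math.atanh):
      """m = max(m-, m+) as defined above, for a unit with Psi = tanh."""
      p_plus, p_minus, p0 = psi_inv(a_plus), psi_inv(a_minus), psi_inv(a_zero)
      P_plus  = [p for p in potentials if p > p0]
      P_minus = [p for p in potentials if p < p0]
      m_plus  = abs(p_plus  / (min(P_plus)  - p0)) if P_plus  else 1.0  # assumption
      m_minus = abs(p_minus / (p0 - max(P_minus))) if P_minus else 1.0  # assumption
      return max(m_minus, m_plus)

  m = scaling_factor({-0.9, -0.5, -0.3, 0.0, 0.2, 0.4, 0.8}, -0.8, 0.0, 0.8)
  print(round(m, 3))   # 5.493, the value computed on the next slide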


Example

◮ Consider a bipolar sigmoidal unit u with Ψ(p) = tanh(p).

◮ Let P = {−0.9, −0.5, −0.3, 0.0, 0.2, 0.4, 0.8}
  and a− = −0.8, a0 = 0.0, a+ = 0.8.

◮ Then, p− = Ψ⁻¹(a−) ≈ −1.1, p0 = Ψ⁻¹(a0) = 0.0, p+ = Ψ⁻¹(a+) ≈ 1.1.

◮ Hence, m− = |−1.1 / (0.0 + 0.3)| = 3.662 and m+ = |1.1 / (0.2 − 0.0)| = 5.493.

◮ Thus, m = max(m−, m+) = max(3.662, 5.493) = 5.493 and we obtain

     p     tanh(p)   tanh(m · p)
   −0.9    −0.716      −0.999
   −0.5    −0.462      −0.992
   −0.3    −0.291      −0.929
    0.0     0.000       0.000
    0.2     0.197       0.800
    0.4     0.380       0.976
    0.8     0.664       0.999
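The table can be reproduced with the scaling factor from the previous sketch:

  import math

  m = 5.493
  for p in [-0.9, -0.5, -0.3, 0.0, 0.2, 0.4, 0.8]:
      print(f"{p:5.1f}  {math.tanh(p):8.3f}  {math.tanh(m * p):8.3f}")
  # After scaling, every potential is mapped beyond a+ = 0.8, below a− = −0.8,
  # or exactly to a0 = 0.0, so u is active, passive, or in the null-state.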

◮ Exercise Specify a core of bipolar sigmoidal units for
  P = {p, r ← p ∧ ∼q, r ← ∼p ∧ q} with a− = −0.9, a0 = 0.0, a+ = 0.9.


Results

◮ The relation to logic programs is preserved.
  ⊲ For each program P, there exists a core of squashing units computing T_P.
  ⊲ If P is acceptable, then there exists a 3-layer recurrent network of
    squashing units such that the computation starting with an arbitrary
    initial input converges and yields the unique fixed point of T_P.
  ⊲ Likewise for consistent acceptable extended logic programs.

◮ The core is trainable by backpropagation.

◮ The neural-symbolic cycle

  [Figure: the neural-symbolic cycle; a symbolic system is embedded into a
  connectionist system, which is writable and trainable, and symbolic
  knowledge is extracted back from it, making it readable.]


Cores for Three-Valued Logic Programs

◮ Consider {p ← q}.

◮ A translation algorithm translates programs into a core.

◮ Recurrent connections connect the output to the input layer.

  [Figure: two cores for {p ← q}, each with input and output units for the
  literals p, ¬p, q, ¬q; one computes the operator ΦF (Kalinke 1995; Seda,
  Lane 2004), the other the operator ΦSvL (new).]


A CORE Method for Human Reasoning

◮ Consider three-layer feed-forward networks of binary threshold units.

◮ The input as well as the output layer shall represent interpretations.

◮ Theorem
  For each program P there exists a feed-forward core computing ΦSvL,P.

◮ Add recurrent connections between corresponding units
  in the output and the input layer.

◮ Corollary
  The recurrent network reaches a stable state representing lfp ΦSvL,P
  if initialized with ⟨∅, ∅⟩.
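To see the corollary at work on the suppression-task programs that follow, here is a sketch of ΦSvL on pairs ⟨I+, I−⟩. The operator definition is our reading of the Stenning-van Lambalgen semantics used on these slides (an atom becomes true if some clause body for it is true, and false if it has clauses and all their bodies are false); the representation and names are ours.

  # "~a" stands for ¬a; TOP and BOT stand for the body atoms ⊤ and ⊥.
  TOP, BOT = "TOP", "BOT"

  def lit_val(lit, pos, neg):
      """Three-valued value of a body literal: True, False or None (unknown)."""
      if lit == TOP: return True
      if lit == BOT: return False
      if lit.startswith("~"):
          a = lit[1:]
          return True if a in neg else False if a in pos else None
      return True if lit in pos else False if lit in neg else None

  def body_val(body, pos, neg):
      vals = [lit_val(l, pos, neg) for l in body]
      if any(v is False for v in vals): return False
      if all(v is True for v in vals): return True
      return None

  def phi(program, pos, neg):
      new_pos, new_neg = set(), set()
      for a in {head for head, _ in program}:
          vals = [body_val(b, pos, neg) for head, b in program if head == a]
          if any(v is True for v in vals): new_pos.add(a)
          elif all(v is False for v in vals): new_neg.add(a)
      return new_pos, new_neg

  def lfp_phi(program):
      pos, neg = set(), set()                 # start with ⟨∅, ∅⟩
      while phi(program, pos, neg) != (pos, neg):
          pos, neg = phi(program, pos, neg)
      return pos, neg

  # P6 below (additional argument): l remains unknown.
  P6 = [("l", ["e", "~ab1"]), ("e", [TOP]),
        ("l", ["o", "~ab2"]), ("ab1", ["~o"]), ("ab2", ["~e"])]
  print(lfp_phi(P6))   # ({'e'}, {'ab2'})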


The Suppression Task – Modus Ponens

◮ If she has an essay to write, she will study late in the library.
  She has an essay to write.

◮ 96% of subjects conclude that she will study late in the library.

◮ P4 = {l ← e ∧ ¬ab, e ← ⊤, ab ← ⊥}.

  [Figure: the core for P4, with input and output units for e, ¬e, l, ¬l,
  ab, ¬ab and recurrent connections between corresponding units.]

◮ lfp ΦSvL,P4 = lm3Ł wc P4 (the least three-valued Łukasiewicz model of
  the weak completion of P4) = ⟨{l, e}, {ab}⟩.

◮ From ⟨{l, e}, {ab}⟩ it follows that she will study late in the library.


The Suppression Task – Alternative Arguments

◮ If she has an essay to write, she will study late in the library.
  She has an essay to write.
  If she has some textbooks to read, she will study late in the library.

◮ 96% of subjects conclude that she will study late in the library.

◮ P5 = {l ← e ∧ ¬ab1, e ← ⊤, ab1 ← ⊥, l ← t ∧ ¬ab2, ab2 ← ⊥}.

  [Figure: the core for P5, with input and output units for e, ¬e, l, ¬l,
  ab1, ¬ab1, t, ¬t, ab2, ¬ab2 and recurrent connections.]

◮ lfp ΦSvL,P5 = lm3Ł wc P5 = ⟨{e, l}, {ab1, ab2}⟩.

◮ From ⟨{e, l}, {ab1, ab2}⟩ it follows that she will study late in the library.


The Suppression Task – Additional Argument

◮ If she has an essay to write, she will study late in the library.
  She has an essay to write.
  If the library stays open, she will study late in the library.

◮ 38% of subjects conclude that she will study late in the library.

◮ P6 = {l ← e ∧ ¬ab1, e ← ⊤, l ← o ∧ ¬ab2, ab1 ← ¬o, ab2 ← ¬e}.

  [Figure: the core for P6, with input and output units for e, ¬e, l, ¬l,
  ab1, ¬ab1, o, ¬o, ab2, ¬ab2 and recurrent connections.]

◮ lfp ΦSvL,P6 = lm3Ł wc P6 = ⟨{e}, {ab2}⟩.

◮ From ⟨{e}, {ab2}⟩ it follows
  that it is unknown whether she will study late in the library.


The Suppression Task – Denial of Antecedent (DA)

◮ If she has an essay to write, she will study late in the library.
  She does not have an essay to write.

◮ 46% of subjects conclude that she will not study late in the library.

◮ P7 = {l ← e ∧ ¬ab, e ← ⊥, ab ← ⊥}.

  [Figure: the core for P7, with input and output units for e, ¬e, l, ¬l,
  ab, ¬ab and recurrent connections.]

◮ lfp ΦSvL,P7 = lm3Ł wc P7 = ⟨∅, {ab, e, l}⟩.

◮ From ⟨∅, {ab, e, l}⟩ it follows that she will not study late in the library.


The Suppression Task – Alternative Argument and DA

◮ If she has an essay to write, she will study late in the library.
  She does not have an essay to write.
  If she has textbooks to read, she will study late in the library.

◮ 4% of subjects conclude that she will not study late in the library.

◮ P8 = {l ← e ∧ ¬ab1, e ← ⊥, ab1 ← ⊥, l ← t ∧ ¬ab2, ab2 ← ⊥}.

  [Figure: the core for P8, with input and output units for e, ¬e, l, ¬l,
  ab1, ¬ab1, t, ¬t, ab2, ¬ab2 and recurrent connections.]

◮ lfp ΦSvL,P8 = lm3Ł wc P8 = ⟨∅, {ab1, ab2, e}⟩.

◮ From ⟨∅, {ab1, ab2, e}⟩ it follows
  that it is unknown whether she will study late in the library.


The Suppression Task – Additional Argument and DA

◮ If she has an essay to write, she will study late in the library.
  She does not have an essay to write.
  If the library is open, she will study late in the library.

◮ 63% of subjects conclude that she will not study late in the library.

◮ P9 = {l ← e ∧ ¬ab1, e ← ⊥, l ← o ∧ ¬ab2, ab1 ← ¬o, ab2 ← ¬e}.

  [Figure: the core for P9, with input and output units for e, ¬e, l, ¬l,
  ab1, ¬ab1, o, ¬o, ab2, ¬ab2 and recurrent connections.]

◮ lfp ΦSvL,P9 = lm3Ł wc P9 = ⟨{ab2}, {e, l}⟩.

◮ From ⟨{ab2}, {e, l}⟩ it follows that she will not study late in the library.


Summary

◮ Under Łukasiewicz semantics we obtain

                                   Byrne 1989    Program   lm3Ł wc Pi(l)
  Modus Ponens                     l  (96%)      P4        ⊤
  Alternative Arguments            l  (96%)      P5        ⊤
  Additional Arguments             l  (38%)      P6        U
  Modus Ponens and DA              ¬l (46%)      P7        ⊥
  Alternative Arguments and DA     ¬l (4%)       P8        U
  Additional Arguments and DA      ¬l (63%)      P9        ⊥

◮ The approach appears to be adequate.

◮ Fitting semantics or completion is inadequate.
