
A Model-Invariant Theory of Singular Causation

J. Dmitri Gallow

Causal & Explanatory Reasoning · Venice International University · November 13, 2017

1 Causal Models

1. We will represent causal determination structure with a causal model, or structural equations model.

Causal Models A causal model ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟) is a 5-tuple of:
(a) A vector, 𝒰 = (U₁, U₂, …, U_M), of exogenous variables;
(b) An assignment of values, ⃗u = (u₁, u₂, …, u_M), to 𝒰;
(c) A vector, 𝒱 = (V₁, V₂, …, V_N), of endogenous variables;
(d) A vector, ℰ = (ϕ_{V₁}, ϕ_{V₂}, …, ϕ_{V_N}), of structural equations, one for each Vᵢ ∈ 𝒱; and
(e) A specification, 𝒟, of which variable values are default and which are deviant.

𝒰: (A, C)   ⃗u: (1, 1)   𝒱: (B, D, E)   ℰ: { E := B ∨ D,  D := C,  B := A ∧ ¬C }

Figure 1: Preemptive Overdetermination. (For all variables, the value 0 is default, and the value 1 is deviant.)

(a) Given a neuron diagram, let the canonical model be the one that has, for each neuron, a binary variable taking the value 1 if the neuron fires and the value 0 if it doesn't fire (where not firing is default, firing deviant), and a true system of equations describing how the values of those variables are causally determined by each other. I'll assume throughout that the canonical model of a neuron diagram is correct.

2. Given a causal model ℳ, and an assignment v of values to the variables in V, we can define a counterfactual model ℳ[V → v].

Counterfactual Causal Models Given a causal model ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟), including the variables V, and given the assignment of values v to V, the counterfactual model ℳ[V → v] = (𝒰[V → v], ⃗u[V → v], 𝒱[V → v], ℰ[V → v], 𝒟[V → v]) is the model such that:
(a) 𝒰[V → v] = 𝒰 ∪ V
(b) ⃗u[V → v] = ⃗u ∪ v
(c) 𝒱[V → v] = 𝒱 − V
(d) ℰ[V → v] = ℰ − (ϕ_{Vᵢ} | Vᵢ ∈ V)
(e) 𝒟[V → v] = 𝒟
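The two definitions above lend themselves to a compact computational sketch. The following Python is my own illustration, not the handout's: a model is an exogenous assignment plus a dictionary of structural equations, and the counterfactual model ℳ[V → v] promotes the intervened variables to exogenous status and drops their equations, per clauses (a)–(d). Figure 1's equations supply the data.

```python
# A minimal causal-model sketch (illustrative, not from the handout),
# using Figure 1's system: E := B or D, D := C, B := A and not C.

def solve(exo, eqs):
    """Derive every endogenous value from the exogenous assignment `exo`
    and the structural equations `eqs` (assumes an acyclic system)."""
    vals = dict(exo)
    while len(vals) < len(exo) + len(eqs):
        for var, eq in eqs.items():
            if var not in vals:
                try:
                    vals[var] = eq(vals)
                except KeyError:   # some parent not yet computed
                    pass
    return vals

def intervene(exo, eqs, setting):
    """The counterfactual model M[V -> v]: the variables in `setting`
    become exogenous with the assigned values (clauses a-b) and their
    structural equations are removed (clauses c-d)."""
    return {**exo, **setting}, {v: e for v, e in eqs.items() if v not in setting}

exo = {"A": 1, "C": 1}                       # Figure 1's exogenous assignment
eqs = {
    "B": lambda v: int(v["A"] and not v["C"]),
    "D": lambda v: v["C"],
    "E": lambda v: int(v["B"] or v["D"]),
}

actual = solve(exo, eqs)                                 # B = 0, D = 1, E = 1
counterfactual = solve(*intervene(exo, eqs, {"C": 0}))   # B = 1, D = 0, E = 1
```

Note that E = 1 in both the actual model and the C := 0 model: this is the preemption pattern the talk returns to below, where E fails to counterfactually depend on its actual cause C.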


Figure 2: Symmetric Overdetermination. Figure 3: Bogus Prevention.

3. Using counterfactual models, we may provide a semantics for causal counterfactuals:

Causal Counterfactuals In a causal model ℳ, containing the variables in V, the causal counterfactual V = v □→ ϕ is true iff ϕ is true in the counterfactual model ℳ[V → v]:

ℳ |= V = v □→ ϕ  ⟺  ℳ[V → v] |= ϕ

2 The Counterfactual Counterfactual Theory

4. The majority of theories of causation formulated in terms of causal models say that causation is counterfactual dependence within some appropriate counterfactual model.¹

The Counterfactual Counterfactual Theory In a causal model ℳ, C's taking on the value c, rather than c*, causes E to take on the value e, rather than e*, iff there is some suitable vector of variables O with suitable values o such that, if O is held fixed at o, then both
(I) C would have taken on the value c and E would have taken on the value e,
ℳ[O → o] |= C = c ∧ E = e
and (II) had C taken on the value c*, E would have taken on the value e*,
ℳ[O → o] |= C = c* □→ E = e*

(a) This theory gives us a 4-place causal relation: Cause(C = c, C = c*, E = e, E = e*). From this, we may recover a familiar 2-place causal relation:

Cause(C = c, E = e) ⟺ ∃c* ∃e* Cause(C = c, C = c*, E = e, E = e*)

5. In the case of Symmetric Overdetermination from figure 2, the counterfactual counterfactual account says that C's firing caused E to fire. However, the same system of equations, and the same values, may be used to model the case of Bogus Prevention from figure 3.
(a) The solution is to say (roughly) that, in order to be suitable, the variable values O = o must be at least as default as the values O take on in the actual model.²
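Under the semantics just given, the counterfactual counterfactual test can be sketched directly (again my own illustration, reusing the Figure 1 model from §1, since Figure 2's equations are not reproduced in the text). With O = (B) held fixed at its actual, default value 0, conditions (I) and (II) both come out true, so the account counts C = 1 as a cause of E = 1:

```python
# Sketch of M |= (V = v) box-arrow phi, and of the CC theory's two
# conditions, for Figure 1's model (illustrative, not from the handout).

def solve(exo, eqs):
    """Derive endogenous values by iterating the structural equations."""
    vals = dict(exo)
    while len(vals) < len(exo) + len(eqs):
        for var, eq in eqs.items():
            if var not in vals:
                try:
                    vals[var] = eq(vals)
                except KeyError:
                    pass
    return vals

def intervene(exo, eqs, setting):
    """M[V -> v]: intervened variables become exogenous; equations dropped."""
    return {**exo, **setting}, {v: e for v, e in eqs.items() if v not in setting}

def holds(exo, eqs, antecedent, var, val):
    """Truth of the causal counterfactual: evaluate `var` in M[V -> v]."""
    return solve(*intervene(exo, eqs, antecedent))[var] == val

exo = {"A": 1, "C": 1}
eqs = {
    "B": lambda v: int(v["A"] and not v["C"]),
    "D": lambda v: v["C"],
    "E": lambda v: int(v["B"] or v["D"]),
}

# Plain dependence fails: under C := 0 the backup B fires and E = 1 anyway.
assert not holds(exo, eqs, {"C": 0}, "E", 0)

# Hold O = (B) fixed at its actual (default) value o = (0):
frozen = intervene(exo, eqs, {"B": 0})
assert solve(*frozen)["E"] == 1              # (I)  C = 1 and E = 1 still hold
assert holds(*frozen, {"C": 0}, "E", 0)      # (II) C = 0 box-arrow E = 0
```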

¹ The accounts of Hitchcock (2001), Halpern & Pearl (2001, 2005), Woodward (2003), Halpern (2008, 2016), and Weslake (forthcoming) all take this general form. The differences between them have to do with what makes the variable assignment O = o suitable.

² For the problem, see Hiddleston (2005) and Hall (2007); for a more careful statement of the solution in terms of default variable values, see Halpern (2008).


Figure 4: Symmetric Overdetermination. Figure 5: Short Circuit.

2.1 Model Variance

6. Ideally, a theory of singular causation would satisfy the following principle:

Model Invariance Given any two causal models, ℳ and ℳ†, which both contain the variables C and E, if both ℳ and ℳ† are correct, then C = c caused E = e in ℳ iff C = c caused E = e in ℳ†.

7. (a) In the canonical model of the neuron diagram from figure 4, ℳ₄, the counterfactual counterfactual account says that C's firing caused E to fire, since ℳ₄[A → 0] |= C = 1 ∧ E = 1, ℳ₄[A → 0] |= C = 0 □→ E = 0, and A = 0 is more default than the actual value A = 1.
(b) However, if we remove the exogenous variable A from the model, getting the model ℳ₄⁻ᴬ, which contains the equation E := ¬B ∨ C, then the counterfactual counterfactual account will say that C's firing didn't cause E to fire.

8. In general, if ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟) is a causal model with U ∈ 𝒰, then let ℳ⁻ᵁ be the model that you get by:
(a) Removing U from 𝒰
(b) Removing U's value from ⃗u
(c) Exogenizing any variables in 𝒱 whose only parent was U
(d) Replacing U with its value in every structural equation in ℰ
(e) Removing default information about U from 𝒟.

9. If every equation in ℳ⁻ᵁ is surjective, then say that U is an inessential variable. Then, we should endorse the following principle:

Exogenous Reduction If a causal model ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟) is correct, and U ∈ 𝒰 is inessential, then ℳ⁻ᵁ is also correct.

(a) In the model ℳ₄, A is an inessential exogenous variable, and the counterfactual counterfactual account tells us that C = 1 causes E = 1 in ℳ₄, but not in ℳ₄⁻ᴬ.
(b) So we cannot accept Exogenous Reduction, Model Invariance, and the counterfactual counterfactual account.

10. (a) In the canonical model of the neuron diagram from figure 5, ℳ₅, the counterfactual counterfactual account says that C's firing caused E to not fire. For ℳ₅[B → 1] |= C = 1 ∧ E = 0, ℳ₅[B → 1] |= C = 0 □→ E = 1, and B = 1 is as default as the actual value (because it is the actual value).
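The ℳ⁻ᵁ construction of item 8 can be sketched as follows. Since ℳ₄'s equations aren't written out in full above, the Figure 1 model stands in, with A as the removed exogenous variable; steps (c) and (e) are omitted for brevity. This is my own illustration, not the handout's.

```python
# Sketch of the M^{-U} reduction (illustrative): U leaves the exogenous
# vector (steps a-b) and its actual value is substituted into every
# structural equation that mentions it (step d). Run on Figure 1 with U = A.

def solve(exo, eqs):
    """Derive endogenous values by iterating the structural equations."""
    vals = dict(exo)
    while len(vals) < len(exo) + len(eqs):
        for var, eq in eqs.items():
            if var not in vals:
                try:
                    vals[var] = eq(vals)
                except KeyError:
                    pass
    return vals

def drop_exogenous(exo, eqs, u):
    """M^{-U}: remove u and bake its actual value into each equation."""
    u_val = exo[u]
    new_exo = {k: v for k, v in exo.items() if k != u}
    new_eqs = {var: (lambda eq: lambda vals: eq({**vals, u: u_val}))(eq)
               for var, eq in eqs.items()}
    return new_exo, new_eqs

exo = {"A": 1, "C": 1}
eqs = {
    "B": lambda v: int(v["A"] and not v["C"]),  # becomes B := not C once A = 1 is baked in
    "D": lambda v: v["C"],
    "E": lambda v: int(v["B"] or v["D"]),
}

vals = solve(*drop_exogenous(exo, eqs, "A"))    # A gone; B, D, E unchanged
```

The reduction leaves every remaining variable's actual value untouched, which is why it is natural to hold, with Exogenous Reduction, that the reduced model is still correct; the talk's point is that the counterfactual counterfactual verdicts may nonetheless change.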


𝒰: (A, C)   ⃗u: (1, 1)   𝒱: (B, E)   ℰ: { E := B ∨ C,  B := A ∧ ¬C }

Figure 6: Preemptive Overdetermination

(b) However, if we remove the endogenous variable B from the model—getting the model ℳ₅⁻ᴮ, which contains the equations E := C ∧ ¬D and D := C—then the counterfactual counterfactual account will say that C's firing didn't cause E to not fire.

11. In general, if ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟) is a causal model with V ∈ 𝒱, then let ℳ⁻ⱽ be the model that you get by:
(a) Leaving 𝒰 alone
(b) Leaving ⃗u alone
(c) Removing V from 𝒱
(d) Removing ϕ_V from ℰ, and replacing V with ϕ_V(PA(V)) wherever V appears on the right-hand side of an equation in ℰ³
(e) Removing default information about V from 𝒟.

12. (a) If V has a single parent and a single child, then say that V is an interpolated variable:

… P → V → C …

(b) If all the equations in ℳ⁻ⱽ are surjective, then say that V is inessential.
(c) Then, we should accept the following principle:

Endogenous Reduction If a causal model ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟) is correct, and V ∈ 𝒱 is an inessential, interpolated variable, then ℳ⁻ⱽ is also correct.

i. In the model ℳ₅, B is an inessential, interpolated variable, and the counterfactual counterfactual account tells us that C = 1 caused E = 0 in ℳ₅, but not in ℳ₅⁻ᴮ.
ii. So we cannot accept Endogenous Reduction, Model Invariance, and the counterfactual counterfactual account.

3 A Model-Invariant Theory of Causation

13. I will present a theory of causation in terms of structural equations models which is consistent with Endogenous Reduction, Exogenous Reduction, and Model Invariance.
(a) I'll build up the theory by progressing through some familiar cases from the literature.

3.1 Preemptive Overdetermination

14. (a) In the canonical model, ℳ₆, of Preemptive Overdetermination shown in figure 6, E = 1 does not counterfactually depend upon C = 1.

³ PA(E) are E's causal parents in the model—those variables which appear on the right-hand side of E's structural equation ϕ_E.



(b) However, if we just look at E's structural equation, E := B ∨ C, and B and C's actual values, then E = 1 does counterfactually depend upon C = 1. Call this submodel of ℳ₆ the local model at E.

15. In general, we can define the local model at E as follows.

Local Causal Model Given a causal model ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟), with E ∈ 𝒱, the local model at E, ⟨⟨E⟩⟩, is the causal model in which:
(a) The exogenous variables are just the parents of E, PA(E), in the original model ℳ;
(b) The exogenous variables PA(E) are assigned the values they take on in ℳ;
(c) The sole endogenous variable is E;
(d) The sole structural equation is E's structural equation in ℳ, ϕ_E; and
(e) The defaults for E and PA(E) are the same as in ℳ.

16. Say that E = e locally counterfactually depends upon C = c iff, in the local model at E, ⟨⟨E⟩⟩, there's some c*, e* such that

⟨⟨E⟩⟩ |= C = c* □→ E = e*

17. A (preliminary) proposal, then, is that either local or global counterfactual dependence suffices for causation.
(a) While this helps with the case of preemptive overdetermination in figure 6, it does nothing to help with the neuron diagram from figure 1.
(b) It would be nice to handle that case by appealing to the transitivity of causation.
(c) Unfortunately, there are a number of counterexamples to the transitivity of causation.

3.2 Counterexamples to Transitivity

18. Sometimes, we can trace out a sequence of causal relations and conclude that the event at the start of the chain caused the one at the end. If that's so, then I'll call the chain transitive.
(a) Lewis thought that causal chains were always transitive, but this has unpalatable consequences. Chris smokes, contracts cancer, undergoes chemo, and survives. The smoking causes the cancer; the cancer causes the chemo; and the chemo causes the survival—so Lewis is forced to say that the smoking causes the survival.
(b) The right thing to say is that causal chains are sometimes, but not always, transitive. The difficulty is working out just when.

19. The plan: I'll attempt to give conditions specifying when a directed path running from the variable V₁ to the variable V_N,

V₁ → V₂ → V₃ → ⋯ → V_N

permits the inference that V₁ = v₁ caused V_N = v_N. When it does, I'll call the path a transitive path.

20. One kind of counterexample to transitivity is illustrated by the neuron diagram in figure 7. C's firing caused B to fire weakly (rather than strongly); B's firing weakly (rather than not) caused E to fire. But C's firing didn't cause E to fire.⁴
(a) The solution: require that the contrasts in our causal chain match up.⁵


Figure 7: Tampering (cf. Paul & Hall 2013). The octagonal neurons can either fire weakly (light grey) or strongly (dark grey). If C fires, this diminishes the strength with which B fires. In figure 7(a), C's firing caused B to fire weakly. And B's firing weakly caused E to fire. But C's firing didn't cause E to fire.

Figure 8: In figure 8(a), C's failure to fire causes B to fire. B's firing causes E to fire. But C's failure to fire doesn't cause E to fire. In figure 8(b), C's firing causes D to fire. D's firing causes E to remain dormant. But C's firing does not cause E to remain dormant.

21. For two other counterexamples to transitivity, consider the neuron diagrams in figure 8.
(a) In both cases, either the start or the end of the causal chain involves a default variable value.
(b) This suggests the hypothesis: in order for a directed path to be a transitive path, the variable values at the start and end of that path must both be deviant.

22. If we suppose that right actions and good states of affairs are categorized as defaults,⁶ then this hypothesis allows us to diagnose a whole family of counterexamples to transitivity categorized by Lewis (2004) as follows:⁷
(a) The bad (good) guys are set to win, when the good (bad) guys make one final effort to save (ruin) the day.
(b) In response, the bad (good) guys rally.
(c) The bad (good) guys emerge victorious.
The good (bad) guys' last stand causes the bad (good) guys to rally. The bad (good) guys' rally causes their victory. But it's not the case that the good (bad) guys' last stand causes the bad (good) guys to win.
23. In general, this will be our account of when a directed path in a causal model is transitive:

Transitive Path In a causal model ℳ, a directed path running from V₁ to V_N,

V₁ → V₂ → V₃ → ⋯ → V_N

⁴ Cf. McDermott (1995)'s Dog Bite example and the counterexamples to transitivity discussed in Paul (2004).
⁵ Cf. Schaffer (2005).
⁶ See Hitchcock & Knobe (2009).
⁷ See, for instance, McDermott (1995)'s case Shock C.



Figure 9: Prevention. Figure 10: Omission.

The path V₁ → V₂ → ⋯ → V_N is a transitive path iff:
(a) For each variable Vᵢ along this path, there is a pair (vᵢ, vᵢ*) of Vᵢ's actual value vᵢ in ℳ and a contrast value vᵢ*,

(v₁, v₁*) → (v₂, v₂*) → (v₃, v₃*) → ⋯ → (v_N, v_N*)

such that: for all j between 1 and N − 1, Vⱼ's taking on the value vⱼ, rather than vⱼ*, caused Vⱼ₊₁ to take on the value vⱼ₊₁, rather than vⱼ₊₁*;
(b) Both V₁'s and V_N's actual values are deviant; and
(c) For any collider V_k along the path: V₁'s taking on the value v₁, rather than v₁*, caused V_k to take on the value v_k, rather than v_k*.⁸
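Conditions (a) and (b) of Transitive Path can be rendered schematically. In the sketch below (my own; all names hypothetical), the pairwise 4-place causal relation and the default/deviant classification are passed in as predicates, and the collider condition (c) is omitted, since spelling either out would reproduce the rest of the theory.

```python
# Schematic check of Transitive Path conditions (a) and (b); condition (c),
# on colliders, is deliberately left out of this sketch.

def is_transitive_path(pairs, caused, deviant):
    """`pairs` lists (V_i, v_i, v_i_star) along the path. `caused` is the
    4-place relation on adjacent links; `deviant` classifies values."""
    # (a) every adjacent contrastive link must itself be causal
    for (vj, aj, aj_s), (vk, ak, ak_s) in zip(pairs, pairs[1:]):
        if not caused(vj, aj, aj_s, vk, ak, ak_s):
            return False
    # (b) the first and last actual values on the path must be deviant
    return deviant(pairs[0][0], pairs[0][1]) and deviant(pairs[-1][0], pairs[-1][1])

# Toy predicates: grant every link, and treat 1 (firing) as deviant.
always = lambda *link: True
deviant = lambda V, v: v == 1

# A chain of firings (e.g. C -> D -> E in Figure 1) qualifies...
assert is_transitive_path([("C", 1, 0), ("D", 1, 0), ("E", 1, 0)], always, deviant)
# ...but a chain starting from a default value, as in figure 8(a), does not.
assert not is_transitive_path([("C", 0, 1), ("B", 1, 0), ("E", 1, 0)], always, deviant)
```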

3.3 Preemptive Prevention

24. So far, we've only looked at causal relations where both the cause and effect variables take on deviant values. But default variable values can also be causes and effects.
(a) Because counterfactual dependence suffices for causation, cases of prevention (figure 9) and omission (figure 10) involve default effects and causes, respectively.

25. When C = c and E = e were deviant variable values, we said that local counterfactual dependence was also sufficient for causation. Should we say the same thing when C = c or E = e is a default variable value?
(a) This question turns out to be closely related to cases of Preemptive Prevention like the one shown in figure 11(a). If we say that local counterfactual dependence suffices for causation, then, in the canonical model of the neuron diagram in figure 11(a), we will say that C's firing caused E to not fire.
(b) However, we would not be able to say the same thing about the neuron diagram in figure 11(b). For, in the canonical model of that neuron diagram, E = 0 does not locally counterfactually depend upon C = 1, since C isn't even in the local model at E. Moreover, since E's remaining dormant is a default state of that neuron, we would not be able to appeal to the transitivity condition to say that C's firing prevented E from firing.
(c) So, we should say that, in the case where the cause or effect variable value is default, local counterfactual dependence is not sufficient for causation, and therefore, in the cases of preemptive prevention shown in figure 11, C's firing doesn't prevent E from firing all by itself.⁹

⁸ I won't have the space to explain the reasons for this condition, but the curious should consult the case of Short Circuit from Hall (2007).
⁹ We can still say that the disjunction of A's firing and C's firing caused E to remain dormant.



Figure 11: Three cases of Preemptive Prevention

26. We can further support this treatment of Preemptive Prevention by noting that, if we want a model-invariant account of causation, then we are forced to say, in figure 11(c), that C's firing prevented E from firing iff D's firing also prevented E from firing.
(a) Beginning with the canonical causal model of figure 11(c), Exogenous Reduction allows us to remove the inessential exogenous variable A from our model. Then, Endogenous Reduction allows us to remove the inessential interpolated variable B. We end up with a causal model containing the sole structural equation E := ¬C ∧ ¬D. But this equation treats C and D symmetrically, and both C and D take on the same value. So, any account of causation will say that, in this model, C = 1 caused E = 0 iff D = 1 caused E = 0. Since D = 1 clearly did not cause E = 0, any model-invariant account of causation should say that C = 1 didn't cause E = 0 either.

3.4 The Model-Invariant Account in Summary

The Model-Invariant Theory of Causation In a causal model ℳ, C's taking on the value c, rather than c*, caused E to take on the value e, rather than e*, iff either (Def) or (Dev).

(Def) Either c or e is a default value, and, had C taken on the value c*, E would have taken on the value e*:
ℳ |= C = c* □→ E = e*

(Dev) Both c and e are deviant values, and either
i. Had C taken on the value c*, E would have taken on the value e*:
ℳ |= C = c* □→ E = e*
or
ii. In the local model ⟨⟨E⟩⟩, had C taken on the value c*, E would have taken on the value e*:
⟨⟨E⟩⟩ |= C = c* □→ E = e*
or
iii. In ℳ, there is a transitive path running from C to E.

27. This account is consistent with Model Invariance, Exogenous Reduction, and Endogenous Reduction. Suppose that we have a correct model ℳ = (𝒰, ⃗u, 𝒱, ℰ, 𝒟), with U ∈ 𝒰 and V ∈ 𝒱. And suppose that neither U nor V is C or E, U is inessential, and V is inessential and interpolated. Then:
(a) If C = c caused E = e in ℳ, then C = c caused E = e in ℳ⁻ᵁ;


(b) If C = c caused E = e in ℳ, then C = c caused E = e in ℳ⁻ⱽ;
(c) If C = c didn't cause E = e in ℳ, then C = c didn't cause E = e in ℳ⁻ᵁ; and
(d) If C = c didn't cause E = e in ℳ, then C = c didn't cause E = e in ℳ⁻ⱽ.
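The case structure of the theory can be compressed into a few lines. The sketch below is my own, not the handout's: it takes the global, local, and path conditions as precomputed booleans and only encodes the (Def)/(Dev) branching.

```python
# Decision skeleton of the Model-Invariant Theory (illustrative): the three
# dependence tests are assumed computed elsewhere, as in sections 2-3.2.

def caused(c_is_default, e_is_default, global_dep, local_dep, transitive_path):
    if c_is_default or e_is_default:
        # (Def): with a default cause or effect value, only outright
        # counterfactual dependence suffices.
        return global_dep
    # (Dev): with deviant values, global dependence, local dependence,
    # or a transitive path each suffices.
    return global_dep or local_dep or transitive_path

# Preemptive Prevention, figure 11(a): e is default and there is no global
# dependence, so local dependence alone doesn't make C = 1 a cause.
assert caused(False, True, global_dep=False, local_dep=True, transitive_path=False) is False
# Preemptive Overdetermination, figure 6: both values deviant, and local
# dependence suffices.
assert caused(False, False, global_dep=False, local_dep=True, transitive_path=False) is True
```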

References

Collins, John, Ned Hall & L. A. Paul, editors. 2004. Causation and Counterfactuals. The MIT Press, Cambridge, MA.

Hall, Ned. 2007. "Structural Equations and Causation." Philosophical Studies, vol. 132 (1): 109–136.

Halpern, Joseph Y. 2008. "Defaults and Normality in Causal Structures." Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning, 198–208.

—. 2016. Actual Causality. MIT Press, Cambridge, MA.

Halpern, Joseph Y. & Christopher Hitchcock. 2010. "Actual Causation and the Art of Modeling." In Heuristics, Probability and Causality: A Tribute to Judea Pearl, Rina Dechter, Hector Geffner & Joseph Y. Halpern, editors, 383–406. College Publications.

—. 2015. "Graded Causation and Defaults." The British Journal for the Philosophy of Science, vol. 66 (2): 413–457.

Halpern, Joseph Y. & Judea Pearl. 2001. "Causes and Explanations: A Structural-Model Approach. Part I: Causes." In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, John Breese & Daphne Koller, editors, 194–202. Morgan Kaufmann, San Francisco.

—. 2005. "Causes and Explanations: A Structural-Model Approach. Part I: Causes." The British Journal for the Philosophy of Science, vol. 56: 843–887.

Hiddleston, Eric. 2005. "Causal Powers." The British Journal for the Philosophy of Science, vol. 56: 27–59.

Hitchcock, Christopher. 2001. "The Intransitivity of Causation Revealed in Equations and Graphs." The Journal of Philosophy, vol. 98 (6): 273–299.

—. 2007. "Prevention, Preemption, and the Principle of Sufficient Reason." Philosophical Review, vol. 116 (4): 495–532.

Hitchcock, Christopher & Joshua Knobe. 2009. "Cause and Norm." Journal of Philosophy, vol. 106 (11): 587–612.

Lewis, David K. 2000. "Causation as Influence." The Journal of Philosophy, vol. 97 (4): 182–197. Reprinted in Collins et al. (2004, pp. 75–106).

—. 2004. "Causation as Influence." In Collins et al. (2004), chap. 3, 75–106.

McDermott, Michael. 1995. "Redundant Causation." The British Journal for the Philosophy of Science, vol. 46 (4): 523–544.

Paul, L. A. 2004. "Aspect Causation." In Collins et al. (2004).

Paul, L. A. & Ned Hall. 2013. Causation: A User's Guide. Oxford University Press, Oxford.

Schaffer, Jonathan. 2005. "Contrastive Causation." The Philosophical Review, vol. 114 (3): 297–328.

Weslake, Brad. forthcoming. "A Partial Theory of Actual Causation." The British Journal for the Philosophy of Science.

Woodward, James. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford University Press, Oxford.
