Formalization and Verification of Fault Tolerance and Security (PowerPoint PPT Presentation)



slide-1
SLIDE 1

1/35

Formalization and Verification of Fault Tolerance and Security

Felix Gärtner

TU Darmstadt, Germany (fcg@acm.org)

slide-2
SLIDE 2

2/35

Example: Space Shuttle

STS51 Discovery, http://spaceflight.nasa.gov/

slide-3
SLIDE 3

3/35

Fault-tolerant Operation [Spector and Gifford 1984]

  • Five redundant general purpose computers.
  • Four of them run the avionics software in parallel.
  • Majority vote of computation results.
  • “Fail-operational, fail-safe.”
  • Fifth computer runs backup system (written by separate contractor). Primary contractor: IBM.
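The majority-vote step can be sketched in a few lines. This is an illustrative sketch only (function name and replica encoding are mine, not the shuttle's actual avionics logic):

```python
from collections import Counter

def majority_vote(results):
    """Return the value computed by a strict majority of replicas,
    or None if no strict majority exists (the fail-safe outcome)."""
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

# Four replicas, one faulty: the majority still yields the correct result.
print(majority_vote([42, 42, 41, 42]))  # -> 42
```

With no strict majority (e.g. a 2-2 split) the sketch returns `None`, matching the "fail-safe" idea: better no result than a wrong one.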

slide-4
SLIDE 4

4/35

Critical Infrastructures

http://www.cs.virginia.edu/~survive

  • Critical infrastructures must be dependable (in this talk meaning fault-tolerant and secure).

slide-5
SLIDE 5

5/35

Personal Motivation

  • My . . .

  – background: fault tolerance, formal methods.
  – experience: formal methods help find bugs.
  – concern: need to formalize issues first (state what we mean).
  – claim: we know how to do this in fault tolerance, not so much in security.

slide-6
SLIDE 6

6/35

Overview

  • 1. Fault tolerance (60% of talk).
  • What does “fault tolerance” mean?
  • How can it be formalized and verified?
  • 2. Security (30%).
  • What does “security” mean and how can it be formalized???

slide-7
SLIDE 7

7/35

Informal View of Fault Tolerance

  • Definition: Maintain some form of correct behavior in the presence of faults.

  • Correct behavior: specification.
  • Faults:

  – memory perturbation (cosmic rays),
  – link failure (construction works),
  – node crash (power outage),
  – . . .

slide-8
SLIDE 8

8/35

Formal View of Fault Tolerance

  • System: state machine/event system with interface.
  • Specification: look at functional properties defined on individual executions of the system.

  • Safety properties: “always . . . ”.
  • Liveness properties: “eventually . . . ”.
  • Abstract away from real time.
slide-9
SLIDE 9

9/35

Safety and Liveness

  • Safety properties: observable in finite time.
  • Examples: mutual exclusion, partial correctness.
  • Liveness properties: violated only after infinite time.
  • Examples: starvation freedom, termination.
  • Safety and liveness are fundamental [Alpern and Schneider 1985; Gärtner 1999a].
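The difference in observability can be made concrete: a safety violation (here, mutual exclusion) is already visible in a finite prefix of an execution, whereas no finite prefix can refute a liveness property. A minimal sketch with a made-up state encoding:

```python
def violates_mutex(prefix):
    """Safety check on a FINITE prefix: mutual exclusion is already
    violated if two processes are ever in the critical section at once.
    Each state maps process name -> 1 if in the critical section."""
    return any(sum(state.values()) > 1 for state in prefix)

good = [{"p": 1, "q": 0}, {"p": 0, "q": 1}]
bad  = [{"p": 1, "q": 1}]
print(violates_mutex(good))  # -> False
print(violates_mutex(bad))   # -> True
```

No analogous finite check exists for, say, starvation freedom: a prefix in which `q` never entered the critical section may still be extended to a run in which it eventually does.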

slide-10
SLIDE 10

10/35

Faults . . .

  • can be modelled as unexpected events [Cristian 1985].
  • are tied to one level of abstraction [Liu and Joseph 1992].
  • Adding and “removing” state transitions is enough [Gärtner 2001a].
  • are formalized as a fault assumption.
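The idea that a fault assumption simply adds transitions to the system's state machine can be sketched as follows (the crash encoding is my own illustration, not the formalization from the cited papers):

```python
# Transitions of a tiny state machine, as (state, action, next_state) triples.
normal = {("up", "work", "up")}

def with_crash_faults(transitions, states):
    """Model a crash fault assumption by adding, for every state,
    a transition into a 'crashed' sink state with no outgoing actions."""
    return set(transitions) | {(s, "crash", "crashed") for s in states}

faulty = with_crash_faults(normal, {"up"})
print(("up", "crash", "crashed") in faulty)  # -> True
```

"Removing" transitions works dually: a global fault assumption restricts the enlarged transition relation again, e.g. by ruling out runs in which every node crashes.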
slide-11
SLIDE 11

11/35

Fault Tolerance Example

  • Network of workstations with point-to-point links.
  • Fault assumption: links and workstations can crash, but the network stays connected.

  • We want to do reliable broadcast.
  • Specification (desired properties):

  – A message which is delivered was previously broadcast (safety).
  – A broadcast message is eventually delivered on all surviving machines (liveness).
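On finite, complete runs both desired properties can be checked mechanically. A hedged sketch with an invented event encoding (events are `("broadcast", msg)` or `("deliver", process, msg)`; these names are mine):

```python
def safety_ok(trace):
    """Safety: any message delivered anywhere was previously broadcast."""
    sent = set()
    for event in trace:
        if event[0] == "broadcast":
            sent.add(event[1])
        elif event[0] == "deliver" and event[2] not in sent:
            return False
    return True

def liveness_ok(trace, survivors):
    """Liveness, checked on a finite COMPLETE run: every broadcast
    message reaches every surviving machine."""
    sent = {e[1] for e in trace if e[0] == "broadcast"}
    got = {(e[1], e[2]) for e in trace if e[0] == "deliver"}
    return all((p, m) in got for m in sent for p in survivors)

run = [("broadcast", "m1"), ("deliver", "p1", "m1"), ("deliver", "p2", "m1")]
print(safety_ok(run), liveness_ok(run, {"p1", "p2"}))  # -> True True
```

Note the asymmetry from the previous slide: `safety_ok` is meaningful on any prefix, while `liveness_ok` only makes sense once the run is regarded as complete.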

slide-12
SLIDE 12

12/35

Fault on one Level of Abstraction

  • System = composition of systems.

[Diagram: a system as the composition of subsystems; subsystems interact via interfaces]

slide-13
SLIDE 13

13/35

Local and Global Fault Assumptions

  • Local fault assumption: add behavior to fault regions.
  • Example: node crash allows processes to stop.
  • Global fault assumption: restrict behavior again.
  • Example: network stays connected.
slide-14
SLIDE 14

14/35

Fault Assumptions as Transformations [Gärtner 1998]

[Diagram: a program transformation maps system A in its ideal environment to a faulty system A′ (the fault assumption); a specification transformation maps the ideal problem specification S to a fault-tolerance specification S′]

slide-15
SLIDE 15

15/35

Verification

[Diagram: the same transformation picture, annotated with two correctness obligations: A satisfies S, and A′ satisfies S′]

slide-16
SLIDE 16

16/35

Usual Verification of Fault Tolerance

  • 1. Choose fault assumption.
  • 2. Weaken specification (if needed).
  • 3. Transform system.
  • 4. Verify system.
slide-17
SLIDE 17

17/35

Transformational Approach [Gärtner 1999b]

  • 1. Choose fault assumption.
  • 2. Weaken specification (if needed).
  • 3. Prove that original system satisfies specification.
  • 4. Transform system.
  • 5. Prove only items which have changed (use tools like VSE, PVS, . . . ).

slide-18
SLIDE 18

18/35

Potential of Re-Use

[Diagram: the transformation picture once more, highlighting that the specification and the correctness proof for A and S can be re-used for A′ and S′]

slide-19
SLIDE 19

19/35

Case Study [Mantel and Gärtner 2000]

  • Example: reliable broadcast.
  • Proved safety part using the industrial-strength verification tool VSE [Hutter et al. 1996].

  • Transformational approach applied.
  • Benefit: re-use of specification and proofs.
slide-20
SLIDE 20

20/35

Re-use of Specification

[Diagram: dependency graph of specification theories (Actions, ActionList, SafetyProperties, ProcessList, MessageSets, Messages, AdmissibleTraces, Broadcast, Traces, States, ChannelMatrix, ChannelList, UChannel, Processes, UpDownList, CrashActions, CrashActionList, CrashStates, CrashTraces, Properties, CrashSafetyTraces, CrashAdmissible, ReliableBroadcast); only the theories affected by the transformation need new proofs]
slide-21
SLIDE 21

21/35

Re-use of Proofs

[Diagram: proof structure B1, . . . , B5; the crash transformation replaces only B1 by B1′, so the proofs of B2 through B5 are re-used unchanged]

slide-22
SLIDE 22

22/35

Fault Tolerance Summary

  • We basically know how to deal with fault tolerance.
  • Formalizations and verification methods are quite mature.

  • Area has a solid formal foundation.
slide-23
SLIDE 23

23/35

Fault Tolerance and Security

  • Can research in security benefit from fault tolerance?

“Fault tolerance and security are instances of a more general class of property that constrains influence.” Franklin Webber, BBN (during SRDS2000 panel)

  • Example: tolerate malicious behavior by assuming Byzantine faults (like in ISS).

slide-24
SLIDE 24

24/35

Informal View of Security

  • Security is CIA [Laprie 1992]:

  – Confidentiality: non-occurrence of unauthorized disclosure of information.
  – Integrity: non-occurrence of inadequate information alterations.
  – Availability: readiness for usage.

  • Conjecture: Everything is CIA! [Cachin et al. 2000]
slide-25
SLIDE 25

25/35

Formal View of Security

  • Recall the concepts of safety and liveness (from fault tolerance).
  • We can model a lot of notions from security with these concepts, but not all.
  • Benefits:

  – Well understood formalisms.
  – Good proof methodologies and tool support.

slide-26
SLIDE 26

26/35

Safety and Liveness in Security

  • Access control is safety [Schneider 2000].
  • Aspects of confidentiality are safety [Gray, III. and McLean 1995].
  • Aspects of integrity are safety, e.g. “no unauthorized change of a variable”.
  • Aspects of availability are liveness, e.g. “eventual reply to a request”.

slide-27
SLIDE 27

27/35

Fair Exchange [Asokan et al. 1997]

  • Two participants A and B with electronic items.
  • How to exchange the items in a fair manner? Formally:

  – Effectiveness: if the exchange succeeds, then the items matched the expectations and both participants have behaved well (safety).
  – Termination: eventually the protocol will terminate with success or abort (liveness).
  – Fairness: in case of an unsuccessful exchange, nobody wins or loses something valuable.
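The fairness clause can be illustrated as a predicate on the two participants' final outcomes. This is a deliberately simplified sketch (the two-valued outcome encoding is mine, not from the cited formalization):

```python
def fair(outcome_a, outcome_b):
    """Fairness as symmetry of outcomes: either both participants
    obtained the other's item, or both ended with an abort.
    An asymmetric outcome means one side won at the other's expense."""
    return outcome_a == outcome_b

print(fair("got", "got"))      # -> True  (successful exchange)
print(fair("aborted", "aborted"))  # -> True  (unsuccessful but fair)
print(fair("got", "aborted"))  # -> False (unfair)
```

The interesting formal point, developed on the next slides, is that this symmetry is not expressible as a predicate on a single trace.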

slide-28
SLIDE 28

28/35

Formalizing Fair Exchange [Gärtner 2001b]

[Diagram: participants A and B; inputs: item (mA, mB), item description (iA, iB), malevolence; outputs: output item and success/abort indication (the ports are labelled dA, dB, sA, sB, eA, eB)]

Example traces: x, x, x, . . . ; Y, . . . , Y, X, X, . . . ; x, Y, . . . , x, Y, x, X, x, X, . . . ; z, z, z, . . . ; z, Y, . . . , z, Y, z, X, z, X, . . .

slide-29
SLIDE 29

29/35

Higher Level Properties

  • Consequence: restriction of information flow is neither safety nor liveness.
  • Property of the type: if trace x, X, x, X is possible, then trace z, X, z, X must be possible too.
  • Usually formalized as closure conditions on trace sets: σ ∈ S ⇒ f(σ) ∈ S.
  • Properties of properties, sets of sets of traces.
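For finite trace sets such a closure condition is directly checkable. A small sketch using the x/z, X/Y style of traces from the previous slide (the concrete substitution function is my own illustration):

```python
def closed_under(trace_set, f):
    """A possibilistic property holds iff the trace set S is closed
    under f: for every sigma in S, f(sigma) is also in S."""
    return all(f(sigma) in trace_set for sigma in trace_set)

def hide_low(trace):
    """Substitute low input x by z, leaving high events (X, Y) alone.
    Closure under this map means a low observer cannot distinguish runs."""
    return tuple("z" if e == "x" else e for e in trace)

S = {("x", "X"), ("z", "X"), ("x", "Y"), ("z", "Y")}
print(closed_under(S, hide_low))  # -> True
```

Dropping, say, `("z", "X")` from `S` breaks the closure: the x-trace with secret X would then be distinguishable, i.e. information would flow.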
slide-30
SLIDE 30

30/35

Original Approach

  • Non-interference [Goguen and Meseguer 1982].
  • Descendants with their own problems [McLean 1994]:

  – Generalized non-interference.
  – Restrictiveness.
  – Non-inference.
  – . . .

  • Possibilistic properties.
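The original non-interference idea can be sketched via a purge function that deletes high-level events from a trace: the low-observable output must not change. A simplified, untimed illustration with invented trace encodings (not the exact Goguen/Meseguer machine model):

```python
def purge(trace, high):
    """Remove all high-level events from a trace."""
    return tuple(e for e in trace if e not in high)

def noninterfering(traces, output, high):
    """High events do not interfere if the low-visible output of every
    trace equals the output of its purged (low-only) counterpart."""
    return all(output(t) == output(purge(t, high)) for t in traces)

HIGH = {"h"}
traces = [("l", "h", "l"), ("l", "l")]

def low_output(trace):
    return trace.count("l")  # this output function ignores high events

print(noninterfering(traces, low_output, HIGH))  # -> True
```

An output function that leaks, e.g. the total trace length, fails the check, since purging changes the length whenever a high event occurred.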
slide-31
SLIDE 31

31/35

Possibilistic Properties

  • Pure non-interference is too strong.
  • There is progress in weakening the definition to make it practical [Mantel 2000].

  • First results available [Focardi et al. 1997].
  • To be investigated: relation to other ways to specify security [Pfitzmann et al. 2000].

slide-32
SLIDE 32

32/35

Motivation Reminder

  • Formal methods are no silver bullet, but they help to find bugs in critical systems.

  • Starting point: formalization of central concepts.
  • We know how to do that in fault tolerance.
  • But fault tolerance seems “easy” compared to security.
  • Security defines a new class of properties.
slide-33
SLIDE 33

33/35

Historic Perspective

“The first wave of attacks is physical [e.g. cut wires]. But these problems we basically know how to solve.” → fault tolerance

“The second wave is syntactic [e.g. exploiting vulnerabilities]. We have a bad track record in protecting against syntactic attacks. But at least we know what the problem is.” → security models

Bruce Schneier (Inside Risks, Dec. 2000)

slide-34
SLIDE 34

34/35

Conclusions 1/2

  • We seem to have managed dealing with physical attacks.
  • Currently trying to cope with syntactic ones.
  • We need a thorough understanding of the concepts involved.
  • Formal methods can support rigorous analysis.
  • Formalization is the first step.
slide-35
SLIDE 35

35/35

Conclusions 2/2

  • We’ve come a long way in formal analysis.
  • Milestones: safety, liveness, (linear) temporal logic for modeling functional (trace set) properties.
  • Shifting to more difficult properties: security, possibilistic properties.
  • Open issue: Is this formalization adequate/useful?
  • What about semantic attacks (e.g. stock market hoaxes)?

slide-36
SLIDE 36

36/35

Acknowledgments

  • Slides produced using pdfLaTeX and Klaus Guntermann’s PPower4.

References

Alpern, B. and Schneider, F. B. 1985. Defining liveness. Information Processing Letters 21, 181–185.

Asokan, N., Schunter, M., and Waidner, M. 1997. Optimistic protocols for fair exchange. In T. Matsumoto Ed., 4th ACM Conference on Computer and Communications Security (Zurich, Switzerland, April 1997), pp. 8–17. ACM Press.

Cachin, C., Camenisch, J., Dacier, M., Deswarte, Y., Dobson, J., Horne, D., Kursawe, K., Laprie, J.-C., Lebraud, J.-C., Long, D., McCutcheon, T., Müller, J., Petzold, F., Pfitzmann, B., Powell, D., Randell, B., Schunter, M., Shoup, V., Veríssimo, P., Trouessin, G., Stroud, R. J., Waidner, M., and Welch, I. S. 2000. Reference model and use cases. Deliverable D1 of the MAFTIA project [MAFTIA].

Cristian, F. 1985. A rigorous approach to fault-tolerant programming. IEEE Transactions on Software Engineering 11, 1 (Jan.), 23–31.

Focardi, R., Ghelli, A., and Gorrieri, R. 1997. Using non interference for the analysis of security protocols. In Proceedings of DIMACS Workshop on Design and Formal Verification of Security Protocols (DIMACS Center, Rutgers University, Sept. 1997).

Gärtner, F. C. 1998. Specifications for fault tolerance: A comedy of failures. Technical Report TUD-BS-1998-03 (Oct.), Darmstadt University of Technology, Darmstadt, Germany.

Gärtner, F. C. 1999a. Fundamentals of fault-tolerant distributed computing in asynchronous environments. ACM Computing Surveys 31, 1 (March), 1–26.

Gärtner, F. C. 1999b. Transformational approaches to the specification and verification of fault-tolerant systems: Formal background and classification. Journal of Universal Computer Science (J.UCS) 5, 10 (Oct.), 668–692. Special Issue on Dependability Evaluation and Assessment.

Gärtner, F. C. 2001a. Formale Grundlagen der Fehlertoleranz in verteilten Systemen. Ph.D. thesis, Fachbereich Informatik, TU Darmstadt. Forthcoming.

Gärtner, F. C. 2001b. Formalizing fairness in electronic commerce using possibilistic security properties. Technical report, Darmstadt University of Technology, Department of Computer Science. To appear.

Goguen, J. A. and Meseguer, J. 1982. Security policies and security models. In Proceedings of the 1982 Symposium on Security and Privacy (SSP ’82) (Los Alamitos, Ca., USA, April 1982), pp. 11–20. IEEE Computer Society Press.

Gray, III., J. W. and McLean, J. 1995. Using temporal logic to specify and verify cryptographic protocols. In Proceedings of the Eighth Computer Security Foundations Workshop (CSFW ’95) (Washington - Brussels - Tokyo, June 1995), pp. 108–117. IEEE.

Hutter, D., Langenstein, B., Sengler, C., Siekmann, J. H., Stephan, W., and Wolpers, A. 1996. Verification support environment (VSE). High Integrity Systems 1, 6, 523–530.

Laprie, J.-C. Ed. 1992. Dependability: Basic Concepts and Terminology, Volume 5 of Dependable Computing and Fault-Tolerant Systems. Springer-Verlag.

Liu, Z. and Joseph, M. 1992. Transformation of programs for fault-tolerance. Formal Aspects of Computing 4, 5, 442–469.

MAFTIA. MAFTIA home – malicious- and accidental-fault tolerance for internet applications. Internet: http://www.newcastle.research.ec.org/maftia/.

Mantel, H. 2000. Possibilistic definitions of security – an assembly kit. In Proceedings of the 13th IEEE Computer Security Foundations Workshop (Cambridge, England, July 2000). IEEE Computer Society Press.

Mantel, H. and Gärtner, F. C. 2000. A case study in the mechanical verification of fault tolerance. Journal of Experimental & Theoretical Artificial Intelligence 12, 4 (Oct.). To appear.

McLean, J. 1994. Security models. In J. Marciniak Ed., Encyclopedia of Software Engineering. John Wiley & Sons.

Pfitzmann, B., Schunter, M., and Waidner, M. 2000. Secure reactive systems. Research Report RZ 3206 (#93252) (Feb.), IBM Research.

Schneider, F. B. 2000. Enforceable security policies. ACM Transactions on Information and System Security 3, 1 (Feb.), 30–50.

Spector, A. and Gifford, D. 1984. The space shuttle primary computer system. Communications of the ACM 27, 9, 874–900.

slide-41
SLIDE 41

41/35

Abstract

It is often argued that fault tolerance and security are similar properties and can be achieved by similar means. In this talk I will first give an overview of methods used to formalize fault tolerance, especially those aimed at verification and validation of fault-tolerant systems, and briefly present a case study in which these methods have been successfully applied. In the remaining part of the talk, I will sketch different ways in which security properties have been formalized and how experience from fault tolerance can help in the clarification of the issues involved. It turns out that while some aspects of security are in fact closely related to fault tolerance, other aspects (like confidentiality) are fundamentally different in nature. To initiate discussion, I will speculate on promising ways of dealing with these issues from a practitioner’s point of view.

slide-42
SLIDE 42

42/35

Appendix: Proof that Fairness is not a Trace Set Property

[Venn diagram with three labelled sets: F (assumed trace set, absence of info flow), A (set of traces, all unsuccessful exchanges are fair), and U (restriction of A to all traces that give away the item)]