Directed Model Checking (not only) for Timed Automata
Sebastian Kupferschmid March, 2010
Motivation
Embedded Systems
Omnipresent Safety relevant systems
Pentium bug Ariane 5
Errors can be extremely harmful Correct functioning is absolutely mandatory
Correct Systems
Every system state satisfies invariant
M, s0 ⊨ ∀□ϕ ⇒ the full state space must be explored
Erroneous Systems
Find error states fast Short error traces
M, s0 ⊨ ∃♦¬ϕ
Directed Model Checking
Combination of Artificial Intelligence and Model Checking Accelerate the search to error states with heuristic functions
Introduction
Timed Automata Directed Model Checking
Coming up with Heuristics in a Principled Way
Pattern Database Heuristics Pattern selection strategies
Summary
Empirical evaluation of several heuristics Literature
Syntax
Definition (Timed Automaton)
A timed automaton A is a tuple ⟨L, l0, E, X, V, Σ, I⟩, where L is a finite set of locations, l0 ∈ L is the initial location, E is a finite set of edges, X is a finite set of clocks, V is a finite set of integer variables, Σ is a set of synchronization symbols, and I assigns invariants to locations.
[Example automaton: locations s0, s1, s2; s1 and s2 carry the invariant x ≤ 1; edges are labeled with c?, x := 0, c?, x < 1, and x ≥ 1]
Semantics
States assign values to automata (locations), integer variables, and clocks
Transitions: discrete and delay transitions ⇒ an infinite transition system
A possible Behavior
[Figure: the example automaton together with the run s0 → s1 → s2 → s0; the plot shows the value of clock x over time]
Symbolic State Space
The Zone Graph
Finite and exact abstraction of the timed automaton semantics. A symbolic state corresponds to a set of states that share the same discrete part and whose clock values satisfy a conjunction of clock constraints (a zone).
[Zone graph of the example automaton, with symbolic states such as ⟨s0, x = 0⟩, ⟨s0, x ≥ 0⟩, ⟨s0, x < 1⟩, ⟨s1, x = 0⟩, ⟨s2, x < 1⟩, ⟨s0, x ≥ 1⟩, ⟨s1, x ≤ 1⟩, ⟨s2, x ≤ 1⟩, ⟨s0, x ≤ 1⟩]
Definition (Model Checking Task)
A model checking task T is a tuple ⟨M, ϕ⟩, where M = A1 ‖ · · · ‖ An is a system of timed automata and ϕ is an error formula.
Objective in DMC
Given: a model checking task T = ⟨M, ϕ⟩ with corresponding symbolic state space S(M) = ⟨S, s0, T⟩
Find: a sequence π = s0 −t1→ s1 −t2→ · · · −tn→ sn, where si ∈ S, each transition si −ti+1→ si+1 is in T, and sn ⊨ ϕ
Approach: informed search algorithm + heuristic function
Model Checking + Heuristic Search
Definition (heuristic function)
Let T = ⟨M, ϕ⟩ be a model checking task and let S(M) = ⟨S, s0, T⟩ be the state space of M. A heuristic function (or heuristic) is a function h : S → ℕ0 ∪ {∞}. The heuristic estimate h(s) for a state s ∈ S is supposed to estimate the distance from s to the nearest error state.
The General Idea
[Figure: states between init and error, each annotated with its distance estimate]
Definition (perfect heuristic)
Let T = ⟨M, ϕ⟩ and let S(M) = ⟨S, s0, T⟩. The perfect heuristic h∗ maps each state s ∈ S to the length of a shortest path from s to any error state.
Note: h∗(s) = ∞ iff no error state is reachable from s. A heuristic h is called
admissible if h(s) ≤ h∗(s) for all states s ∈ S
safe if h∗(s) = ∞ for all s ∈ S with h(s) = ∞
goal-aware if h(s) = 0 for all error states s ∈ S
consistent if h(s) ≤ h(s′) + 1 for all transitions s → s′ in T
function dmc(M, ϕ, h):
    open = priority queue containing s0, ordered by priority(·, h)
    closed = ∅
    while open ≠ ∅ do:
        s = open.getMinimum()
        if s ⊨ ϕ then:
            return True
        if s ∉ closed then:
            closed = closed ∪ {s}
            for each s′ ∈ succs(s) do:
                open.insert(s′)
    return False
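The search loop above can be sketched in Python. This is a minimal illustration, not the implementation used in the lecture's tools; the parameter names (`succs`, `priority`, `is_error`) are made up for the example. The same loop yields A* or greedy search depending on the priority function.

```python
import heapq
import itertools

def dmc(s0, succs, is_error, h, priority):
    """Best-first search: expand states in order of priority(depth, h)."""
    tie = itertools.count()  # tiebreaker so states themselves are never compared
    open_list = [(priority(0, h(s0)), next(tie), 0, s0)]
    closed = set()
    while open_list:
        _, _, depth, s = heapq.heappop(open_list)
        if is_error(s):
            return True
        if s not in closed:
            closed.add(s)
            for t in succs(s):
                heapq.heappush(open_list,
                               (priority(depth + 1, h(t)), next(tie), depth + 1, t))
    return False

astar  = lambda depth, hval: depth + hval  # admissible h => shortest error trace
greedy = lambda depth, hval: hval          # fewer expansions, no length guarantee
```

With an admissible heuristic, passing `astar` returns shortest error traces; `greedy` typically expands fewer states.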
A∗ Search
priority(s, h) = depth(s) + h(s). If h is admissible ⇒ shortest possible error traces. Often high memory consumption.
Greedy Search
priority(s, h) = h(s) Expands fewer states than A∗ in practice No guarantee on error trace length
Definition (Dominance)
Let h, h′ be two admissible heuristics. The heuristic h dominates h′ iff h(s) ≥ h′(s) for all s ∈ S.
Theorem
Let h, h′ be two admissible heuristics. If h dominates h′, then every state explored by A∗ with h is also explored by A∗ with h′.
Requirements for h
“The closer the better” It has to work well in practice
The heuristic has to be computed for every encountered state. Efficient = low-order polynomial in the size of T.
Based on the declarative description of T No user interaction
Hamming Distance Heuristic
The minimal number of variable values that have to be changed in order to turn s into an error state:
h(s) = min { #different_values(s, e) | e ∈ S, e ⊨ ϕ }
Intuition
The more similar a state is to an error state, the closer it is to an error state.
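As a concrete illustration, here is a minimal Python sketch of the Hamming distance heuristic, with states represented as dictionaries from variable names to values; in practice the error states would be derived from the error formula rather than enumerated.

```python
def hamming_h(s, error_states):
    """h(s) = min over error states e of the number of variables
    whose value differs between s and e."""
    return min(sum(1 for v in s if s[v] != e[v]) for e in error_states)
```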
What is wrong with the Hamming distance heuristic?
Quite uninformative: the range of heuristic values is small; typically, most successors have the same estimate.
Sensitive to reformulation: one can easily transform any MC task into an equivalent one where h(s) = 1 for all non-error states (how?).
Ignores almost all problem structure: heuristic values do not depend on the set of transitions!
⇒ We need a better, principled way of coming up with heuristics.
In this Lecture: Pattern Database Heuristics
State-of-the-art heuristics Based on abstractions Fully automatically generated No user interaction Applicable to a wide range of transition systems
The General Idea
Given
A model checking task T = ⟨M, ϕ⟩ with corresponding state space S(M) = ⟨S, s0, T⟩
A Generic Approach for Obtaining Heuristics
Select an overapproximation T α of T with T α = ⟨Mα, ϕα⟩ and S(Mα) = ⟨Sα, sα0, T α⟩
For every state s ∈ S encountered during the search: find a (shortest) error trace π in ⟨Sα, sα0, T α⟩ starting from the abstract state corresponding to s, and set h(s) = |π|.
The General Idea
[Figure: original transition system and its overapproximation; the concrete state s maps to the abstract state sα with abstract error distance h(s) = 2]
Prior to Search
Choose an abstraction α
For every abstract state sα ∈ S(Mα) = ⟨Sα, sα0, T α⟩:
Compute the abstract error distance distα(sα)
Store ⟨sα, distα(sα)⟩ in a lookup table (the pattern database)
During Search
Map state s to the corresponding abstract state sα. Heuristic value: h(s) = distα(sα)
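The two phases above can be sketched in Python. This is an illustrative sketch under the assumption that the abstract state space is small enough to enumerate; the error distances are computed by a backward breadth-first search from the abstract error states, and lookup during search is a single dictionary access via the abstraction mapping `alpha` (names are made up for the example).

```python
from collections import deque

def build_pdb(states, succ, is_error):
    """Prior to search: backward BFS from all abstract error states;
    the result maps each abstract state to its abstract error distance."""
    preds = {s: [] for s in states}        # invert the transition relation
    for s in states:
        for t in succ(s):
            preds[t].append(s)
    pdb = {s: float('inf') for s in states}
    queue = deque()
    for s in states:
        if is_error(s):
            pdb[s] = 0
            queue.append(s)
    while queue:
        t = queue.popleft()
        for s in preds[t]:
            if pdb[s] == float('inf'):
                pdb[s] = pdb[t] + 1
                queue.append(s)
    return pdb

def pdb_heuristic(pdb, alpha):
    """During search: map s to its abstract state and look up the distance."""
    return lambda s: pdb[alpha(s)]
```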
The Original State Space
The Trivial Abstraction
The Identity Abstraction
The Perfect Abstraction
Requirements for the Heuristic
Informativeness (quality) Has to work well in practice
Requirements for the Abstraction
Efficient to compute Not too many abstract states Succinct representation (memory requirement) Question: where is the sweet-spot?
Predicate Abstraction
Abstract state space defined by a set of selected predicates Use SAT or SMT to construct abstract state space Fine-grained
Variable Abstraction
Special case of predicate abstraction Ignores subset of the system’s variables Abstract model in same formalism (can be constructed with the same tool, often more efficient than general purpose SAT solvers)
What kind of pattern shall we use?
Definition (Pattern)
A pattern is a set of variables/predicates used to define an abstraction of the system.
In this Lecture
Cone-of-influence-based pattern selection Pattern selection using counterexamples Syntax-based pattern selection A local search approach
Pattern P
Subset of the variables that are used to define the system
Abstraction of M with respect to P = {P, y, c, g}
M = P ‖ Q
[Figure: automaton P with locations left, walk, right; edges labeled c!, g?, x := 0, and guard x > 2. Automaton Q with locations red, yellow (invariant y ≤ 1), green; edges labeled c?, y := 0, g!, and guard y ≥ 2]
Mα = Pα ‖ Qα
[Figure: the abstracted automata Pα and Qα; the clock x, not in the pattern, is dropped from Pα]
But: P = {P, y, c, g} does not induce an overapproximation! Why . . .
[Figure: automata P and Q and their abstractions Pα and Qα]
. . . because ⟨Pα = walk⟩ is not reachable in the abstraction: the synchronization partner is missing, so edges labeled c! can no longer fire.
Definition (closed pattern)
A pattern P is closed iff {b | ∃a ∈ P : a depends on b} ⊆ P
Consequences
Closed patterns ⇒ overapproximation. Overapproximation: all error paths are preserved. Overapproximation ⇒ admissible heuristics. Note: every pattern can be closed (by adding the missing variables).
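Closing a pattern is a simple fixpoint computation over the dependency relation. The following Python sketch assumes the relation is given as a dictionary from each variable to the variables it depends on (the representation is made up for the example).

```python
def close_pattern(pattern, depends_on):
    """Smallest closed superset of `pattern`:
    P is closed iff {b | exists a in P: a depends on b} is a subset of P."""
    closed, frontier = set(pattern), list(pattern)
    while frontier:
        a = frontier.pop()
        for b in depends_on.get(a, ()):
            if b not in closed:
                closed.add(b)
                frontier.append(b)
    return closed
```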
A COI-based Method
Given: a model checking task T = ⟨M, ϕ⟩ and a bound b ∈ ℕ0.
Return: P = ⋃ (i = 0 .. b) Pi
Where: P0 = vars(ϕ) and Pi+1 = {v | ∃v′ ∈ Pi : v can influence v′}
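The bounded COI computation above can be sketched in Python. The influence relation here is an assumed encoding of the example's dependency graph (variable → set of variables it can influence), chosen so that the result matches the slide's pattern; it is illustrative, not derived by any tool.

```python
def coi_pattern(error_vars, can_influence, b):
    """P0 = vars(phi); P_{i+1} = {v | v can influence some v' in P_i};
    the returned pattern is the union of P_0 .. P_b."""
    pattern = set(error_vars)
    layer = set(error_vars)
    for _ in range(b):
        layer = {v for v, targets in can_influence.items() if targets & layer}
        pattern |= layer
    return pattern

# assumed influence relation for the pedestrian/traffic-light example
can_influence = {'c': {'P'}, 'g': {'P'}, 'x': {'P'}, 'Q': {'c', 'g'}, 'y': {'Q'}}
```

With `error_vars = {'P'}` and `b = 1` this yields `{'P', 'c', 'g', 'x'}`; raising b to 2 additionally pulls in Q, illustrating the convergence towards the original system.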
Cone of Influence
Let T = ⟨M, ϕ⟩ with ϕ = (P = right) be a model checking task and let b = 1, M = P ‖ Q.
[Figure: automata P and Q as before]
[Figure: dependency graph over P, c, g, Q, y, x]
then P = {P, c, g, x}.
Concluding Comments
User interaction needed (bound b). Small values of b ⇒ uninformed heuristic. Larger values of b ⇒ quickly converges towards the original system. It can be difficult to select good values for b.
The Method
Use the monotonicity abstraction Compute abstract error trace for the abstract MC problem Relevant variables: all variables that occur in the declarative description of the transitions that are involved in the abstract error trace
Pattern P
Contains all relevant variables No user interaction
Adaptation of “Ignoring negative Effects”
Idea
Abstract variables are set-valued A variable, once it obtained a value keeps that value forever
Variables in the Abstraction
Concrete: v ∈ dom(v). Abstract: v+ ⊆ dom(v). An assignment v := w becomes v+ := v+ ∪ w+.
Clocks in the Abstraction
Clock constraints trivialize very fast ⇒ clocks are ignored in the abstraction.
Computing Abstract Error Traces
[Figure: automaton P with edges l1 → l2 (guard v ≤ 0, effect v := v + 1) and l1 → l3 (effect w := 3)]
Initial state: P = l1, v = 0, w = 0 Error formula: ϕ = (v = 2)
Computation of Abstract Error Traces
P+ = {l1}, v+ = {0}, w+ = {0}
apply l1 → l2 and l1 → l3:
P+ = {l1, l2, l3}, v+ = {0, 1}, w+ = {0, 3}
apply l1 → l2 again:
P+ = {l1, l2, l3}, v+ = {0, 1, 2}, w+ = {0, 3} ⇒ v = 2 is abstractly reachable
The transition l1 −w := 3→ l3 does not occur in the abstract error trace ⇒ w ∉ P
Where does it work, where not
Works well for modular systems with little interaction (many real-world applications). Problems with tightly interacting systems ⇒ degenerates towards the identity abstraction.
Abstract State Space
P = {p1, . . . , pn}: set of predicates that “talk” about the variables of the system
Abstract states b assign each p ∈ P a truth value (can be represented as bitstrings)
An abstract state b corresponds to the set of concrete states [b] = {s | s ⊨ b}
There is an abstract transition b → b′ iff there are s ∈ [b] and s′ ∈ [b′] such that s → s′ is a concrete transition
Pattern P
Set of predicates containing: All constraints that appear in guards or location invariants For each location: a location predicate
Example
[Figure: automata P and Q as before]
P = {P = left, P = walk, P = right, Q = red, Q = yellow, Q = green, x > 2, y ≤ 1, y ≥ 2}
Predicate Abstraction
Mapping Concrete to Abstract States
Let P = {p1, . . . , pn} be a set of predicates. For every p ∈ P, check whether s ⊨ p ⇒ abstract state sα. Looking up abstract states is straightforward.
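The mapping is a one-liner in Python. The predicates below are an illustrative subset for the example system (states again as dictionaries); the bitstring representation mirrors the slide.

```python
def to_abstract(s, predicates):
    """Evaluate every predicate on the concrete state s; the vector of
    truth values, encoded as a bitstring, is the abstract state."""
    return ''.join('1' if p(s) else '0' for p in predicates)

# illustrative predicates for the example system
predicates = [
    lambda s: s['P'] == 'left',
    lambda s: s['P'] == 'walk',
    lambda s: s['x'] > 2,
]
```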
Syntax-based Pattern Selection
Can induce large pattern databases (many predicates). This can be overcome by “splitting” the system into several independent parts: construct a PDB for each of these parts and combine (maximize or add) the heuristic values.
Local search in the pattern space
Given: the set of variables V used to define a system and a threshold for the maximum size of the PDB.
Start with pattern P = ∅.
While |P| < threshold do:
Select v ∈ V \ P such that P′ = P ∪ {v} is better than P′′ = P ∪ {w} for all w ≠ v
P = P′
[Figure: the pattern space as a lattice, from ∅ via singletons {vi} to larger patterns such as {vi, vj}]
Question: how can we measure the quality of a pattern?
Estimating a PDB heuristic’s quality
A possible quality measurement
Average heuristic value h̄ of the PDB heuristic h induced by the pattern. Intuition: if h dominates h′, then h̄ ≥ h̄′.
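Combining the local search with such a quality measure gives the following Python sketch. The `quality` and `pdb_size` callbacks are assumptions of the sketch (in practice, quality would be the average heuristic value of the PDB built for the candidate pattern, which is exactly the expensive part noted below).

```python
def select_pattern(variables, pdb_size, quality, max_size):
    """Greedy hill-climbing in pattern space: starting from the empty
    pattern, repeatedly add the single variable whose addition yields
    the best pattern, while the induced PDB stays within max_size."""
    pattern = frozenset()
    while True:
        candidates = [pattern | {v} for v in variables - pattern
                      if pdb_size(pattern | {v}) <= max_size]
        if not candidates:
            return pattern          # size threshold reached
        best = max(candidates, key=quality)
        if quality(best) <= quality(pattern):
            return pattern          # no improving neighbour
        pattern = best
```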
Problems
What if there are dead ends that h can detect? (how to cope with ∞?) For the evaluation the PDB has to be constructed (can be expensive)
Variable Abstraction
Symbolic state space S: a symbolic state s = ⟨d, Z⟩ consists of a discrete part d and a zone Z
Abstract state: sα = ⟨dα, Zα⟩
Bucket: all abstract states with equal discrete part, e.g. ⟨dA, Z1⟩, ⟨dA, Z2⟩, ⟨dA, Z3⟩
A state matches an abstract state s′ = ⟨dα, Z′⟩ if Z′ ∩ Zα ≠ ∅
h(s) = min { distα(s′) | s′ ∈ Sα, s′ ∩ sα ≠ ∅ }
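A simplified Python sketch of this lookup, with zones reduced to boxes (one interval per clock) instead of the DBM-based zones real tools use; the data layout is made up for the illustration.

```python
def zones_intersect(z1, z2):
    """Zones simplified to boxes: one (lo, hi) interval per clock.
    Two boxes intersect iff every pair of intervals overlaps."""
    return all(max(a_lo, b_lo) <= min(a_hi, b_hi)
               for (a_lo, a_hi), (b_lo, b_hi) in zip(z1, z2))

def zone_pdb_lookup(pdb, alpha_discrete, s):
    """h(s): minimum abstract error distance over all abstract states in
    the bucket of alpha(d) whose zone intersects Z."""
    d, z = s
    dists = [dist for (da, za), dist in pdb.items()
             if da == alpha_discrete(d) and zones_intersect(z, za)]
    return min(dists, default=float('inf'))
```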
So far, we have seen
Different abstractions Different approaches for pattern selection
But . . .
Experimental Setup
2.66 GHz Intel Xeon, memout at 4 GB Implemented either in Mcta or Uppaal/DMC
Single-tracked Line Segment (flawed version)
[Figure: track layout with components PLC, ES1, CS1, LS1, ES2, LS2, CS2]
A∗ Search
[Plots, A∗ search: explored states (10³–10⁸) and runtime in seconds (0.1–10⁴) for configurations C1–C9]
Edelkamp et al.: based on plain graph distance
Kupferschmid et al.: monotonicity abstraction
Dräger et al.: iteratively “merges” two automata by abstracting their cross product
Hoffmann et al.: PDB heuristic, syntax-based pattern selection
Qian et al.: PDB heuristic, COI-based pattern selection, user interaction
Kupferschmid et al.: PDB heuristic, CE-based pattern selection
Greedy Search
[Plots, greedy search: explored states (10³–10⁸), runtime in seconds (0.1–10⁴), and error trace length (10⁰–10⁷) for configurations C1–C9]
Edelkamp et al., Kupferschmid et al. (monotonicity abstraction), Dräger et al., Hoffmann et al., Qian et al., Kupferschmid et al. (CE-based pattern selection)
About this List
This list is meant to be focused, not comprehensive. Hence, it is a somewhat subjective mix of papers we consider important and relevant to the lecture’s topic. If a paper is not listed, there are many possible reasons:
We do not know it. We forgot it. We do not think it is (sufficiently) important. It overlaps considerably with another paper listed here. Its topic is not close enough to the focus of this lesson (e.g., papers on domain-dependent search).
◮ C. Han Yang and David L. Dill.
Validation with guided search of the state space. In Proc. Conference on Design Automation, pp. 599–604, 1998. First paper about MC + heuristic search
◮ Stefan Edelkamp, Alberto Lluch-Lafuente, and Stefan Leue.
Directed explicit model checking with HSF-SPIN. In Proc. SPIN 2001, pp. 57–79, 2001. Coined the term directed model checking
◮ Judea Pearl.
Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984. Discusses the foundations of heuristic search
◮ Joseph C. Culberson and Jonathan Schaeffer.
Pattern databases. Computational Intelligence, 14(3):318–334, 1998. First paper on pattern database heuristics
◮ Stefan Edelkamp.
Symbolic pattern databases in heuristic search planning. In Proc. AIPS 2002, pp. 274–283, 2002. Uses BDDs to store pattern databases more compactly.
◮ Kairong Qian and Albert Nymeyer.
Guided invariant model checking based on abstraction and symbolic pattern databases. In Proc. TACAS 2004, pp. 497–511, 2004. COI-based pattern selection
◮ Stefan Edelkamp.
Automated creation of pattern database search heuristics. In Proc. MOCHART 2006, pp. 35–50, 2007. First search-based pattern selection method.
◮ Jörg Hoffmann, Jan-Georg Smaus, Andrey Rybalchenko, Sebastian Kupferschmid, and Andreas Podelski.
Using predicate abstraction to generate heuristic functions in Uppaal. In Proc. MOCHART 2006, pp. 51–66, 2007. Uses predicate abstraction to generate PDB heuristics
◮ Sebastian Kupferschmid, Jörg Hoffmann, and Kim G. Larsen.
Fast directed model checking via Russian doll abstraction. In Proc. TACAS 2008, pp. 203–217, 2008. Introduces CE-based pattern selection
◮ Sebastian Kupferschmid, Jörg Hoffmann, Henning Dierks, and Gerd Behrmann.
Adapting an AI planning heuristic for directed model checking. In Proc. SPIN 2006, pp. 35–52, 2006. Introduces the monotonicity abstraction (for model checking)
◮ Klaus Dräger, Bernd Finkbeiner, and Andreas Podelski.
Directed model checking with distance-preserving abstractions. International Journal on Software Tools for Technology Transfer, 11(1):27–37, 2009. Introduces distance-preserving abstractions
◮ Martin Wehrle and Malte Helmert.
The causal graph revisited for directed model checking. In Proc. SAS 2009, pp. 86–101, 2009. Adapts the causal graph heuristic from AI planning
◮ Martin Wehrle, Sebastian Kupferschmid, and Andreas Podelski.
Transition-based directed model checking. In Proc. TACAS 2009, pp 186–200, 2009. General framework to accelerate heuristic search
◮ Sebastian Kupferschmid, Klaus Dräger, Jörg Hoffmann, Bernd Finkbeiner, Henning Dierks, Andreas Podelski, and Gerd Behrmann.
Uppaal/DMC – abstraction-based heuristics for directed model checking. In Proc. TACAS 2007, pp. 679–682, 2007. DMC extension of Uppaal
◮ Sebastian Kupferschmid, Martin Wehrle, Bernhard Nebel, and
Andreas Podelski. Faster than Uppaal? In Proc. CAV 2008, pp. 552–555, 2008. Open source directed model checker for timed automata