

SLIDE 1

Jordan Thayer (UNH) Optimistic Search – 1 / 45

Faster than Weighted A*: An Optimistic Approach to Bounded Suboptimal Search

Jordan Thayer and Wheeler Ruml {jtd7, ruml} at cs.unh.edu

SLIDE 2

Motivation

Introduction ■ Motivation Algorithm Overview 1: Greedy Phase 2: Cleanup Phase Empirical Evaluation Further Observations Conclusion


Finding optimal solutions is prohibitively expensive.

[Figure: grid pathfinding. Left: nodes generated vs. problem size. Right: solution cost relative to A* vs. problem size. Series: A*.]

SLIDE 3

Motivation

Finding optimal solutions is prohibitively expensive.

Greedy solutions can be arbitrarily bad.

[Figure: four-way grid pathfinding (unit cost). Left: nodes generated vs. problem size. Right: solution cost relative to A* vs. problem size. Series: A*, Greedy.]

SLIDE 4

Motivation

Finding optimal solutions is prohibitively expensive.

Greedy solutions can be arbitrarily bad.

Weighted A* bounds suboptimality.

[Figure: grid pathfinding. Left: nodes generated vs. problem size. Right: solution cost relative to A* vs. problem size. Series: A*, wA*, Greedy.]

SLIDE 5

Motivation

Finding optimal solutions is prohibitively expensive.

Greedy solutions can be arbitrarily bad.

Weighted A* bounds suboptimality.

Optimistic Search: faster search within the same bound.

[Figure: grid pathfinding. Left: nodes generated vs. problem size. Right: solution cost relative to A* vs. problem size. Series: A*, wA*, Optimistic, Greedy.]

SLIDE 6

Algorithm Overview

SLIDE 7

Talk Outline

Algorithm Overview

Run weighted A∗ with a weight higher than the bound. Expand additional nodes to prove solution quality.

The Greedy Search Phase

The Cleanup Phase

Empirical Evaluation

Further Observations

SLIDE 8

Previous Algorithms: A∗

A best-first search expanding nodes in f order:

f(n) = g(n) + h(n)

If h(n) is admissible, A∗ returns an optimal solution.

SLIDE 9

Previous Algorithms: Weighted A∗

A best-first search expanding nodes in f′ order:

f′(n) = g(n) + w · h(n)

Solution cost is bounded by w times optimal for admissible h(n).
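Not from the slides: a minimal sketch of the two searches just defined, on an explicit graph with a precomputed heuristic table (graph, heuristic values, and names are illustrative assumptions). With w = 1 the ordering is f and the search is A∗; with w > 1 it is weighted A∗.

```python
import heapq

def weighted_astar(graph, h, start, goal, w=1.0):
    """Best-first search ordered on f'(n) = g(n) + w * h(n).

    graph: {node: [(successor, edge_cost), ...]}; h: admissible heuristic.
    Returns (cost, path) or None if the goal is unreachable.
    """
    # Each open entry carries (f', g, node, path); ties break on lower g.
    open_list = [(w * h[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + w * h[succ], g2, succ, path + [succ]))
    return None

# Toy graph (hypothetical): optimal cost from s to g is 5 via s-b-g.
graph = {"s": [("a", 1), ("b", 4)], "a": [("g", 5)], "b": [("g", 1)]}
h = {"s": 3, "a": 4, "b": 1, "g": 0}
print(weighted_astar(graph, h, "s", "g", w=1.0))
```

With w = 1 this returns the optimal path; with a higher weight the returned cost may exceed optimal but, for admissible h, never by more than a factor of w.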

SLIDE 10

Optimistic Search: The Basic Idea

1. Run weighted A∗ with a high weight.
2. After a solution is found, expand the node with the lowest f value. Continue until w · f(fmin) ≥ f(sol).

This ‘cleanup’ guarantees solution quality.


SLIDE 14

1: Greedy Phase

SLIDE 15

Talk Outline

Algorithm Overview

The Greedy Search Phase

Weighted A∗ becomes faster as the bound grows. Weighted A∗ is often better than the bound.

The Cleanup Phase

Empirical Evaluation

Further Observations

SLIDE 16

Large Bounds, Faster Solution

wA∗ returns solutions faster as the bound increases.

[Figure: Pearl and Kim Hard. Node generations relative to A* vs. sub-optimality bound. Series: wA*.]

SLIDE 17

Weighted A∗ is often better than the bound

wA∗ returns solutions better than the bound.

[Figure: four-way grid pathfinding (unit cost). Solution cost relative to A* vs. sub-optimality bound, with y = x reference line. Series: wA*.]

SLIDE 18

2: Cleanup Phase

SLIDE 19

Talk Outline

Algorithm Overview

The Greedy Search Phase

The Cleanup Phase

Expand additional nodes in f order. Quit when the solution is provably within the bound.

Empirical Evaluation

Further Observations

SLIDE 20

Proving w-Admissibility

Let p be the deepest expanded node on an optimal path to the optimal goal opt, and let fmin be the open node with the smallest f value. Then:

f(fmin) ≤ f(p) ≤ f(opt)

so fmin provides a lower bound on the optimal solution cost. fmin is found with a priority queue sorted on f.

Optimistic Search: run a greedy search, then expand fmin until w · f(fmin) ≥ f(sol).
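Written out (my rendering in the slides’ notation, with h admissible and h of a goal equal to 0), the cleanup termination test chains directly into the suboptimality guarantee:

```latex
\begin{align*}
f(f_{\min}) \le f(p) &\le f(\mathit{opt}) = g(\mathit{opt})
  && \text{$p$ lies on an optimal path; $h(\mathit{opt}) = 0$} \\
g(\mathit{sol}) = f(\mathit{sol}) &\le w \cdot f(f_{\min})
  && \text{cleanup termination test} \\
g(\mathit{sol}) &\le w \cdot g(\mathit{opt})
  && \text{chaining the two lines: $w$-admissibility}
\end{align*}
```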

SLIDE 24

Empirical Evaluation

SLIDE 25

Talk Outline

Algorithm Overview

The Greedy Search

Guaranteeing solution quality

Empirical Evaluation

Results in several domains.

Further Observations

SLIDE 26

Empirical Evaluation

Sliding Tile Puzzles: Korf’s 100 15-puzzle instances (Korf, 1985)

Traveling Salesman: Unit Square; Pearl and Kim Hard (Pearl and Kim, 1982)

Grid-world pathfinding: four-way and eight-way movement; unit and life cost models; 25%, 30%, 35%, 40%, 45% obstacles

Temporal Planning: Blocksworld, Logistics, Rover, Satellite, Zenotravel

See paper for additional plots.

SLIDE 27

Performance of Optimistic Search

[Figure: Korf’s 15-puzzles, h = Manhattan distance. Node generations relative to IDA* vs. sub-optimality bound. Series: wA*, Optimistic.]

SLIDE 28

Performance of Optimistic Search

[Figure: TSP, Pearl and Kim Hard. Node generations relative to A* vs. sub-optimality bound. Series: wA*, Optimistic.]

SLIDE 29

Performance of Optimistic Search

[Figure: four-way grid pathfinding (unit cost). Nodes generated relative to A* vs. sub-optimality bound. Series: wA*, Optimistic.]

SLIDE 30

Performance of Optimistic Search

[Figure: logistics (problem 3). Nodes generated relative to A* vs. sub-optimality bound. Series: wA*, Optimistic Search.]

SLIDE 31

Further Observations

SLIDE 32

Talk Outline

Algorithm Overview

The Greedy Search

Guaranteeing solution quality

Empirical Evaluation

Further Observations Strict vs. Loose Expansion Policy Bounded Anytime Weighted A∗

SLIDE 33

Expansion Policy

Strict Expansion Order:

Algorithms like wA∗, A∗ε, and Dynamically Weighted A∗.

Every expanded node can be shown to be within the bound at the time of its expansion; the quality bound comes from this.

Loose Expansion Order:

Algorithms like Optimistic Search.

No restriction on the nodes expanded initially; the quality bound requires node expansion beyond the initial solution.

SLIDE 34

Bounded Anytime Weighted A∗

Anytime Heuristic Search: run weighted A∗ with a high weight and continue node expansions after a solution is found.


Bounded Anytime Weighted A∗: run weighted A∗ with a high weight and continue node expansions after a solution is found. Adding a second priority queue allows convergence to a bound instead of to the optimal solution.

SLIDE 36

Optimistic Search expansions

1. Run weighted A∗ with a high weight.
2. After a solution is found, expand the node with the lowest f value. Continue until w · f(fmin) ≥ f(sol).

This ‘cleanup’ guarantees solution quality.

SLIDE 37

Bounded Anytime Weighted A∗ Expansions

1. Run weighted A∗ with a high weight.
2. After a solution is found, expand the node with the lowest f′ value. Continue until w · f(fmin) ≥ f(sol).

This ‘cleanup’ guarantees solution quality.

SLIDE 38

Bounded Anytime Weighted A*

[Figure: Korf’s 15-puzzles. Node generations relative to IDA* vs. sub-optimality bound. Series: BAwA*, wA*, Optimistic.]

SLIDE 39

Bounded Anytime Weighted A*

[Figure: Pearl and Kim Hard. Node generations relative to A* vs. sub-optimality bound. Series: BAwA*, wA*, Optimistic.]

SLIDE 40

Conclusion

Optimistic Search:

Simple to implement.

Performance is predictable.

Current results are good; tuning could help. Optimal greediness is still an open question.

Consistently better than Weighted A∗. If you currently use wA∗, you should use Optimistic Search.

SLIDE 41

The University of New Hampshire

Tell your students to apply to grad school in CS at UNH!

friendly faculty

funding

individual attention

beautiful campus

low cost of living

easy access to Boston, White Mountains

strong in AI, infoviz, networking, systems, bioinformatics

SLIDE 42

Additional Slides

SLIDE 43

Weighted A∗ is often better than the bound

wA∗ returns solutions better than the bound.

[Figure: four-way grid pathfinding (unit cost). Solution cost relative to A* vs. sub-optimality bound, with y = x reference line. Series: wA*.]

SLIDE 44

Weighted A∗ Respects a Bound

f(n) = g(n) + h(n)
f′(n) = g(n) + w · h(n)

g(sol) = f′(sol) ≤ f′(p) = g(p) + w · h(p) ≤ w · (g(p) + h(p)) = w · f(p) ≤ w · f(opt) = w · g(opt)

Therefore, g(sol) ≤ w · g(opt).

SLIDE 45

Weighted A∗ Respects the Bound and Then Some

f(n) = g(n) + h(n)
f′(n) = g(n) + w · h(n)

g(sol) = f′(sol) ≤ f′(p) = g(p) + w · h(p) ≤ w · (g(p) + h(p)) = w · f(p) ≤ w · f(opt) = w · g(opt)

The bound is loose at the step g(p) + w · h(p) ≤ w · g(p) + w · h(p): whenever w > 1 and g(p) > 0, wA∗ gives away slack of (w − 1) · g(p).

SLIDE 46

Duplicate Dropping can be Important

[Figure: four-way grid pathfinding (unit cost). Nodes generated relative to A* vs. sub-optimality bound. Series: wA*, wA* dd.]

SLIDE 47

Sometimes it isn’t

[Figure: Korf’s 15-puzzles. Node generations relative to IDA* vs. sub-optimality bound. Series: wA* dd, wA*.]


Duplicates can be delayed during the greedy search phase.

SLIDE 49

Pseudo Code

Optimistic Search(initial, bound)

1. open_f′ ← {initial}
2. open_f ← {initial}
3. incumbent ← ∞
4. repeat until bound · f(first on open_f) ≥ f(incumbent):
5.   if f′(first on open_f′) < f(incumbent) then
6.     n ← remove first on open_f′
7.     remove n from open_f
8.   else n ← remove first on open_f
9.     remove n from open_f′
10.  add n to closed
11.  if n is a goal then
12.    incumbent ← n
13.  else for each child c of n
14.    if c is duplicated in open_f′ then
15.      if c is better than the duplicate then
16.        replace copies in open_f′ and open_f
17.    else if c is duplicated in closed then
18.      if c is better than the duplicate then
19.        add c to open_f′ and open_f
20.    else add c to open_f′ and open_f

(open_f′ is ordered on f′; open_f is ordered on f.)