
Chapter 2

Basic Problem-Solving Strategies

Christian Jacob

jacob@cpsc.ucalgary.ca

Department of Computer Science University of Calgary

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search


Analyzing and Exploring Problem Spaces

Problem space: the complete set of possible states, generated by exploring all possible steps, or moves, which may or may not lead from a given start state to a goal state.

What should we know for a preliminary analysis of a search problem? What are the givens? Do we have all of them?

– Are the givens specific to a particular situation?
– Can we generalize?
– Is there a notation that represents the givens and other states succinctly?


What should we know for a preliminary analysis of a search problem? (cont.) What is the goal?

– Is there a single goal, or are there several?
– If there is a single goal, can it be split into pieces?
– If there are several goals or subgoals, are they independent or are they connected?
– Are there any constraints on developing a solution?

Fox-Goose-Corn Problem

State Space Graph

(Figure: state-space graph. Each node lists the items on the left and right banks, with P = person, F = fox, G = goose, C = corn; the graph runs from the start state, with P F G C on the left bank and nothing on the right, to the goal state with everything on the right.)

Components of a State Space Graph

– Start: description with which to label the start node
– Operators: functions that transform one state to another, within the constraints of the search problem
– Goal condition: state description(s) that correspond(s) to goal state(s)
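These three components can be made concrete for the Fox-Goose-Corn problem. The sketch below is illustrative only (the state encoding and helper names are my own, not from the slides): a state records which bank each of person P, fox F, goose G, and corn C is on; operators ferry the person alone or with one item from the same bank; the goal condition is everything on the right bank. A simple breadth-first sweep (covered in 2.1.4) then finds the classic 7-crossing solution.

```python
from collections import deque

ITEMS = ("P", "F", "G", "C")  # person, fox, goose, corn

def safe(state):
    """Reject states where fox+goose or goose+corn share a bank unattended."""
    for a, b in (("F", "G"), ("G", "C")):
        if state[a] == state[b] != state["P"]:
            return False
    return True

def successors(state):
    """Operators: the person crosses alone or with one item from the same bank."""
    flip = {"L": "R", "R": "L"}
    for cargo in (None, "F", "G", "C"):
        if cargo is not None and state[cargo] != state["P"]:
            continue  # can only carry an item from the person's own bank
        new = dict(state)
        new["P"] = flip[new["P"]]
        if cargo is not None:
            new[cargo] = flip[new[cargo]]
        if safe(new):
            yield new

start = {item: "L" for item in ITEMS}  # start node: everything on the left bank
goal = {item: "R" for item in ITEMS}   # goal condition: everything on the right

def solve():
    """Breadth-first exploration of the state-space graph."""
    key = lambda s: tuple(s[i] for i in ITEMS)
    frontier = deque([(start, [start])])
    seen = {key(start)}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in successors(state):
            if key(nxt) not in seen:
                seen.add(key(nxt))
                frontier.append((nxt, path + [nxt]))
```

Calling `solve()` returns the shortest sequence of states, which takes 7 crossings.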

Blocks Rearrangement Problem

[Bratko, 2001]

Blocks Rearrangement Problem: State Space Graph

[Bratko, 2001]

Towers of Hanoi

A B C


Towers of Hanoi:

Problem Decomposition

(Figure: pegs A, B, C. Decomposition: move the tower of n−1 disks out of the way, move disk n to the target peg, then move the tower of n−1 disks onto it.)

Eight-Queens Problem

8 queens on a chess board, no queen attacking another. States: any arrangement of 0 to 8 queens on the board. Operators: add a queen to any square. 92 solutions, 12 unique up to symmetry.

Travelling Salesperson Problem

www.cacr.caltech.edu/~manu/tsp.html

Eight Puzzle Problem

(Figure: Start and Goal board configurations.) Start 1: 4 steps, Start 2: 5 steps, Start 3: 18 steps.

Eight Puzzle Problem

[Bratko, 2001]

Chess

[Newborn, 1997] [Kurzweil, 1990]

Anatoly Karpov and Garry Kasparov, 1986


Rubik’s Cube

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search

2.1.2 Depth-First Search

[Bratko, 2001]

Depth-First Search: Eight-Puzzle

[Nilsson, 1998]

Depth-First Search in Cyclic Graphs

Add cycle-detection!

[Bratko, 2001]
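Bratko presents DFS in Prolog; the following Python sketch (the example graph is invented for illustration) shows the same idea, including the cycle detection the slide calls for: a successor that is already on the current path is skipped.

```python
def depth_first(graph, start, goal, path=None):
    """Depth-first search that rejects any node already on the current
    path (cycle detection); returns a start-to-goal path or None."""
    if path is None:
        path = [start]
    if start == goal:
        return path
    for nxt in graph.get(start, []):
        if nxt not in path:  # cycle detection: never revisit a path node
            found = depth_first(graph, nxt, goal, path + [nxt])
            if found:
                return found
    return None

# A small cyclic graph (made up for illustration): note the a-b-a cycle.
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["d"], "d": ["e"]}
```

Without the `nxt not in path` test, the a-b-a cycle would recurse forever; with it, `depth_first(graph, "a", "e")` terminates.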

Depth First Search Evaluation

Good:

Since we don’t expand all nodes at a level, space complexity is modest: for branching factor b and depth m, only b·m nodes need to be stored in memory. The worst-case time complexity, however, is still O(b^m).

Bad:

• If you have deep search trees (or infinite ones, which is quite possible), DFS may end up running off to infinity and may not be able to recover.
• Thus DFS is neither optimal nor complete.

Depth-Limited Search

Modifies DFS to avoid its pitfalls.

Say that within a given area, we had to find the shortest path to visit 10 cities. If we start at one of the cities, then there are at least 9 other cities to visit. So 9 is the limit we impose.

Since we impose a limit, little changes from DFS, except that we now avoid searching down an infinite path. DLS is complete if the limit we impose is greater than or equal to the depth of our solution.

Depth-Limited Search:

Space and Time Complexity

DLS is O(b^l) in time, where l is the limit we impose. Space complexity is O(b·l). It is not optimal.

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search

2.1.3 Iterative Deepening Search

Depth bound = 1 2 3 4

[Nilsson, 1998]

Iterative Deepening Search:

Evaluation

We look at the bottom-most nodes once, the level above that twice, the level above that three times, and so on, up to the root. Therefore we get:

(d+1)·1 + d·b + (d−1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + 1·b^d

Like DFS, IDS is still O(b^d) in time, while space complexity is O(b·d).
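Iterative deepening is simply depth-limited search run in a loop with growing bounds. A minimal Python sketch (graph and names invented for illustration):

```python
def depth_limited(graph, node, goal, limit, path=None):
    """DFS that gives up below the depth limit; complete when
    limit >= the depth of the shallowest solution."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        if nxt not in path:  # cycle detection, as in plain DFS
            found = depth_limited(graph, nxt, goal, limit - 1, path + [nxt])
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Re-run depth-limited search with bound 0, 1, 2, ...; shallow
    levels are re-expanded each round, but b^d dominates the total cost."""
    for limit in range(max_depth + 1):
        found = depth_limited(graph, start, goal, limit)
        if found:
            return found
    return None

# Hypothetical example graph for a quick check.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["e"]}
```

The repeated re-expansion looks wasteful, but as the sum above shows, the deepest level is visited only once and dominates the cost.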

2.1 Basic Search Techniques

2.1.1 Introductory Concepts and Examples
2.1.2 Depth-First or Backtracking Search
2.1.3 Iterative Deepening Search
2.1.4 Breadth-First Search


2.1.4 Breadth-First Search

[Bratko, 2001]

Breadth-First Search: Eight-Puzzle

[Nilsson, 1998]

Breadth First Search

Time and space complexity:

• If we look at how BFS expands from the root, we see that it first expands a fixed number of nodes, say b.
• On the second level we expand b^2 nodes.
• On the third level we expand b^3 nodes.
• And so on, until it reaches b^d for some depth d.

1 + b + b^2 + b^3 + … + b^d, which is O(b^d).

Since all leaf nodes need to be stored in memory, space complexity is the same as time complexity.
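A minimal Python sketch of BFS (the example graph is invented): the FIFO queue expands nodes level by level, so the first path found has the fewest edges.

```python
from collections import deque

def breadth_first(graph, start, goal):
    """Expand nodes level by level with a FIFO queue; the first path
    that reaches the goal therefore has the fewest edges."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical graph: both a-b-d-e and the shorter a-c-e reach e.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "d": ["e"]}
```

Note that every frontier path is kept in the queue, which is exactly the O(b^d) space cost derived above.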

Bi-directional Search

Simultaneously search forward from the start node and backward from the goal(s). The search ends somewhere in the middle when the two frontiers touch. Time and space complexity: O(2·b^(d/2)) = O(b^(d/2)). It is complete and optimal.

Bi-directional Search

Problems:

• Do we know where the goal is?
• What if there is more than one possible goal state?
• We may be able to apply a multiple-state search, but this sounds a lot easier said than done. Example: how many checkmate states are there in chess?
• We may utilize many different methods of search, but which one is the right choice?

Uniform Cost Search

Uniform Cost Search is a modification of BFS. BFS returns a solution, but it may not be optimal.

UCS takes into account the cost of moving from one node to the next.
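A hedged Python sketch of UCS with a priority queue (the weighted graph is invented for illustration): the frontier is ordered by path cost g(n), so with non-negative step costs the first goal popped is the cheapest.

```python
import heapq

def uniform_cost(graph, start, goal):
    """Always expand the frontier node with the smallest path cost g(n);
    with non-negative step costs the first goal popped is optimal."""
    frontier = [(0, start, [start])]  # (g, node, path)
    best = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step in graph.get(node, []):
            if g + step < best.get(nxt, float("inf")):
                best[nxt] = g + step
                heapq.heappush(frontier, (g + step, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: the two-hop route a-b-d (1+1) beats a-d (5).
graph = {"a": [("b", 1), ("d", 5)], "b": [("d", 1)]}
```

BFS would return the one-edge path a-d here; UCS returns the cheaper a-b-d.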


Uniform Cost Search:

Example

[Russell & Norvig, 1995]

Categories of Search

Uninformed Search

We can distinguish the goal state(s) from the non-goal states, but the path and cost to reach the goal are unknown. Also known as blind search.

Informed Search

We know something about the nature of our path that might increase the effectiveness of our search. Generally superior to uninformed search.

Uninformed Search Strategies

We covered these uninformed strategies:

  • Depth First Search
  • Depth Limited Search
  • Iterative Deepening Search
  • Breadth First Search
  • Bidirectional Search
  • Uniform Cost Search

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

2.2 Best-First Heuristic Search

2.2.1 Greedy Search
2.2.2 Best-First Heuristic Search (A*)

– Routing Problem
– Best-First Search for Scheduling

2.2.1 Greedy Search

with Straight-Line Distance Heuristic

hSLD(n) = straight-line distance between n and the goal location

(Map figure: locations A through I connected by roads with distances 75, 118, 111, 140, 80, 97, 99, 101, 211.)

Straight-line distances to the goal:

State  h(n)
A      366
B      374
C      329
D      244
E      253
F      178
G      193
H       98
I        0



Greedy expansion from A (h = 366): successors B (h = 374), C (h = 329), E (h = 253); E has the smallest h. Expanding E: F (h = 178) is preferred over G (h = 193) and the way back to A (h = 366). Expanding F: I (h = 0), the goal.

Path found: A–E–F–I, total distance 431.

Optimality?

Cost(A–E–F–I) = 431 vs. Cost(A–E–G–H–I) = 418

Greedy search returns the 431 route, although a cheaper 418 route exists: it is not optimal.

Completeness

Greedy Search is incomplete, and its worst-case time complexity is O(b^m).
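This behaviour can be sketched in a few lines of Python. The graph, heuristic values, and names below are invented for illustration (none come from the slides): the frontier is ordered by h(n) alone, so the search is drawn toward whichever node looks closest to the goal and can return a needlessly expensive route.

```python
import heapq

def greedy_search(graph, h, start, goal):
    """Always expand the open node with the smallest heuristic h(n),
    ignoring the cost accumulated so far: fast, but not optimal."""
    frontier = [(h[start], start, [start], 0)]  # (h, node, path, cost)
    visited = set()
    while frontier:
        _, node, path, cost = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt], cost + step))
    return None

# Hypothetical map: h lures the search through f, but s-g2-t is cheaper.
graph = {"s": [("f", 10), ("g2", 12)], "f": [("t", 30)], "g2": [("t", 5)]}
h = {"s": 20, "f": 5, "g2": 8, "t": 0}
```

Here greedy returns the cost-40 route s-f-t even though s-g2-t costs only 17, mirroring the 431-vs-418 outcome in the routing example.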

Straight-line distance

(Figure: the heuristic h(n) illustrated on a small graph with nodes A, B, C, D and h-values 5, 7, 6, from a starting node to a target node.)

2.2.2 Best-First Heuristic Search (A*)

f(n) = g(n) + h(n), where g(n) is the cost of the path found so far and h(n) is a heuristic estimate of the remaining cost to a goal.

[Bratko, 2001]
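A minimal Python sketch of best-first search with f(n) = g(n) + h(n); the graph, heuristic values, and names are illustrative, not from the slides. With an admissible h (one that never overestimates), the first goal popped from the frontier is optimal.

```python
import heapq

def a_star(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n); with an
    admissible heuristic the goal found first is the cheapest."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

# Hypothetical map with an admissible h: f looks close but is expensive.
graph = {"s": [("f", 10), ("g2", 12)], "f": [("t", 30)], "g2": [("t", 5)]}
h = {"s": 15, "f": 5, "g2": 5, "t": 0}
```

On the same kind of lure that fools greedy search, A* may expand the tempting node but still returns the cheaper route, because g(n) keeps growing along the expensive path.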


Best-First Heuristic Search

Process 1 Process 2

Activate-Deactivate Mechanism:

Routing Example

f(n) = g(n) + h(n)

Expanding A (f = 0 + 366 = 366): B (f = 75 + 374 = 449), C (f = 118 + 329 = 447), E (f = 140 + 253 = 393); E has the smallest f.

Routing Example

Expanding E: back to A (f = 280 + 366 = 646), F (f = 239 + 178 = 417), G (f = 220 + 193 = 413); G has the smallest f.

Routing Example

Expanding G: H (f = 317 + 98 = 415), back to E (f = 300 + 253 = 553); H has the smallest f among the open nodes (415 vs. F’s 417).

Routing Example

Expanding H: I (f = 418 + 0 = 418), the goal.

Optimality: Yes!

Cost(A–E–F–I) = 431 vs. Cost(A–E–G–H–I) = 418

Best-first search with f(n) = g(n) + h(n) returns the cheaper 418 route A–E–G–H–I, which greedy search missed.


Best-First Search for Scheduling

Precedence

Solution 1 Solution 2

[Bratko, 2001]

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

2.3 Problem Decomposition and AND/OR Graphs

[Bratko, 2001]

AND/OR Graphs and Solution Trees

Solution Tree T:

– The original problem, P, is the root node of T.
– If P is an OR node, then exactly one of its successors (in the AND/OR graph), together with its own solution tree, is in T.
– If P is an AND node, then all of its successors (in the AND/OR graph), together with their solution trees, are in T.

OR AND

[Bratko, 2001]
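The definition of a solution tree translates directly into a recursive cost computation. A hedged sketch (the node encoding and costs are invented for illustration): an OR node costs as much as its cheapest successor’s solution tree, while an AND node must pay for the trees of all its successors.

```python
def cheapest_solution(node, graph, leaf_cost):
    """Cost of the cheapest solution tree rooted at `node`: an OR node
    picks its cheapest successor's tree; an AND node includes the trees
    of all successors."""
    kind, children = graph.get(node, ("leaf", []))
    if kind == "leaf":
        return leaf_cost[node]
    costs = [cheapest_solution(c, graph, leaf_cost) for c in children]
    return min(costs) if kind == "or" else sum(costs)

# Made-up AND/OR graph: solve P either via Q (= a AND b) or via c alone.
graph = {"P": ("or", ["Q", "c"]), "Q": ("and", ["a", "b"])}
leaf_cost = {"a": 2, "b": 3, "c": 6}
```

Here the AND branch a + b (cost 5) beats the single alternative c (cost 6), so the cheapest solution tree for P costs 5.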

Solution Trees: Example

[Bratko, 2001]

Route Problem: Cheapest Solution Tree

[Bratko, 2001]


2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games

Two-Person Game AND/OR Graph

[Bratko, 2001]

Tic Tac Toe Example

[Russell & Norvig, 1995]

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games

  • MiniMax Strategy
  • Pruning

Minimax Strategy for Game Playing

(Figure: minimax tree with leaf utilities 3, 12, 8 / 2, 4, 6 / 14, 5, 2; the three MIN nodes take the values 3, 2, 2, so the MAX root takes the value 3.)

[Russell & Norvig, 1995]

Minimax Strategy

This algorithm is only good for games with a low branching factor. In general, the complexity is O(b^d), where:

b = average branching factor
d = number of plies

Chess on average has:

• 35 branches and
• usually at least 100 moves,
• so the game space is about 35^100.

Is this a realistic game space to search? (1000 positions/sec., 150 seconds/move --> 4 ply look-ahead)


Minimax Chess Tree

Minimax Search in Chess: How to Judge Quality

– Evaluation functions must agree with the utility function on the terminal states (evaluation of the board configuration).
– They must not take too long to evaluate (trade-off between accuracy and time cost).
– They should reflect the actual chance of winning: use the probability of winning as the value to return.
– One has to design a heuristic value for any given position of any object in the game.

Examples: Chess and Othello

– Weighted linear functions:

w1·f1 + w2·f2 + … + wn·fn

wi: weight of feature fi
fi: features of a particular position

• Chess: material value; each piece on the board is associated with a value (Pawn = 1, Knight = 3, etc.)
• Othello: values given to the number of discs of a certain color on the board and the number of discs that will be converted
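As a sketch, such a weighted linear function is a one-liner; the features and weights below are invented for illustration (a material-difference feature using the standard piece values, plus a made-up mobility term):

```python
def evaluate(position, weights):
    """Weighted linear evaluation: w1*f1 + w2*f2 + ... + wn*fn, where
    each feature f_i is a function of the position."""
    return sum(w * f(position) for w, f in weights)

# Standard chess material values (P, N, B, R, Q).
material = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
weights = [
    (1.0, lambda pos: sum(material[p] for p in pos["mine"])
                    - sum(material[p] for p in pos["theirs"])),
    (0.1, lambda pos: pos["mobility"]),  # hypothetical mobility feature
]

# Made-up position: queen and two pawns vs. rook and knight, 20 legal moves.
position = {"mine": ["Q", "P", "P"], "theirs": ["R", "N"], "mobility": 20}
```

Tuning the weights wi is the hard part in practice; the linear form only fixes how the features are combined.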

Chess Tree

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games

  • MiniMax Strategy
  • Pruning

Pruning the Search Tree: Speedup Search

What is pruning?

– The process of eliminating a branch of the search tree from consideration without examining it.

Why prune?

– To eliminate searching nodes that cannot affect the result.
– To speed up the search process. Chess: 1000 positions/sec., 150 seconds/move --> 4 ply look-ahead

Alpha-beta pruning returns the same choice as minimax would, but examines fewer nodes by cutting off branches that cannot influence the final decision.

Pruning : Speedup Search

(Figure: the same minimax tree, leaf utilities 3, 12, 8 / 2, 4, 6 / 14, 5, 2.)

A2 is worth at most 2 to MAX.

[Russell & Norvig, 1995]

Alpha-Beta Pruning

Gets its name from the two variables that are passed along during the search, which restrict the set of possible solutions. Alpha: the value of the best choice (highest value) so far along the path for MAX. Beta: the value of the best choice (lowest value) so far along the path for MIN.
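A hedged Python sketch of minimax with alpha-beta cutoffs, on the same nested-list tree representation as before (names and representation are illustrative):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta cutoffs: alpha is the best value MAX can
    force so far, beta the best for MIN; once alpha >= beta, the node's
    remaining children cannot affect the result and are pruned."""
    if not isinstance(node, list):
        return node  # leaf utility
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value

# Same example tree: after the middle branch's first leaf (2), alpha = 3
# already exceeds beta = 2, so the leaves 4 and 6 are never examined.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

The result is identical to plain minimax (root value 3); only the number of examined nodes shrinks.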

Alpha-Beta Pruning Example

(Step-by-step animation on a MIN/MAX tree whose examined leaf utilities are 8, 3, 2, and 14. Every node starts with α = − ∞, β = + ∞. As each leaf utility is returned, α bounds are raised at MAX nodes and β bounds are lowered at MIN nodes, and the current bounds are passed down to each newly visited successor; as soon as α ≥ β holds at a node, its remaining successors are pruned without being examined.)

Alpha-Beta Search in Chess

2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games

  • MiniMax Strategy
  • Pruning
  • Chance

Give Chance a Chance

[Russell & Norvig, 1995]

Chance Nodes: Example

(Figure: expectiminimax tree with MAX, chance, MIN, MAX, and chance levels; chance nodes weight their successors by probabilities 0.6 and 0.4, producing expected values such as 3.6, 3.0, 5.8, and 4.4 from the leaf utilities, and a value of 3.56 near the root.)
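A chance node simply averages. Below is a hedged sketch of expectiminimax evaluation; the tree shape and leaf values are invented, reusing the 0.6/0.4 probabilities from the figure:

```python
def expectiminimax(node):
    """Evaluate a game tree with chance nodes: MAX/MIN pick extremes,
    while a chance node returns the probability-weighted average of
    its children's values."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "max":
        return max(expectiminimax(c) for c in node[1])
    if kind == "min":
        return min(expectiminimax(c) for c in node[1])
    # chance node: children are (probability, subtree) pairs
    return sum(p * expectiminimax(c) for p, c in node[1])

# Made-up dice-style position: MAX chooses between two chance nodes
# with expected values 0.6*2 + 0.4*5 = 3.2 and 0.6*4 + 0.4*1 = 2.8.
tree = ("max", [
    ("chance", [(0.6, ("leaf", 2)), (0.4, ("leaf", 5))]),
    ("chance", [(0.6, ("leaf", 4)), (0.4, ("leaf", 1))]),
])
```

Note that, unlike minimax, the result depends on the absolute scale of the utilities, not just their ordering, because of the averaging step.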


2. Basic Problem-Solving Strategies

2.1 Basic search techniques
2.2 Best-first heuristic search
2.3 Problem decomposition and AND/OR graphs
2.4 Searching in Games
2.5 Efficient Searching (SMA*)

SMA* — Simplified Memory-Bounded A*

– SMA* will utilize whatever memory is made available to it.
– SMA* avoids repeated states as far as its memory allows.
– SMA* is complete if the available memory is sufficient to store the shallowest solution path.
– SMA* is optimal if enough memory is available to store the shallowest optimal solution path.

SMA* in Action

(Figure: search tree rooted at A, with successors B and G; the leaves D, C, E, F, H, I, J, K carry f-costs 25, 20, 35, 30, 18, 24, 19, 29. Each node's label shows its current f-cost.)

Objective: find the lowest-cost goal node.
A: root node; D, F, I, J: goal nodes.
Constraint: a maximum of 3 nodes may be held in memory.

Step by step:

• Expand A (f = 12), then B (f = 15), then G (f = 13); the f-cost of A is updated to 13.
• Memory is full: drop the higher-f-cost leaf (B), and let A memorize B.
• Expand G to H. Memory is full again; H is not a goal node, so H is marked as infinite.
• Drop H and add I; G memorizes H. Update the f-cost of G (to 24) and of A (to 15).
• I is a goal node (f = 24), but it may not be the best solution, so B (f = 15) is worth trying a second time.
• Drop G and add C; A memorizes G. C is not a goal node, so C is marked as infinite.
• Drop C and add D; B memorizes C. D is a goal node (f = 20): terminate.

What about this node?

References

• Bratko, I. (2001). Prolog Programming for Artificial Intelligence. New York: Addison-Wesley.
• Kurzweil, R. (1990). The Age of Intelligent Machines. Cambridge, MA: MIT Press.
• Newborn, M. (1997). Kasparov versus Deep Blue. Berlin: Springer-Verlag.
• Nilsson, N. (1998). Artificial Intelligence: A New Synthesis. San Francisco, CA: Morgan Kaufmann.
• Russell, S., and Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.