Informed Search
Philipp Koehn 25 February 2020
Heuristic

From Wikipedia: any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for reaching an immediate goal.
Topics covered include:
– hill-climbing
– simulated annealing
– genetic algorithms (briefly)
– local search in continuous spaces (very briefly)
function TREE-SEARCH(problem, fringe) returns a solution, or failure
    fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
    loop do
        if fringe is empty then return failure
        node ← REMOVE-FRONT(fringe)
        if GOAL-TEST[problem] applied to STATE(node) succeeds return node
        fringe ← INSERTALL(EXPAND(node, problem), fringe)
Best-first search: use an evaluation function for each node
– estimate of "desirability"
⇒ expand the most desirable unexpanded node

Implementation: fringe is a queue sorted in decreasing order of desirability

Special cases:
– greedy search
– A∗ search
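A minimal Python sketch of best-first tree search, assuming a problem object with initial_state, is_goal(state), and successors(state) yielding (action, next_state, step_cost) triples; these names are illustrative, not from the slides. The fringe is a priority queue ordered by an evaluation function f, where lower f means more desirable.

import heapq
import itertools

def best_first_search(problem, f):
    """Expand the most desirable unexpanded node first (lowest f-value)."""
    counter = itertools.count()              # tie-breaker so heapq never compares nodes
    start = (problem.initial_state, [], 0)   # node = (state, path of actions, path cost)
    fringe = [(f(*start), next(counter), start)]
    while fringe:                            # empty fringe means failure
        _, _, (state, path, cost) = heapq.heappop(fringe)
        if problem.is_goal(state):
            return path, cost
        for action, next_state, step_cost in problem.successors(state):
            child = (next_state, path + [action], cost + step_cost)
            heapq.heappush(fringe, (f(*child), next(counter), child))
    return None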
Greedy search: evaluation function h(n) (heuristic) = estimate of cost from n to the closest goal; greedy search expands the node that appears to be closest to a goal.
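Using the best-first sketch above, greedy search orders the fringe by the heuristic alone; here h_sld is a hypothetical straight-line-distance lookup table introduced only for illustration.

# greedy search: f(n) = h(n), ignoring the cost accrued so far
result = best_first_search(problem, f=lambda state, path, cost: h_sld[state])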
Properties of greedy search: it is not complete in general, since it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ...; it is complete in a finite space with repeated-state checking.
A∗ search: evaluation function f(n) = g(n) + h(n)
– g(n) = cost so far to reach n
– h(n) = estimated cost to goal from n
– f(n) = estimated total cost of path through n to goal

A∗ uses an admissible heuristic:
– i.e., h(n) ≤ h∗(n), where h∗(n) is the true cost from n
– also require h(n) ≥ 0, so h(G) = 0 for any goal G
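Continuing the best-first sketch above, A∗ orders the fringe by g + h; here h stands for any admissible heuristic function over states, an illustrative name rather than something defined in the slides.

# A* search: f(n) = g(n) + h(n)
result = best_first_search(problem, f=lambda state, path, cost: cost + h(state))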
Optimality of A∗ (proof sketch): suppose a suboptimal goal G2 has been generated and is on the fringe, and let n be an unexpanded node on a shortest path to an optimal goal G. Then

    f(G2) = g(G2)   since h(G2) = 0
          > g(G)    since G2 is suboptimal
          ≥ f(n)    since h is admissible

Since f(G2) > f(n), A∗ will never select G2 for expansion.
Properties of A∗:
– Complete? Yes, unless there are infinitely many nodes with f ≤ f(G)
– A∗ expands all nodes with f(n) < C∗
– A∗ expands some nodes with f(n) = C∗
– A∗ expands no nodes with f(n) > C∗ (where C∗ is the optimal solution cost)
Admissible heuristics for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile)
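A small sketch of both heuristics in Python, assuming a state is a 9-tuple listing the tile on each square in row-major order, with 0 for the blank, and the goal layout (0, 1, ..., 8); this representation is an assumption for illustration, not from the slides.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout, 0 = blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total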
If h2(n) ≥ h1(n) for all n (both admissible)
→ h2 dominates h1 and is better for search

Typical search costs (nodes expanded):
– d = 14: IDS = 3,473,941 nodes; A∗(h1) = 539 nodes; A∗(h2) = 113 nodes
– d = 24: IDS ≈ 54,000,000,000 nodes; A∗(h1) = 39,135 nodes; A∗(h2) = 1,641 nodes

Given any two admissible heuristics ha and hb, h(n) = max(ha(n), hb(n)) is also admissible and dominates both ha and hb.
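Combining heuristics this way is a one-liner; a sketch assuming ha and hb are admissible heuristic functions over states (illustrative names, not from the slides):

def h_combined(state):
    # the max of admissible heuristics is admissible and dominates each of them
    return max(ha(state), hb(state))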
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem:
– if the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution
– if the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution

Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.
Another relaxation example: for the travelling salesperson problem, the minimum spanning tree over the cities
– can be computed in O(n²)
– is a lower bound on the shortest (open) tour
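A minimal sketch of this lower bound, assuming cities are given as a list of (x, y) coordinates with Euclidean edge costs (an illustrative setup, not from the slides); Prim's algorithm on the complete graph runs in O(n²):

import math

def mst_cost(cities):
    """Total edge weight of a minimum spanning tree over the cities (Prim, O(n^2))."""
    n = len(cities)
    if n <= 1:
        return 0.0
    in_tree = [False] * n
    best = [math.inf] * n    # cheapest known edge connecting each city to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], math.dist(cities[u], cities[v]))
    return total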
Summary:
– Greedy search expands the node with the lowest h(n): incomplete and not always optimal
– A∗ search expands the node with the lowest f(n) = g(n) + h(n): complete and optimal, and also optimally efficient (up to tie-breaks, for forward search)
Iterative improvement: in many optimization problems the path is irrelevant; the goal state itself is the solution.
– find optimal configuration, e.g., TSP
– find configuration satisfying constraints, e.g., timetable
→ keep a single "current" state and try to improve it
This approach scales to very large n, e.g., n = 1 million.
function HILL-CLIMBING(problem) returns a state that is a local maximum
    inputs: problem, a problem
    local variables: current, a node
                     neighbor, a node
    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbor ← a highest-valued successor of current
        if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
        current ← neighbor
    end
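The same procedure as a minimal Python sketch, assuming a problem object with initial_state, successors(state) yielding neighboring states, and value(state) returning the objective to maximize (illustrative names, not from the slides):

def hill_climbing(problem):
    """Move to the best neighbor until no neighbor improves the value."""
    current = problem.initial_state
    while True:
        neighbors = list(problem.successors(current))
        if not neighbors:
            return current
        best = max(neighbors, key=problem.value)
        if problem.value(best) <= problem.value(current):
            return current            # local maximum, possibly not the global one
        current = best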
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
    inputs: problem, a problem
            schedule, a mapping from time to "temperature"
    local variables: current, a node
                     next, a node
                     T, a "temperature" controlling prob. of downward steps
    current ← MAKE-NODE(INITIAL-STATE[problem])
    for t ← 1 to ∞ do
        T ← schedule[t]
        if T = 0 then return current
        next ← a randomly selected successor of current
        ∆E ← VALUE[next] − VALUE[current]
        if ∆E > 0 then current ← next
        else current ← next only with probability e^(∆E/T)
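A minimal Python sketch under the same assumptions as the hill-climbing example (initial_state, successors(state) yielding neighboring states, value(state) to maximize); the geometric cooling schedule stands in for schedule[t] purely for illustration.

import math
import random

def simulated_annealing(problem, t0=1.0, cooling=0.995, t_min=1e-3):
    """Always accept uphill moves; accept downhill moves with probability e^(dE/T)."""
    current = problem.initial_state
    T = t0
    while T > t_min:                      # T reaching (near) zero ends the run
        neighbors = list(problem.successors(current))
        if not neighbors:
            break
        candidate = random.choice(neighbors)
        dE = problem.value(candidate) - problem.value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = candidate
        T *= cooling                      # geometric cooling replaces schedule[t]
    return current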
At fixed "temperature" T, the state occupation probability reaches the Boltzmann distribution

    p(x) = α e^(E(x)/kT)

If T is decreased slowly enough, the process always reaches the best state x∗, because

    e^(E(x∗)/kT) / e^(E(x)/kT) = e^((E(x∗) − E(x))/kT) ≫ 1

for small T.
Summary of methods covered:
– exhaustive exploration of the search space
– search with heuristics: A∗
– hill-climbing
– simulated annealing
– local beam search (briefly)
– genetic algorithms (briefly)