Who am I? Jordan T. Thayer – PowerPoint PPT Presentation

Who am I? – Jordan T. Thayer

  • B.S. CS, RHIT, 2006
  • PhD Artificial Intelligence, U. of New Hampshire, 2012
    • Advisor: Wheeler Ruml
    • Thesis: Heuristic Search Under Time and Quality Bounds
    • This stuff isn’t my thesis area, but it’s closely related
  • Since then
    • Logistics, Planning, Scheduling
    • Formal Verification
    • Static Analysis
  • Currently Sr. Software Engineer for SEP

Heuristic Search Can Be Costly

  • Checkers, the extreme case
    • Constant computation from 1989 to 2007, involving around 200 processors
  • VLSI & TSP, the hard case
    • Hours to days of compute time for moderate instances (2,500–3,000)
  • Scheduling
    • Minutes to days, depending on problem size and constrainedness
  • Mercifully, CPU time is not wall-clock time!

The Simplest Approach


Why you can’t do that

  • A problem of interest was a 115,000-city TSP
  • 115,000! potential solutions
  • At the outside, maybe we prune 75% of those
  • Still ~1.5 × 10^532,039 nodes / expansions
  • How much do 10^532,039 lambda calls cost?
    • First million are free, 20 cents per million after that
    • So, about $10^532,032
  • Current worldwide GDP for 100,000 years is ~$10^17
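As a sanity check on the magnitudes above, the size of 115,000! can be estimated with Python's `math.lgamma`; the helper name below is illustrative, and the 115,000 figure is the slide's:

```python
import math

def digits_of_factorial(n: int) -> int:
    """Decimal digits in n!, via log10(n!) = lgamma(n + 1) / ln(10)."""
    return int(math.lgamma(n + 1) / math.log(10)) + 1

# 115,000! has on the order of 532,000 digits, which is where the
# ~10^532,039 node count quoted above comes from.
print(digits_of_factorial(115_000))
```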

What you can do


Depth First Search from AI:AMA

def depth_first_tree_search(problem):
    """Search the deepest nodes in the search tree first.

    Search through the successors of a problem to find a goal.
    The argument frontier should be an empty queue.
    Repeats infinitely in case of loops. [Figure 3.7]"""
    frontier = [Node(problem.initial)]  # Stack
    while frontier:
        node = frontier.pop()
        if problem.goal_test(node.state):
            return node
        frontier.extend(node.expand(problem))
    return None


Depth First Search

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        frontier.extend(node.expand(problem))
    return solution
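The slide leaves `is_cycle` and `is_better` undefined. Plausible versions (assumptions, not the deck's code) walk the parent chain looking for a repeated state, and prune a node when the incumbent solution is already at least as cheap:

```python
def is_cycle(node, are_equal):
    """True if node's state already appears among its ancestors."""
    ancestor = node.parent
    while ancestor is not None:
        if are_equal(ancestor.state, node.state):
            return True
        ancestor = ancestor.parent
    return False

def is_better(solution, node):
    """True if the incumbent solution is at least as cheap as node,
    so node and its subtree cannot improve on it and may be skipped."""
    return solution is not None and solution.cost <= node.cost
```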


The Pancake Domain

Given an unordered stack of pancakes, order them using only a spatula and the ability to flip the stack.
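The flip move described above can be sketched in a few lines of Python; the names `flip` and `is_ordered` are illustrative, not from the slides:

```python
def flip(stack, k):
    """Reverse the top k pancakes (index 0 is the top of the stack)."""
    return stack[:k][::-1] + stack[k:]

def is_ordered(stack):
    """True when the pancakes are sorted smallest-on-top."""
    return all(stack[i] <= stack[i + 1] for i in range(len(stack) - 1))

print(flip([3, 1, 2], 2))  # [1, 3, 2]
```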


Step 1


Step 2


Step 3


Depth First Search for Pancakes

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        frontier.extend(node.expand(problem))
    return solution


It can (only) solve small instances


But how does it scale? (Real Bad)


Why?

Children Are Unsorted!

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        frontier.extend(node.expand(problem))
    return solution


Child Ordering is Critical


Depth First Search: Child Ordering

Children are sorted (Heuristics go here!)

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        children = node.expand(problem)
        children.sort()
        frontier.extend(children)
    return solution


Depth First Search: Child Ordering

Children are all generated at once

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        children = node.expand(problem)
        children.sort()
        frontier.extend(children)
    return solution


Making All Kids At Once Is Bad!


Depth First Search: Child Ordering

One child at a time

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        # child ordering is now baked into get_next_child
        next_child = node.get_next_child(problem)
        if next_child is not None:
            frontier.extend([next_child, node])
    return solution
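The "one child at a time" idea above can be sketched with a Python iterator: each node keeps a cursor over its heuristically ordered children, so siblings are handed out only as needed. All names here are illustrative:

```python
class LazyNode:
    """Node that hands out children one at a time, best heuristic value first."""

    def __init__(self, state, successors, h):
        self.state = state
        self._children = iter(sorted(successors(state), key=h))

    def get_next_child(self):
        """Return the next-best child state, or None once exhausted."""
        return next(self._children, None)

# Hypothetical domain: successors of n are n+1, n+2, n+3; h prefers even states.
node = LazyNode(0, lambda s: [s + 1, s + 2, s + 3], h=lambda c: c % 2)
print(node.get_next_child())  # 2: the only even successor sorts first
```

Note `sorted` still materializes all children up front; a domain that can enumerate actions directly in heuristic order avoids even that, which is the point of baking ordering into `get_next_child`.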


How’s It Perform Now?


Actually, the performance is complicated…


DFS is an Anytime Search


Travelling Salesman Problem


This is what makes heuristic search so cool: I can solve a new problem, but I don’t have to change my approach!

slide-51
SLIDE 51

TSP Anytime Performance


Distributed Depth First Search


DDFS Implementation



Distributed Depth First Search - Concept


Distributed Depth First Search – Low Budget


Distributed Depth First Search – Big Budget

  • Thanks for your attention
  • What questions do you have?

BACKUP SLIDES

  • Here be dragons, proofs, F#

Wait, What’s Optimal?

  • Informally, it’s the best solution to the problem
  • Formally
    • Let goal(n) be the goal test applied to some node n
    • Let g(n) be the cost of arriving at some node n
    • Let G be the (potentially) infinite graph induced by the tree search
    • Then Goals = { n ∈ G : goal(n) }
    • Then Optimal = { n ∈ Goals : ∀m ∈ Goals, g(n) ≤ g(m) }
  • Which is just “its cost is no more than that of any other goal”

Depth First Search: Convergence on Optimal

Pruning on incumbent solution

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        # child ordering is now baked into get_next_child
        next_child = node.get_next_child(problem)
        if next_child is not None:
            frontier.extend([next_child, node])
    return solution


Depth First Search: Convergence on Optimal

All nodes must improve

def depth_first_tree_search(problem):
    frontier = [Node(problem.initial)]  # Stack
    solution = None
    while frontier:
        node = frontier.pop()
        if is_cycle(node, problem.are_equal):
            continue
        if is_better(solution, node):
            continue
        if problem.goal_test(node.state):
            solution = node
        # child ordering is now baked into get_next_child
        next_child = node.get_next_child(problem)
        if next_child is not None:
            frontier.extend([next_child, node])
    return solution

Solutions must improve. We exhaust the space of all solutions.


DDFS Implementation


A More Exact Definition of Pancakes

State, Instance Definition


A More Exact Definition of Pancakes

Action Definition


A More Exact Definition of Pancakes

Goal Definition


A More Exact Definition of Pancakes

Heuristics
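The heuristic on this slide survives only as an image. As a stand-in, here is the standard "gap" heuristic from the pancake-sorting literature (an assumption about what the slide shows, not a transcription): count adjacent pancakes, plate included, whose sizes differ by more than one; each such gap needs at least one flip, so the count never overestimates.

```python
def gap_heuristic(stack):
    """Admissible pancake heuristic: number of adjacent pairs whose sizes
    differ by more than 1, treating the plate as pancake len(stack) + 1."""
    s = list(stack) + [len(stack) + 1]  # plate sits under the bottom pancake
    return sum(1 for i in range(len(stack)) if abs(s[i] - s[i + 1]) > 1)

print(gap_heuristic([3, 1, 2]))  # 2 gaps: between 3|1 and between 2|plate
```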


Domain Meets Search

Here’s how Pancakes fulfils that interface. Here’s us telling DFS to solve the abstracted problem.
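The code images for this slide didn't survive extraction, so here is a guess at the shape of that interface: the domain exposes an initial state, successors, and a goal test, and the search touches nothing else. Class and method names are assumptions, and the DFS here is a deliberately tiny depth-limited version.

```python
class PancakeProblem:
    """Pancake domain expressed against a generic search interface."""

    def __init__(self, initial):
        self.initial = tuple(initial)

    def goal_test(self, state):
        return all(state[i] <= state[i + 1] for i in range(len(state) - 1))

    def successors(self, state):
        # flip the top k pancakes, for every meaningful k
        return [state[:k][::-1] + state[k:] for k in range(2, len(state) + 1)]

def depth_first_search(problem, depth_limit=5):
    """Minimal depth-limited DFS: it only sees the problem interface."""
    frontier = [(problem.initial, 0)]
    while frontier:
        state, depth = frontier.pop()
        if problem.goal_test(state):
            return state
        if depth < depth_limit:
            frontier.extend((s, depth + 1) for s in problem.successors(state))
    return None

print(depth_first_search(PancakeProblem((2, 1, 3))))  # (1, 2, 3)
```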


What’s f, why is it special?

  • g(n) is the cost of reaching a node n
  • h(n) is a lower bound on the cost of an optimal solution starting at n
  • h*(n) is the true cost of an optimal solution starting at n
  • h*(n) = h(n) = 0 if goal(n)
  • f(n) = g(n) + h(n)
  • f*(n) = g(n) + h*(n) is the true cost of an optimal solution through n
  • For nodes leading to an optimal solution sol, f(n) ≤ f*(n) ≤ f*(sol) = g(sol), and additionally,
  • f*(n) ≥ f(n), so any node with f(n) ≥ g(sol) cannot improve on sol

TSP Problem Representation


TSP Heuristics

One for child ordering One for pruning
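The two heuristics on this slide are images, so here is one plausible pairing (assumptions, not the deck's code): order children by the cheapest next edge, and prune with the lower bound "every unvisited city must be entered by at least its cheapest incident edge".

```python
def order_children(current, unvisited, dist):
    """Child ordering: try the nearest unvisited city first."""
    return sorted(unvisited, key=lambda city: dist[current][city])

def lower_bound(tour_cost_so_far, unvisited, dist, n_cities):
    """Pruning bound: cost so far plus, for each unvisited city, the
    cheapest edge touching it. Entering edges are distinct per city,
    so this never overestimates the cost of completing the tour."""
    cheapest = (
        min(dist[c][o] for o in range(n_cities) if o != c) for c in unvisited
    )
    return tour_cost_so_far + sum(cheapest)

dist = [[0, 1, 5],
        [1, 0, 2],
        [5, 2, 0]]
print(order_children(0, [1, 2], dist))  # [1, 2]
print(lower_bound(0, [1, 2], dist, 3))  # 3 (1 for city 1, 2 for city 2)
```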


Search is domain agnostic!