Empirical Evaluation of Anytime Weighted AND/OR Best-First Search - - PowerPoint PPT Presentation
Empirical Evaluation of Anytime Weighted AND/OR Best-First Search for MAP. Joint work with Radu Marinescu and Rina Dechter.
Summary
Best-first search (A*) is the best combinatorial optimization algorithm, but it is not anytime: the first solution arrives only at termination. It also requires excessive memory, and is therefore rarely used for graphical models.
Depth-first branch-and-bound is less effective, but it is anytime and requires far less memory. It is the main search scheme for graphical models.
Recent work in path-finding heuristic search showed that weighted A* facilitates anytime schemes with accuracy guarantees:
Weighted A* finds an approximate solution faster [Pohl 1970]
Anytime A* finds an approximate solution and improves it over time [Hansen, Zhou 2007]
Other weighted anytime best-first schemes [Likhachev et al. 2003, Richter et al. 2010, van den Berg et al. 2011]
Summary: our work
We adapt weighted anytime best-first search to AND/OR search spaces. Specifically:
We investigate extensions of AOBF to anytime schemes and compare them with the most effective anytime scheme to date: Breadth-Rotating AOBB (BRAOBB).
We also investigate the theoretical properties of the search space explored by weighted best-first search.
Ongoing work: investigating the potential of Weighted Branch and Bound.
Background: Weighted A*
A* search
- admissible heuristic
- evaluation function:
f(n)=g(n)+h(n)
- guaranteed optimal
solution, cost C*
Weighted A* search
- non-admissible heuristic
- Evaluation function:
f(n)=g(n)+w∙h(n)
- Guaranteed w-optimal
solution, cost C ≤ w∙C*
[Figure: search from s to t; the search space explored by weighted A* is (hopefully) smaller than the search space explored by A*]
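The weighted evaluation function f(n) = g(n) + w·h(n) can be illustrated with a minimal (OR-space) weighted A* sketch. This is a toy illustration, not the AND/OR algorithm of the talk; `neighbors` and `h` are hypothetical names for a successor function and an admissible heuristic:

```python
import heapq

def weighted_a_star(start, goal, neighbors, h, w=1.0):
    """Weighted A* sketch: f(n) = g(n) + w*h(n).

    `neighbors(n)` yields (successor, edge_cost) pairs; `h` is an
    admissible heuristic. With w > 1 the returned cost C satisfies
    C <= w * C* (Pohl 1970).
    """
    g = {start: 0.0}
    parent = {start: None}
    open_list = [(w * h(start), start)]
    while open_list:
        _, n = heapq.heappop(open_list)
        if n == goal:
            path = []                      # reconstruct path to goal
            while n is not None:
                path.append(n)
                n = parent[n]
            return g[goal], path[::-1]
        for m, cost in neighbors(n):
            if g[n] + cost < g.get(m, float("inf")):
                g[m] = g[n] + cost
                parent[m] = n
                heapq.heappush(open_list, (g[m] + w * h(m), m))
    return float("inf"), None
```

With w = 1 this is plain A*; raising w makes the search greedier, typically shrinking the explored space at the cost of the w-bounded suboptimality.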
Background: AND/OR Best First (AOBF)
[Marinescu, Dechter 2009]
Worst-case time and space complexity:
AND/OR tree: O(N·k^h)
AND/OR graph: O(N·k^{w*})
Mini-bucket heuristics are known to be efficient for AND/OR search [Kask & Dechter 1999, Marinescu & Dechter 2009].
The i-bound parameter flexibly controls accuracy. Extreme case: Bucket Elimination produces the exact heuristic [Dechter 1999].
Background: BRAOBB [Otten, Dechter 2011]
OR Branch-and-Bound is anytime, but the AND/OR decomposition breaks the anytime behavior of a depth-first scheme:
the first anytime solution is delayed until the last subproblem starts being processed.
Breadth-Rotating AOBB:
Take turns processing subproblems. Solve each subproblem depth-first.
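The rotation idea can be sketched abstractly. Here `solve_step` is a hypothetical stand-in for advancing the depth-first search on one subproblem within a fixed node-expansion budget:

```python
from collections import deque

def breadth_rotating(subproblems, budget, solve_step):
    """Sketch of the rotation behind BRAOBB (hypothetical API).

    Each subproblem is processed depth-first for a fixed `budget`
    of expansions, then rotated to the back of the queue, so no
    subproblem starves and anytime solutions appear early.
    `solve_step(sp, budget)` advances the search on `sp` and
    returns True once sp is solved.
    """
    queue = deque(subproblems)
    order = []                       # order in which subproblems finish
    while queue:
        sp = queue.popleft()
        if solve_step(sp, budget):   # solved within this slice?
            order.append(sp)
        else:
            queue.append(sp)         # rotate: give the others a turn
    return order
```

A pure depth-first scheme would finish subproblems strictly in sequence; rotation lets cheap subproblems complete early, which is what restores anytime behavior.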
Our contribution
We adapted for the AND/OR search space 3 existing
anytime best first search schemes:
ARA* [Likhachev, Gordon, Thrun, NIPS'03]
ANA* [van den Berg, Shah, Huang, Goldberg, AAAI'11]
Anytime AO* [Bonet, Geffner, AAAI'12]
We proposed 3 original schemes:
a simple iterative anytime weighted best-first scheme: wAOBF
two hybrid schemes that interleave depth-first search and best-first search, using some ideas of ANA*
Anytime weighted AOBF (wAOBF)
Weighted AOBF:
Run ordinary AOBF with evaluation function f(n)=g(n)+w∙h(n)
Anytime weighted AOBF (wAOBF):
(similar to Restarting Weighted A* [Richter et al. 2010])
Start with some initial weight w; until w = 1 or out of time:
run Weighted AOBF with the current weight w from scratch to completion
output the solution found
decrease w by a fixed positive value δ
Theorem: The cost of each solution is bounded by current weight:
C ≤ w ∙ C*
[Pohl 1970]
However, a lot of computation is repeated at each iteration -> wasteful!
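The outer loop of wAOBF can be sketched as follows, where `solve_weighted` is a hypothetical stand-in for one complete Weighted AOBF run at a given weight:

```python
def anytime_weighted_search(solve_weighted, w_start=32.0, delta=0.5, w_min=1.0):
    """wAOBF outer-loop sketch: rerun the weighted search from
    scratch with a decreasing weight. Each reported cost C is
    bounded by C <= w * C* for the weight of that iteration
    (Pohl 1970). `solve_weighted(w)` is an assumed stand-in for
    a full Weighted AOBF run with weight w.
    """
    w = w_start
    solutions = []
    while True:
        cost = solve_weighted(w)      # run to completion, from scratch
        solutions.append((w, cost))   # report the anytime solution
        if w <= w_min:                # reached w = 1: solution optimal
            break
        w = max(w_min, w - delta)
    return solutions
```

Each iteration discards the previous search graph, which is exactly the wastefulness that wR-AOBF addresses.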
Anytime Repairing AOBF (wR-AOBF)
wAOBF repeats a lot of computations at each
iteration – wasteful!
Solution: reuse results of previous iterations
Keep track of the partially explored AND/OR graph.
After w decreases, update the evaluation function of all nodes whose f(n) changed with the new weight, bottom-up from the leaves to the root.
Identify a new best partial solution tree and continue the search.
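The bottom-up repair step can be illustrated on a toy tree. This is a simplification: the real wR-AOBF operates on the partially explored AND/OR graph and only revisits nodes whose f(n) changed; the dict-based node representation here is an assumption for illustration:

```python
def reweight_bottom_up(node, w):
    """wR-AOBF repair-step sketch (simplified to an OR tree):
    after the weight decreases, recompute node values bottom-up
    from the leaves, so previous search effort is reused instead
    of restarting from scratch. Each node is a dict with 'h',
    'edge_costs' and 'children' (toy representation).
    """
    if not node['children']:
        node['value'] = w * node['h']          # frontier node: weighted heuristic
    else:
        node['value'] = min(                   # OR node: best child branch
            c + reweight_bottom_up(child, w)
            for c, child in zip(node['edge_costs'], node['children'])
        )
    return node['value']
```

Note how the best branch can change as w shrinks, which is why a new best partial solution tree must be identified after the update.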
Anytime Nonparametric AOBF (wN-AOBF)
How to choose the weight? There is no principled answer; it is always done ad hoc. wN-AOBF does not require input parameters.
Main idea:
assign to each node n a function e(n) = (C − g(n)) / h(n), where C is the cost of the current best solution.
e(n) equals the maximum value of w such that f(n) ≤ C, so the algorithm improves the solution as greedily as possible, automatically adapting the implicit value of w as the path quality increases.
The original ANA* at each step expands the node in OPEN with maximal e(n). However, AOBF-style algorithms do not keep an explicit OPEN list, so the implementation is more involved.
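The node measure itself is a one-liner; that it is the maximal feasible weight follows from solving g(n) + w·h(n) ≤ C for w:

```python
def suboptimality_bound(C, g, h):
    """wN-AOBF node measure e(n) = (C - g(n)) / h(n): the largest
    weight w for which f(n) = g(n) + w*h(n) stays at or below the
    current best solution cost C. Expanding the node with maximal
    e(n) is the greediest step toward improving on C.
    """
    return (C - g) / h
```

For example, with current best C = 10, g(n) = 4 and h(n) = 3, any weight up to e(n) = 2 keeps f(n) ≤ C.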
Anytime Stochastic AOBF (p-AOBF)
Main idea:
allow search to also expand nodes that may not
be on the optimal path
Specifically:
at each step, with a fixed probability (1 − p), it expands a tip node that does not belong to the current best partial solution graph.
The parameter 0 ≤ p ≤ 1 allows a trade-off between exploration and exploitation of the search space.
p-AOBF does not provide optimality guarantees.
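The stochastic tip selection can be sketched as below. This is a toy sketch: how the two tip lists are maintained inside the AND/OR graph is abstracted away, and the argument names are assumptions:

```python
import random

def choose_tip(best_tips, other_tips, p, rng=random):
    """p-AOBF expansion sketch: with probability p expand a tip of
    the current best partial solution graph (exploitation); with
    probability 1 - p expand some other tip node (exploration).
    Falls back to the best-tip list when there is nothing else
    to explore.
    """
    if other_tips and (not best_tips or rng.random() >= p):
        return rng.choice(other_tips)   # explore off the best tree
    return rng.choice(best_tips)        # exploit the best tree
```

With p = 1 this degenerates to pure best-first expansion; with p = 0 the search always wanders off the current best partial solution graph.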
Experiments 1: anytime stochastic AOBF
Algorithms:
wAOBF p-AOBF (p=0.2) p-AOBF (p=0.8) BRAOBB
3 datasets:
16 pedigree networks 17 binary grids 20 protein instances
time limit - 1 hour memory limit - 2 Gb
MPE task – higher values are better! Simple weight schedule subtract(0.1): w_{i+1} = w_i − 0.1, w_1 = 32
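The subtract(0.1) schedule can be written as a small generator. This is a sketch of the schedule as described on the slide, not the actual experimental code:

```python
def subtract_schedule(w_start=32.0, step=0.1, w_min=1.0):
    """The subtract(0.1) weight schedule from the experiments:
    w_1 = w_start and w_{i+1} = w_i - step, clipped at w_min = 1,
    at which point the weighted bound becomes exact.
    """
    w = w_start
    while w > w_min:
        yield w
        w = max(w_min, w - step)
    yield w_min   # final, unweighted iteration
```

Alternative schedules (explored in Experiments 4) differ only in how quickly this sequence approaches 1.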
Experiments 2: anytime nonparametric AOBF
Algorithms:
wAOBF p-AOBF (p=0.5) wN-AOBF wR-AOBF BRAOBB
3 datasets:
16 pedigree networks 17 binary grids 20 protein instances
time limit - 1 hour memory limit - 2 Gb
Simple weight schedule subtract(0.1): w_{i+1} = w_i − 0.1, w_1 = 32
Experiments 3: impact of the weight
Algorithms:
wAOBF wR-AOBF
3 datasets:
16 pedigree networks 17 binary grids 20 protein instances
time limit - 1 hour memory limit - 2 Gb
Simple weight schedule subtract(0.1): w_{i+1} = w_i − 0.1, w_1 = 4
Experiments 4: alternative weight schedules
Algorithms:
wAOBF wR-AOBF
3 datasets of hardest instances:
Pedigrees Grids WCSP
time limit - 1 hour memory limit - 2 Gb
5 weight schedules
Experiments 5: large memory limit
Algorithms:
wAOBF wR-AOBF BRAOBB
3 datasets of hardest instances:
Pedigrees Grids WCSP
time limit - 2 hours memory limit - 120 Gb
MPE task – higher values are better! Two best weight schedules: piecewise(), revsqrt() (= sqrt())
Experiments 5: large memory limit: pedigrees
Experiments 5: large memory limit: WCSPs
Experiments 5: large memory limit: grids
ANA-RAGGED
Main idea:
Maintain the best partial solution tree (as usual for AOBF) plus a feasible partial solution tree, constructed based on e(n) = (C − g(n)) / h(n).
For each node:
q(n) – lower bound on the cost of the best solution below n
u(n) – upper bound; u(s) – upper bound on the overall optimal solution
Expand tip nodes of the feasible tree. If it has no tip nodes, expand a tip node of the best partial solution tree.
Each time a new node is expanded, the best partial solution tree is recalculated.
If a new leaf is found, recalculate u(n) for the best partial solution tree. If an improved solution is found, it is output and the feasible partial solution tree is recalculated.
ANA-RAGGED
Problem: after the feasible partial solution tree can no longer be expanded, and before a new improved solution is found, ANA-RAGGED performs best-first search, which can take a lot of time: u(n) is only updated when a new leaf is found, so until then no new solution can be found and the feasible partial solution tree cannot be recomputed.
The result is a long period between finding new solutions, i.e., poor anytime performance.
ANA-SMOOTH
Main idea:
Runs similarly to ANA-RAGGED, but after each new node expansion:
update the u(n) values -> possible to find a new solution
re-compute the feasible partial solution tree -> possible to obtain new tip nodes and continue the depth-first dive
Drawback:
more updates at every step
Benefit:
more chances to find a new solution -> smoother anytime behaviour
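The q(n)/u(n) bookkeeping that the two hybrids maintain can be illustrated on a toy OR tree. This is a simplification: the real schemes operate on AND/OR graphs, and the dict-based node format is an assumption for illustration:

```python
def update_bounds(node):
    """Bound bookkeeping sketch for the hybrid schemes. Toy tree
    nodes carry 'children', 'edge_costs', 'h' and, for explored
    leaves, 'leaf_cost'. q(n) is a lower bound from the admissible
    heuristic; u(n) is an upper bound from already-found leaves
    (infinite while no solution exists below n). ANA-SMOOTH
    recomputes these after every expansion; ANA-RAGGED only when
    a new leaf is found.
    """
    if not node['children']:
        node['q'] = node['h']
        node['u'] = node.get('leaf_cost', float('inf'))
    else:
        qs, us = [], []
        for c, ch in zip(node['edge_costs'], node['children']):
            q, u = update_bounds(ch)
            qs.append(c + q)
            us.append(c + u)
        node['q'] = min(qs)       # tightest lower bound over branches
        node['u'] = min(us)       # best solution found so far below n
    return node['q'], node['u']
```

At the root, u(s) is the cost of the incumbent solution and q(s) certifies how far it can be from optimal, which is exactly what drives the anytime reporting.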
Experiments 6: nonparametric hybrid schemes
Algorithms:
wAOBF wR-AOBF BRAOBB
3 datasets of hardest instances (no determinism):
Pedigrees Grids WCSP
time limit - 2 hour s memory limit - 120 Gb
Iterative weighted AOBB (iAOBB)
Main idea:
Very similar to wAOBF, but at each iteration run weighted AOBB instead of weighted AOBF.
Implementation issue:
this code solves a max-sum rather than a min-sum problem (unlike all the other code), so the weight should satisfy w < 1 rather than w > 1, creating a discrepancy in the weight schedules used.
Preliminary experiments: iAOBB
Algorithms:
iAOBB BRAOBB
Datasets:
Pedigrees Grids WCSP proteins
time limit - 1 hour memory limit - 2 Gb