Local search algorithms
Chapter 4, Sections 3–4
Outline
♦ Hill-climbing
♦ Simulated annealing
♦ Genetic algorithms (briefly)
♦ Local search in continuous spaces (very briefly)
Iterative improvement algorithms
In many optimization problems, the path is irrelevant; the goal state itself is the solution

Then state space = set of “complete” configurations;
find optimal configuration, e.g., TSP,
or find configuration satisfying constraints, e.g., timetable

In such cases, can use iterative improvement algorithms:
keep a single “current” state, try to improve it

Constant space, suitable for online as well as offline search
Example: Travelling Salesperson Problem
Start with any complete tour, perform pairwise exchanges

Variants of this approach get within 1% of optimal very quickly with thousands of cities
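A minimal sketch of pairwise exchange (commonly called 2-opt): repeatedly reverse a segment of the tour whenever doing so shortens it, until no exchange helps. The helper names and the small square-of-cities example are illustrative, not from the slides.

```python
import math

def tour_length(tour, dist):
    """Total length of the closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Pairwise (2-opt) exchange: reverse segments while the tour shortens."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Candidate tour with the segment tour[i:j] reversed
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Toy usage: four cities on a unit square; start from a crossing tour.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
best = two_opt([0, 2, 1, 3], dist)   # the optimal tour has length 4.0
```

Each exchange strictly shortens the tour, so the loop terminates at a local optimum of the exchange neighbourhood, not necessarily the global optimum.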
Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal

Move a queen to reduce the number of conflicts
[Board diagrams: successive states with h = 5, h = 2, and h = 0 conflicts]
Almost always solves n-queens problems almost instantaneously even for very large n, e.g., n = 1 million
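The queen-moving scheme above can be sketched as a conflict-minimizing search (one queen per column; repeatedly move a conflicted queen to its least-conflicted row). The function names and step limit are assumptions for illustration, not from the slides.

```python
import random

def col_conflicts(state, col, row):
    """Queens attacking square (col, row); state[c] = row of queen in column c."""
    return sum(1 for c in range(len(state))
               if c != col and (state[c] == row
                                or abs(state[c] - row) == abs(c - col)))

def min_conflicts(n, max_steps=100_000, seed=0):
    """Sketch: pick a conflicted column at random, move its queen to a
    least-conflicted row (ties broken randomly)."""
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n)
                      if col_conflicts(state, c, state[c]) > 0]
        if not conflicted:
            return state                 # no attacking pairs: a solution
        col = rng.choice(conflicted)
        costs = [col_conflicts(state, col, r) for r in range(n)]
        best = min(costs)
        state[col] = rng.choice([r for r in range(n) if costs[r] == best])
    return None                          # gave up (rare at these sizes)

solution = min_conflicts(8)
```

The random tie-breaking lets the search sidestep the plateaus where pure steepest descent would stall, which is why it scales to very large n.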
Hill-climbing (or gradient ascent/descent)
“Like climbing Everest in thick fog with amnesia”
function Hill-Climbing(problem) returns a state that is a local maximum
   inputs: problem, a problem
   local variables: current, a node
                    neighbor, a node

   current ← Make-Node(Initial-State[problem])
   loop do
      neighbor ← a highest-valued successor of current
      if Value[neighbor] ≤ Value[current] then return State[current]
      current ← neighbor
   end
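A runnable rendering of the pseudocode, with the problem abstracted into a successor function and a value function (these parameter names are an assumption of this sketch):

```python
def hill_climbing(initial, successors, value):
    """Steepest-ascent hill climbing: repeatedly move to the highest-valued
    neighbor; stop when no neighbor improves on the current state."""
    current = initial
    while True:
        neighbor = max(successors(current), key=value)
        if value(neighbor) <= value(current):
            return current               # local maximum
        current = neighbor

# Toy usage: climb the integer line toward the peak of f(x) = -(x - 3)^2.
peak = hill_climbing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
# peak == 3: both neighbors of 3 score strictly worse
```

Note the ≤ test: the search halts on plateaus as well as at strict peaks, which is exactly why hill climbing can get stuck short of the global maximum.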