Lecture 6 – Local Search
Dr. Muhammad Adnan Hashmi
7 April 2016
In many optimization problems, the path to the
goal is irrelevant; the goal state itself is the
solution
State space = set of configurations
Find a configuration satisfying your constraints,
e.g., n-queens
In such cases, we can use local search
algorithms
Keep a single "current" state and move to neighboring states, without keeping track of paths.
Use very limited memory
Find reasonable solutions in large state spaces.
Local search algorithms are useful for solving
optimization problems
Find the best possible state according to a
given objective function
Optimize the number of products purchased
by an E-Commerce user
State: Action taken by the user plus the
resulting page-view
No record is kept of the path costs between states
All that matters is whether the user is buying more products (or not).
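To make "configuration" and "objective function" concrete, here is a minimal Python sketch using the n-queens example from the previous slide (the representation and names are illustrative choices, not from the lecture): a state is one queen per column, and the objective to minimise is the number of attacking pairs.

import random

def random_state(n):
    # A configuration: board[i] is the row of the queen in column i.
    return tuple(random.randrange(n) for _ in range(n))

def conflicts(board):
    # Objective function to minimise: number of pairs of queens
    # that attack each other (same row or same diagonal).
    n = len(board)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if board[i] == board[j] or abs(board[i] - board[j]) == j - i:
                count += 1
    return count

state = random_state(8)
print(state, conflicts(state))   # a solution has 0 conflicts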
"Like climbing Everest in thick fog with amnesia"
A loop that continually moves in the direction of
increasing value, i.e., uphill
Terminates when it reaches a peak where no
neighbor has a higher value
Fog with Amnesia: Doesn’t look ahead beyond
the immediate neighbors of the current state.
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 through 4 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
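A minimal Python sketch of this loop (the function and parameter names are my own choice, not from the lecture); neighbors and value stand in for the problem-specific successor function and objective function:

import random

def hill_climb(start, neighbors, value):
    # Steepest-ascent hill climbing: repeatedly move to the best
    # neighbor until no neighbor is better than the current state.
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current          # peak reached: no uphill move left
        current = best

# Toy usage: maximise -(x - 7)^2 over the integers.
result = hill_climb(
    start=random.randint(-50, 50),
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 7) ** 2,
)
print(result)   # always climbs to 7 on this single-peak landscape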
Greedy Local Search: grabs a good neighbor
state without thinking about where to go next
However, greedy algorithms generally make good progress towards the solution
Unfortunately, hill-climbing
Can get stuck in local maxima
Can be stuck by ridges (a series of local
maxima that occur close together)
Can be stuck by plateaux (a flat area in the state-space landscape)
Shoulder: the flat area rises uphill later on
Flat local maximum: no uphill rise exists.
Stochastic Hill Climbing: Chooses at random
from amongst the uphill moves, based on a
probability distribution
First-choice Hill Climbing: Implements
stochastic HC by generating successors
randomly until one is generated that is better
than the current state
Random-restart Hill Climbing: Conducts a series of hill-climbing searches from randomly chosen initial states until a solution is found (sketched below).
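A minimal sketch of random-restart hill climbing under the same assumptions (illustrative names again); it simply repeats the steepest-ascent loop from random initial states and keeps the best local optimum found:

import math
import random

def hill_climb(start, neighbors, value):
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current
        current = best

def random_restart(random_state, neighbors, value, restarts=20):
    # Run hill climbing from several random initial states and
    # return the best local optimum seen across all runs.
    best = None
    for _ in range(restarts):
        candidate = hill_climb(random_state(), neighbors, value)
        if best is None or value(candidate) > value(best):
            best = candidate
    return best

# Toy usage: a bumpy 1-D landscape with several local maxima.
value = lambda x: math.sin(x / 3.0) * x
print(random_restart(
    random_state=lambda: random.randint(0, 100),
    neighbors=lambda x: [max(0, x - 1), min(100, x + 1)],
    value=value,
))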
Idea: escape local maxima by allowing
some "bad" moves but gradually
decrease their frequency
A worsening move is accepted with probability e^((Value[Next] - Value[Current]) / T), where T is the current temperature
If Value[Next] is close to Value[Current], the move is more likely to be accepted.
If the temperature is high, the exponent will be close to zero, and the probability will be close to 1.
As the temperature approaches zero, the exponent approaches -∞, and the probability approaches zero.
One can prove: If T decreases slowly
enough, then simulated annealing search
will find a global optimum with
probability approaching 1
Widely used in VLSI layout, airline
scheduling, etc.
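A minimal Python sketch of the acceptance rule described above (the cooling schedule and names are illustrative choices, not prescribed by the lecture): every improving move is accepted, and a worsening move is accepted with probability e^((Value[Next] - Value[Current]) / T), which shrinks as the temperature T cools.

import math
import random

def simulated_annealing(start, neighbors, value, t0=10.0, cooling=0.995, t_min=1e-3):
    # Accept every improving move; accept a worsening move with
    # probability exp(delta / T), which approaches 0 as T approaches 0.
    current, t = start, t0
    while t > t_min:
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling   # geometric cooling: one common, simple schedule
    return current

# Toy usage: the same bumpy 1-D landscape as in the previous sketch.
value = lambda x: math.sin(x / 3.0) * x
print(simulated_annealing(
    start=random.randint(0, 100),
    neighbors=lambda x: [max(0, x - 1), min(100, x + 1)],
    value=value,
))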