Free Search - a comparative analysis


Differential Evolution
Rainer Storn and Kenneth Price
Outline
• Introduction
• Differential Evolution
• Comparison with Other Minimization Methods
– Particle Swarm Optimization
– Evolutionary Algorithms
– Real-number Genetic Algorithm
• Choice of DE’s Control Variables
• Conclusion
Introduction
• Global optimization problems are important across the scientific community.
– Representation of candidate solutions
– Modeling the objective of the problem
– Dealing with the constraints
Introduction (cont.)
• When the cost function is nonlinear and non-differentiable, (stochastic) direct search approaches are the methods of choice.
– Simulated Annealing
– Nelder-Mead method (simplex method)
– Genetic Algorithms
– Evolutionary Strategies
– Particle Swarm Optimization
Introduction (cont.)
• Ingber and Price demand that a practical minimization technique should fulfill four requirements:
– Ability to handle non-differentiable, nonlinear and multimodal cost functions
– Parallelizability to cope with computation-intensive cost functions
– Ease of use (e.g., self-organizing, requiring little user input)
– Good convergence properties
Introduction (cont.)
• DE is a stochastic direct search method that needs no gradient information. (1)
• Stochastic perturbation of the population vectors can be done independently, so the method parallelizes well. (2)
• The operators are self-organizing and require very little user input. (3)
• Good convergence properties are backed by vast experimental results. (4)
Differential Evolution
• Optimization heuristic with similarities to
Evolutionary Algorithms. It has
– population,
– variation operators, and
– selection
• Is based on adding the weighted difference
between two solutions to a third
• Proposed by Storn and Price in 1995
Differential Evolution - Mutation
• Mutation
NP: population size, G: generation counter, r1, r2, r3: mutually distinct random indexes != i, F: amplification factor in the interval [0, 2]
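The mutation formula itself appears only as a figure in the original slide; reconstructed from the symbol definitions above, the classic DE/rand/1 mutation is usually written as

    v_{i,G+1} = x_{r1,G} + F * (x_{r2,G} - x_{r3,G}),    r1, r2, r3 mutually distinct and != i

i.e. each mutated vector is a randomly chosen population member plus the scaled difference of two other members.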
Differential Evolution - Crossover
• Crossover: generate the trial vector (binomial scheme)
randb(j): uniform random number in [0,1]
CR: crossover constant in [0,1], chosen by the user
rnbr(i): randomly chosen index in 1, …, D, which ensures that at least one component of the trial vector comes from the mutated vector
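The crossover rule is likewise missing from the transcript; with the symbols defined above, the binomial (bin) scheme is usually written component-wise as

    u_{j,i,G+1} = v_{j,i,G+1}   if randb(j) <= CR or j = rnbr(i)
    u_{j,i,G+1} = x_{j,i,G}     otherwise,    for j = 1, ..., D

so each parameter of the trial vector is taken from the mutated vector with probability CR, and the index rnbr(i) guarantees that at least one parameter is.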
Differential Evolution - Selection
• The trial vector u_{i,G+1} competes against the target vector x_{i,G} to decide whether or not it should become a member of generation G+1:
If f(u_{i,G+1}) <= f(x_{i,G}), then x_{i,G+1} = u_{i,G+1}
Else x_{i,G+1} = x_{i,G}
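Putting mutation, crossover and selection together, the following is a minimal Python sketch of one DE/rand/1/bin generation; the function name de_step, the objective f_obj and the NumPy-based style are illustrative assumptions, not code from the slides or the cited papers.

import numpy as np

def de_step(pop, costs, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin (illustrative sketch).
    pop: (NP, D) array of vectors, costs: (NP,) array of their cost values."""
    if rng is None:
        rng = np.random.default_rng()
    NP, D = pop.shape
    new_pop, new_costs = pop.copy(), costs.copy()
    for i in range(NP):
        # Mutation: three mutually distinct random indexes different from i.
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: take mutant components with probability CR,
        # forcing at least one component (j_rand) from the mutant.
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        trial = np.where(mask, mutant, pop[i])
        # Selection: the trial replaces the target only if its cost is no worse.
        c = f_obj(trial)
        if c <= costs[i]:
            new_pop[i], new_costs[i] = trial, c
    return new_pop, new_costs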
Variant DE
• DE/a/b/c
– a: specifies the vector to be mutated: rand / best / rand-to-best
– b: number of difference vectors taken for perturbation: 1 or 2
– c: denotes the crossover scheme: bin / exp
– Example: DE/best/2/bin (Price, 1996)
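For the DE/best/2/bin example, the mutated vector is usually formed from the current best member plus two scaled difference vectors; in the notation used earlier (a reconstruction, not shown on the slide):

    v_{i,G+1} = x_{best,G} + F * (x_{r1,G} - x_{r2,G}) + F * (x_{r3,G} - x_{r4,G})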
Comparison with Other Minimization Methods
• Evolutionary Algorithm (EA)
– Holland 1975; basically a real-valued GA
• Particle Swarm Optimization (PSO)
– Kennedy & Eberhart 1995
• Attractive and repulsive PSO
– Riget & Vesterstrøm 2002
• Differential Evolution (DE)
– Storn & Price 1995
Particle Swarm Optimization
• Optimization heuristic inspired by social
behavior of bird flocking or fish schooling
with similarities to Evolutionary Algorithms
• Is based on attracting candidate solutions toward the best solutions found so far
• Proposed by Eberhart and Kennedy – also in
1995
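The attraction mechanism is commonly written as the inertia-weight velocity and position update (a reconstruction, not part of the slides; w, f1 and f2 correspond to the settings listed later):

    v_i <- w * v_i + f1 * r1 * (p_i - x_i) + f2 * r2 * (g - x_i)
    x_i <- x_i + v_i

where p_i is particle i's best position so far, g is the best position found by the whole swarm, and r1, r2 are uniform random numbers in [0, 1].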
Evolutionary Algorithm
• The EA used in this study resembles a real-valued
GA
– Cauchy mutation with annealed variance
– Arithmetic crossover
– Tournament selection
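As a rough illustration of the three operators named above, here is a short Python sketch; the function names, the linear annealing schedule and the NumPy usage are my own assumptions rather than the exact operators of the study.

import numpy as np

rng = np.random.default_rng()

def arithmetic_crossover(p1, p2):
    # Child is a random convex combination of the two parent vectors.
    alpha = rng.random()
    return alpha * p1 + (1.0 - alpha) * p2

def cauchy_mutation(x, gen, max_gen, scale0=1.0):
    # Cauchy-distributed perturbation whose scale is annealed toward zero.
    scale = scale0 * (1.0 - gen / max_gen)
    return x + scale * rng.standard_cauchy(x.shape)

def tournament_select(pop, costs, k=2):
    # Pick k random individuals and return the one with the lowest cost
    # (pop and costs are NumPy arrays).
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmin(costs[idx])]]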
Experiments
• 34 benchmark problems in 2, 4, 30, and 100
dimensions
• 5,000,000 evaluations for 100 dimensional
problems - 500,000 for the rest
• Every experiment repeated 30 times
• Each algorithm was manually tuned – fixed
settings across problems
Benchmark Problems
• Test suite inspired by Yao and Liu
• Diverse set of problems
– Uni-modal (De Jong, …)
– Multi-modal (Rosenbrock, Rastrigin, …)
– Correlated (and uncorrelated) variables
– Noisy
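For reference, two of the named functions in their standard forms (definitions not given on the slide):

    Rosenbrock:  f(x) = sum_{i=1}^{D-1} [ 100 * (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]
    Rastrigin:   f(x) = 10*D + sum_{i=1}^{D} [ x_i^2 - 10 * cos(2*pi*x_i) ]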
Algorithmic Settings
• DE
– popsize = 100, CR = 0.9, F = 0.5
• PSO
– swarmsize = 25, vmax = 15%*, f1 = 1.8, f2 = 1.8, w = 0.6
• arPSO
– dlow = 0.000005 and dhigh = 0.25
• EA
– popsize = 100, pm = 0.9, pc = 0.7
– tournamentsize = 2, elitesize = 1
Experimental Result (1): problems of up to 30 dimensions
[results table not shown]
Experimental Result (2): 100-dimensional problems
[results table not shown]
Experimental Result (3): convergence on Rosenbrock, 30 dimensions
[convergence plot not shown]
Choice of DE’s Control Variables
• NP = 10 × number of parameters
• F ~ 0.8
• DE is much more sensitive to the choice of F than it is to the choice of CR.
• CR is more like a fine-tuning element
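Plugged into the earlier de_step sketch, these rules of thumb would look roughly as follows (D, the search bounds and the number of generations are hypothetical examples, and f_obj is the assumed cost function):

    D = 20                                        # number of parameters (example)
    NP = 10 * D                                   # NP = 10 x number of parameters
    rng = np.random.default_rng()
    pop = rng.uniform(-5.0, 5.0, size=(NP, D))    # assumed search bounds
    costs = np.array([f_obj(x) for x in pop])
    for g in range(1000):
        pop, costs = de_step(pop, costs, f_obj, F=0.8, CR=0.9, rng=rng)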
Conclusion
• DE is a very simple and straightforward
strategy.
• The performance of DE is outstanding in
comparison to the other algorithms tested.
• Linkage?
Reference
• Rainer Storn and Kenneth Price, “Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces”, Journal of Global Optimization, 11, 341–359, 1997.
• http://www.icsi.berkeley.edu/~storn/code.html
• Jakob Vesterstrøm and René Thomsen, “A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems”, Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 1980–1987, 2004.