Differential Evolution
Hossein Talebi
Hassan Nikoo
1
2
Variations to Basic Differential Evolution
Hybrid Differential Evolution Strategies
Gradient-Based Hybrid Differential Evolution
Evolutionary Algorithm-Based Hybrids
DE Reproduction Process Used as a Crossover Operator in a Simple GA
Rank-Based Crossover Operator for DE
Particle Swarm Optimization Hybrids
Population-Based Differential Evolution
Self-Adaptive Differential Evolution
3
Variations to Basic Differential Evolution
Differential Evolution for Discrete-Valued Problems
Angle Modulated Differential Evolution
Binary Differential Evolution
Constraint Handling Approaches
Multi-Objective Optimization
Dynamic Environments
Applications
4
Gradient-Based Hybrid Differential Evolution
Acceleration operator: improves convergence speed without decreasing diversity
Migration operator: increases population diversity
The acceleration operator uses gradient descent to adjust the best individual toward a better position
5
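A minimal sketch of the acceleration operator described above, assuming a numerical (finite-difference) gradient and a fixed step size; both choices are illustrative, not part of the original operator's definition:

```python
import numpy as np

def accelerate_best(f, best, step=0.01, h=1e-6):
    """One gradient-descent step applied to the best individual.

    f    : objective function to minimize
    best : current best individual (1-D float array)
    step : gradient-descent step size (illustrative value)
    h    : finite-difference width for the numerical gradient
    """
    grad = np.zeros_like(best)
    for j in range(len(best)):
        e = np.zeros_like(best)
        e[j] = h
        grad[j] = (f(best + e) - f(best - e)) / (2 * h)  # central difference
    candidate = best - step * grad
    # Keep the move only if it actually yields a better position
    return candidate if f(candidate) < f(best) else best
```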
Gradient-Based Hybrid Differential Evolution(Cont.)
Using gradient descent may result in getting stuck in a local optimum or premature convergence
Population diversity can be increased with the migration operator
This operator spawns new individuals from the best individual and replaces the current population with these new individuals
6
Gradient-Based Hybrid Differential Evolution(Cont.)
The migration operator is applied only when the diversity of the population becomes too small
7
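A minimal sketch of the migration operator as described above: when a simple diversity measure falls below a threshold, new individuals are spawned around the best individual and replace the current population. The diversity measure, threshold and spawn rule are all illustrative assumptions:

```python
import numpy as np

def migrate_if_needed(pop, best, diversity_threshold=1e-3, spread=0.1,
                      rng=np.random.default_rng()):
    """Replace the population with individuals spawned from the best one
    whenever diversity (mean distance to the centroid) becomes too small."""
    centroid = pop.mean(axis=0)
    diversity = np.mean(np.linalg.norm(pop - centroid, axis=1))
    if diversity >= diversity_threshold:
        return pop                                   # diversity still acceptable
    new_pop = best + spread * rng.uniform(-1.0, 1.0, size=pop.shape)
    new_pop[0] = best                                # keep the best individual itself
    return new_pop
```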
Gradient-Based Hybrid Differential Evolution(Cont.)
8
Using Stochastic Gradient Descent and DE for Neural Network Training
Stochastic Gradient Descent
9
Evolutionary Algorithm-Based Hybrids
Hrstka and Kucerová used the DE reproduction process as a crossover operator in a simple GA
Chang and Chang used standard mutation operators to increase DE population diversity by adding noise to the created trial vectors
10
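A small sketch of the second idea above (adding noise to trial vectors to raise diversity); zero-mean Gaussian noise is an illustrative choice, since the slide does not specify the mutation operator used:

```python
import numpy as np

def noisy_trial_vectors(trials, sigma=0.05, rng=np.random.default_rng()):
    """Add zero-mean Gaussian noise to the created DE trial vectors."""
    return trials + rng.normal(0.0, sigma, size=trials.shape)
```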
Evolutionary Algorithm-Based Hybrids(Cont.)
Sarimveis and Nikolakopoulos [758] use rank-based selection to decide which individuals take part in calculating the difference vectors
11
Rank-Based Mutation
12
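A sketch of rank-based selection of the individuals that form a difference vector; the linear rank weighting is an illustrative choice, not necessarily the one used in the cited work:

```python
import numpy as np

def rank_based_difference(pop, fitness, rng=np.random.default_rng()):
    """Select two distinct individuals with probability proportional to rank
    (better fitness -> higher rank) and return their difference vector."""
    order = np.argsort(fitness)                  # best (lowest fitness) first
    ranks = np.empty(len(pop))
    ranks[order] = np.arange(len(pop), 0, -1)    # best individual gets weight n
    probs = ranks / ranks.sum()
    i1, i2 = rng.choice(len(pop), size=2, replace=False, p=probs)
    return pop[i1] - pop[i2]
```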
Particle Swarm Optimization Hybrids
Hendtlass proposed that the DE reproduction process be applied to the particles in a PSO swarm at specified intervals
Kannan et al. apply DE to each particle for a number of iterations, and replace the particle with the best individual obtained
13
Particle Swarm Optimization Hybrids(Cont.)
Another approach is to change only the best particle, using an update based on σ
where σ is the general difference vector
14
Population-Based Differential Evolution
Ali and Törn proposed to use an auxiliary population
For each offspring created, if the fitness of the offspring is not better than that of the parent, instead of discarding the offspring it is considered for inclusion in the auxiliary population
15
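A minimal sketch of the auxiliary-population idea; how a rejected offspring enters a full auxiliary population (here: it overwrites a random slot) is an assumption of this sketch:

```python
import numpy as np

def de_selection_with_auxiliary(parent, offspring, f, aux_pop, max_aux=20,
                                rng=np.random.default_rng()):
    """Greedy DE selection, except that a rejected offspring is considered
    for the auxiliary population instead of being discarded."""
    if f(offspring) <= f(parent):
        return offspring, aux_pop                        # offspring replaces the parent
    if len(aux_pop) < max_aux:
        aux_pop.append(offspring)                        # room left: just add it
    else:
        aux_pop[rng.integers(len(aux_pop))] = offspring  # overwrite a random slot
    return parent, aux_pop
```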
DETVSF (DE with Time Varying Scale Factor)
During the later stages it is important to adjust the movements of trial solutions finely, so that they can explore the interior of the relatively small space in which the suspected global optimum lies
We can reduce the scale factor linearly with time from a (predetermined) maximum to a (predetermined) minimum value
16
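One linear schedule matching the description above (decreasing from a predetermined maximum to a predetermined minimum over the run); the default values are illustrative, and the exact DETVSF formula is given in the Das et al. reference:

```python
def time_varying_scale_factor(iteration, max_iterations, f_max=1.2, f_min=0.4):
    """Linearly decrease the DE scale factor from f_max to f_min over the run."""
    return (f_max - f_min) * (max_iterations - iteration) / max_iterations + f_min
```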
DETVSF (DE with Time Varying Scale Factor)(Cont.)
17
Parameter Control in DE
Dynamic Parameters
Self-Adaptive Parameters
18
Self-Adaptive Parameters
The probability of recombination is self-adapted
μ is the average of the successful probabilities
Abbass proposed to use this formula:
19
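The formula itself appears only as an image on the slide, so the snippet below is just an illustrative stand-in for the general idea, sampling the recombination probability around μ, the average of recently successful probabilities; it is not necessarily the exact update proposed by Abbass:

```python
import numpy as np

def sample_recombination_prob(successful_probs, sigma=0.1,
                              rng=np.random.default_rng()):
    """Draw a new recombination probability around the mean of the
    probabilities that recently produced successful offspring."""
    mu = np.mean(successful_probs) if len(successful_probs) > 0 else 0.5
    return float(np.clip(rng.normal(mu, sigma), 0.0, 1.0))
```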
Self-Adaptive Parameters(Cont.)
Omran et al. propose a self-adaptive DE strategy that makes use of this formula for the scale factor:
For the mutation operator:
20
Angle Modulated Differential Evolution
Pampará et al. proposed a DE algorithm to evolve solutions to binary-valued optimization problems, without having to change the operation of the original DE
They use a mapping between binary-valued and continuous-valued space to solve the binary problem
21
Angle Modulated Differential Evolution(Cont.)
The objective is to evolve, in the abstracted continuous space, a bitstring generating function that will be used in the original space to produce bit-vector solutions
a, b, c and d are the continuous-space problem parameters
22
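A sketch of the angle-modulation mapping, assuming the commonly used generating function g(x) = sin(2π(x − a) · b · cos(2π(x − a) · c)) + d with bit 1 wherever g is positive; the sampling positions are an assumption of this sketch, and the Pampará et al. paper gives the exact form:

```python
import numpy as np

def angle_modulated_bits(a, b, c, d, n_bits):
    """Map the four evolved continuous parameters (a, b, c, d) to a bit vector
    by sampling the generating function at n_bits evenly spaced points."""
    x = np.linspace(0.0, 1.0, n_bits)        # sampling positions (assumed)
    g = np.sin(2 * np.pi * (x - a) * b * np.cos(2 * np.pi * (x - a) * c)) + d
    return (g > 0).astype(int)               # positive value of g -> bit 1
```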
Angle Modulated Differential Evolution(Cont.)
Example coefficients: a = 0, b = 1, c = 1, d = 0
23
Binary Differential Evolution
binDE borrows concepts from the binary particle swarm optimizer, binPSO
binDE uses the floating-point DE individuals to determine a probability for each component
The corresponding bitstring solution is then calculated as follows:
24
Binary Differential Evolution
25
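The exact binDE formula is shown only on the slide image; as an illustration of the binPSO-style concept it borrows (component value mapped to a probability, then a bit is sampled), here is a hedged sketch:

```python
import numpy as np

def binde_bits(x, rng=np.random.default_rng()):
    """Turn a floating-point DE individual x into a bitstring: each component
    is squashed to a probability with a sigmoid, then a bit is sampled."""
    prob_one = 1.0 / (1.0 + np.exp(-x))                  # binPSO-style sigmoid mapping
    return (rng.random(x.shape) < prob_one).astype(int)
```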
Constraint Handling Approaches
Penalty methods
Adding a function to penalize solutions that violate constraints
Using F(x, t) = f(x, t) + λp(x, t), where λ is the penalty coefficient and p is a time-dependent penalty function
Converting the constrained problem to an unconstrained problem
26
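A minimal sketch of the penalty approach F(x, t) = f(x, t) + λp(x, t), assuming inequality constraints g_i(x) ≤ 0, a quadratic penalty for p, and a geometrically growing λ; the schedule and penalty form are illustrative choices:

```python
def penalized_objective(f, constraints, x, t, lam0=10.0, growth=1.05):
    """F(x, t) = f(x) + lambda(t) * p(x): quadratic penalty on the amount by
    which each inequality constraint g_i(x) <= 0 is violated, with a penalty
    coefficient that grows over time t."""
    lam = lam0 * growth ** t                                   # time-dependent coefficient
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)  # p(x)
    return f(x) + lam * violation
```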
Constraint Handling Approaches(Cont.)
We can convert a constrained problem to an unconstrained problem by defining the Lagrangian of the constrained problem
If the primal problem is convex, we can define the dual problem and solve the resulting min-max problem
27
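For reference, the standard construction behind the two points above, written for inequality constraints g_i(x) ≤ 0:

```latex
% Lagrangian of the constrained problem
L(x, \lambda) = f(x) + \sum_{i} \lambda_i \, g_i(x), \qquad \lambda_i \ge 0

% If the primal problem is convex (and a constraint qualification holds),
% the optimum can be found by solving the saddle-point (min-max) problem
\min_{x} \max_{\lambda \ge 0} L(x, \lambda) \;=\; \max_{\lambda \ge 0} \min_{x} L(x, \lambda)
```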
Constraint Handling Approaches(Cont.)
By changing the selection operator, infeasible solutions can be rejected, and a method for repairing infeasible solutions can be used
28
Constraint Handling Approaches(Cont.)
Boundary constraints are easily enforced
by clamping offspring to remain within the
given boundaries
29
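A one-line sketch of the clamping described above, assuming numpy arrays for the offspring and the bounds:

```python
import numpy as np

def clamp_to_bounds(offspring, lower, upper):
    """Force every component of the offspring back inside [lower, upper]."""
    return np.clip(offspring, lower, upper)
```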
Multi-Objective Optimization
Converting the problem into a single objective: weighted aggregation methods
30
Multi-Objective Optimization(Cont.)
This method defines an aggregate objective function as a weighted sum of the objectives
It is usually assumed that the weights are non-negative and sum to one
31
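The aggregate objective referred to above, with the usual assumption on the weights:

```latex
F(x) = \sum_{k=1}^{K} \omega_k f_k(x),
\qquad \omega_k \ge 0, \qquad \sum_{k=1}^{K} \omega_k = 1
```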
Multi-Objective Optimization(Cont.)
There is no guarantee that different solutions will be found
A niching strategy can be used to find multiple solutions
It is difficult to get the best weight values, ωk, since these are problem-dependent
32
Multi-Objective Optimization(Cont.)
Vector evaluated DE is a population-based method for MOO
If K objectives have to be optimized, K sub-populations are used, where each sub-population optimizes one of the objectives
Sub-populations are organized in a ring topology
The best individual of sub-population Ck migrates to population Ck+1 to produce the trial vectors for that population
33
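A compact sketch of the ring-topology migration described above; how the migrant is used inside the receiving sub-population (here it overwrites that population's worst individual before trial vectors are built) is an assumption of this sketch:

```python
import numpy as np

def ring_migration(subpops, fitnesses):
    """subpops[k] optimizes objective k; its best individual migrates to
    sub-population (k+1) mod K, where it replaces the worst individual."""
    K = len(subpops)
    bests = [subpops[k][np.argmin(fitnesses[k])].copy() for k in range(K)]
    for k in range(K):
        target = (k + 1) % K
        worst = np.argmax(fitnesses[target])
        subpops[target][worst] = bests[k]   # migrant takes part in trial-vector creation
    return subpops
```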
Dynamic Environments
Assumptions:
The number of peaks, n, to be found is known, and these peaks are evenly distributed through the search space
Changes are small and gradual
DynDE uses multiple populations, with each population maintaining one of the peaks
34
Dynamic Environments(Cont.)
At each iteration, the best individuals of each pair of sub-populations are compared. If these global best positions are too close to one another, the sub-population with the worse global best solution is re-initialized
35
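A sketch of that comparison step, assuming minimization and an exclusion radius r_excl as the (hypothetical) threshold for "too close"; re-initialization here is plain uniform sampling within the search bounds:

```python
import numpy as np

def reinit_overlapping(subpops, best_positions, best_fitnesses, r_excl,
                       lower, upper, rng=np.random.default_rng()):
    """If the global best positions of two sub-populations are closer than
    r_excl, re-initialize the one with the worse global best solution."""
    K = len(subpops)
    for i in range(K):
        for j in range(i + 1, K):
            if np.linalg.norm(best_positions[i] - best_positions[j]) < r_excl:
                loser = i if best_fitnesses[i] > best_fitnesses[j] else j
                subpops[loser] = rng.uniform(lower, upper, size=subpops[loser].shape)
    return subpops
```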
Dynamic Environments(Cont.)
The following diversity-increasing strategies can be used:
Re-initialize the sub-populations
Use quantum individuals: some of the individuals are re-initialized to random points inside a ball centered at the global best individual
Use Brownian individuals: some positions are initialized to random positions around the global best individual
Some individuals simply have noise added to them
36
Dynamic Environments(Cont.)
Initialization of Quantum Individuals
37
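A sketch of the quantum-individual re-initialization: points drawn uniformly inside a ball of radius r around the global best position (uniform-in-volume sampling is one common choice; the radius is a parameter):

```python
import numpy as np

def quantum_reinit(n, centre, radius, rng=np.random.default_rng()):
    """Sample n points uniformly inside a ball of the given radius,
    centred at the global best position `centre`."""
    dim = len(centre)
    directions = rng.normal(size=(n, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)  # unit directions
    radii = radius * rng.random(n) ** (1.0 / dim)                    # uniform in volume
    return centre + directions * radii[:, None]
```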
Applications
Mostly applied to optimize functions defined over continuous-valued landscapes
Clustering
Controllers
Filter design
Image analysis
Integer-Programming
Model selection
NN training
38
Differential Evolution
References
1. Computational Intelligence: An Introduction, 2nd edition, Andries Engelbrecht, Wiley
2. Differential Evolution - A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces, Rainer Storn, Kenneth Price, 1995
3. Particle Swarm Optimization and Differential Evolution Algorithms: Technical Analysis, Applications and Hybridization Perspectives, Swagatam Das, Ajith Abraham, and Amit Konar, Springer, 2008
4. Differential Evolution homepage: http://www.icsi.berkeley.edu/~storn/code.html
39
Differential Evolution
Thanks For Your Attention
Any Questions?