Particle Swarm Optimization (PSO)

Mansour Nejati
Introduction: Swarm Intelligence
 Study of collective behavior in decentralized, self-organized systems.
 Originated from the study of colonies, or swarms, of social organisms.
 Collective intelligence arises from interactions among individuals.
Introduction
Particle Swarm Optimization:
 Introduced by Kennedy & Eberhart in 1995.
 Inspired by the social behavior of bird flocks and schools of fish.
 A swarm-intelligence-based optimization method.
 Nondeterministic (stochastic).
 Population-based.
 Performance comparable to genetic algorithms.
Particle Swarm Optimization
Swarm: a set of particles S = {x_1, ..., x_N}.
 Particle: a potential solution, with
   a position x_i(t) in the search space, and
   a velocity v_i(t).
 Each particle maintains its individual best position p_i (pbest).
 The swarm maintains its global best position g (gbest).
PSO Algorithm
Basic algorithm of PSO:
1. Initialize the swarm with random positions in the solution space.
2. Evaluate the fitness of each particle.
3. Update individual and global bests.
4. Update the velocity and position of each particle.
5. Go to step 2 and repeat until a termination condition is met.
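The five steps above can be sketched in Python (a minimal illustrative implementation, not the presenter's code; the parameter values are typical defaults, not from the slides):

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch following the five basic steps (minimization)."""
    lo, hi = bounds
    # 1. Initialize positions randomly in the solution space, velocities at zero
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                      # individual best positions
    pbest_val = [f(p) for p in x]                  # 2. evaluate fitness
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # 4. velocity update: inertia + cognitive + social components
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]                 # position update
            fit = f(x[i])                          # 2. re-evaluate fitness
            if fit < pbest_val[i]:                 # 3. update individual best...
                pbest[i], pbest_val[i] = x[i][:], fit
                if fit < gbest_val:                # ...and global best
                    gbest, gbest_val = x[i][:], fit
    return gbest, gbest_val
```

For example, minimizing the sphere function f(x) = Σ x_d² with `pso(lambda p: sum(t*t for t in p), dim=2, bounds=(-5, 5))` drives the swarm toward the origin.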
PSO Algorithm (cont.)
Original velocity update equation:

v_i(t+1) = v_i(t) + c1 r1 (p_i - x_i(t)) + c2 r2 (g - x_i(t))

with r1, r2 ~ U(0, 1) and c1, c2: acceleration constants. The first term is the inertia, the second the cognitive component, the third the social component.
PSO Algorithm (cont.)
The same velocity update (inertia + cognitive + social components), together with the position update:

x_i(t+1) = x_i(t) + v_i(t+1)
PSO Algorithm - Parameters
Acceleration constants c1, c2:
 Small values limit the movement of the particles.
 Large values give a tendency to explode toward infinity.
 In general, c1 = c2 = 2 is used.
Maximum velocity Vmax:
 Velocity is a stochastic variable, so trajectories can become uncontrolled; each velocity component is therefore clamped to [-Vmax, Vmax].
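Velocity clamping with Vmax can be sketched as follows (`clamp_velocity` is a hypothetical helper name, not from the slides):

```python
def clamp_velocity(v, vmax):
    """Clamp each velocity component to [-vmax, vmax] to keep trajectories bounded."""
    return [max(-vmax, min(vmax, vd)) for vd in v]
```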
Simple 1D Example
Initialize swarm and evaluate fitness (t=0).
[Figure: particles on a 1D fitness curve over x in [0, 5]; the current gbest is marked.]
Simple 1D Example
Update velocity and position (t=1).
[Figure: particles after the first velocity/position update; gbest marked.]
Simple 1D Example
Evaluate fitness; update personal and global best (t=2).
[Figure: re-evaluated particles with the updated gbest marked.]
Simple 1D Example
Update velocity and position (t=2).
[Figure: one particle's velocity components (inertia, personal, social) and their total; gbest marked.]
Rate of Convergence Improvement
Inertia weight w:

v_i(t+1) = w v_i(t) + c1 r1 (p_i - x_i(t)) + c2 r2 (g - x_i(t))

 Scales the previous velocity.
 Controls search behavior:
   high values -> exploration,
   low values -> exploitation.
PSO with Inertia Weight
w can be decreased over time:
 linearly, e.g. from 0.9 to 0.4;
 nonlinearly.
Main disadvantage: once the inertia weight has been decreased, the swarm loses its ability to search new areas (it cannot recover its exploration mode).
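A linearly decreasing inertia schedule, e.g. from 0.9 down to 0.4 over the run, might look like this (illustrative sketch; the helper name is an assumption):

```python
def inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decrease the inertia weight from w_start to w_end over t_max iterations."""
    return w_start - (w_start - w_end) * t / t_max
```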
Rate of Convergence Improvement
Constriction factor chi (canonical PSO):

v_i(t+1) = chi [ v_i(t) + c1 r1 (p_i - x_i(t)) + c2 r2 (g - x_i(t)) ]
chi = 2 / | 2 - phi - sqrt(phi^2 - 4 phi) |,  phi = c1 + c2 > 4

Typically phi = 4.1, giving chi ≈ 0.729.
 Can converge without using Vmax (velocity clamping).
 Improves the convergence by damping the oscillations.
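The constriction coefficient can be computed directly from the acceleration constants (a sketch of the Clerc-Kennedy formula; `constriction` is a hypothetical helper name):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi, requiring phi = c1 + c2 > 4."""
    phi = c1 + c2
    assert phi > 4, "constriction requires phi > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the typical phi = 4.1 (c1 = c2 = 2.05), this yields chi ≈ 0.729.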
Swarm Topologies
Two general types of neighborhoods:
 Global best (gbest): fully connected network.
 Local best (lbest): neighbors defined by a topology, e.g. ring, wheel, or von Neumann.
[Figure: the gbest (fully connected) topology and the lbest ring, wheel, and von Neumann topologies.]
Lbest vs. Gbest
 Gbest converges fast but may be trapped in a local optimum.
 Lbest is slower in convergence but has more chances to find an optimal solution.
 The most efficient neighborhood structure is, in general, problem-dependent.
 Fully Informed PSO (FIPS): each individual is influenced by the successes of all of its neighbors.
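In the lbest ring topology, for instance, each particle follows the best personal best within its own neighborhood instead of the swarm-wide gbest. A sketch (hypothetical helper; `k` is the neighborhood radius on each side):

```python
def ring_lbest(pbest_val, i, k=1):
    """Index of the best (lowest) personal-best value among particle i's
    ring neighborhood: i and its k neighbors on each side, wrapping around."""
    n = len(pbest_val)
    hood = [(i + j) % n for j in range(-k, k + 1)]
    return min(hood, key=lambda j: pbest_val[j])
```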
Diversity Improvement
 Based on the lbest model.
 Usually slows down the convergence rate.
Spatial Neighborhoods:
 Partition particles based on spatial location:
   calculate the largest distance d_max between any two particles;
   select as neighbors the particles whose distance ratio ||x_a - x_b|| / d_max is below a selection threshold.
 The selection threshold can be varied over time: start with a small ratio (lbest-like) and gradually increase it.
Diversity Improvement
Neighborhood Topologies:
 In the lbest model, all particles can still exchange information indirectly (particle i influences i+1, which influences i+2, and so on).
 The average path length depends on the topology.
 Topology significantly affects the performance (experimentally).
 Randomly changing some connections can change the average path length.
Diversity Improvement
Subpopulations:
 Previously used in GAs.
 The original swarm is partitioned into subpopulations.
 PSO is applied to each subpopulation.
 An interaction scheme is used for information sharing between subpopulations.
 Each subpopulation can search a smaller region of the search space.
Discrete PSO
Binary PSO:
 Introduced by Kennedy and Eberhart.
 Each individual (particle) has to take a binary decision, depending on its previous state and a predisposition.
 The predisposition is derived from individual and group performance via the usual velocity update.
Binary PSO (cont.)
 v_id determines a threshold in the probability function and should therefore be bounded in the range [0.0, 1.0]; the sigmoid function is used:

s(v_id) = 1 / (1 + e^(-v_id))

 State of the dth position in the string at time t:

x_id(t) = 1 if rho < s(v_id(t)), else 0

where rho is a random number with a uniform distribution.
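The binary decision rule above can be sketched as follows (illustrative helper; `rho` can be passed in explicitly for determinism):

```python
import math, random

def binary_step(v_id, rho=None):
    """Binary PSO state update: the bit is 1 with probability
    s(v_id) = 1 / (1 + e^(-v_id))."""
    s = 1.0 / (1.0 + math.exp(-v_id))
    if rho is None:
        rho = random.random()   # uniformly distributed random threshold
    return 1 if rho < s else 0
```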
PSO Variants
 Hybrid PSO: incorporates the capabilities of other evolutionary computation techniques.
 Adaptive PSO: adaptation of the PSO parameters for a better performance.
 PSO in complex environments: multiobjective or constrained optimization problems, or tracking dynamic systems.
 Other variants: variations of the original formulation to improve its performance.
Hybrid PSO
GA-PSO:
 Combines the advantages of swarm intelligence and a natural selection mechanism.
 Can jump from one area to another through the selection mechanism, accelerating the convergence speed.
 Adds the capability of "breeding" (crossover between particles).
 Replaces agent positions with low fitness values by those with high fitness, according to a selection rate.
Hybrid PSO
EPSO (Evolutionary PSO):
 Incorporates a selection procedure.
 Self-adapts its parameters.
The particle movement is defined with mutated weights:

v_i(t+1) = w*_i0 v_i(t) + w*_i1 (p_i - x_i(t)) + w*_i2 (g* - x_i(t))
Hybrid PSO : EPSO
Mutation of the weights and of the global best:

w*_ik = w_ik + tau N(0, 1),  g* = g + tau' N(0, 1)

 The learning parameters tau, tau' can be either fixed or dynamically changing as strategic parameters.
Survival Selection:
 Stochastic tournament.
Hybrid PSO : DEPSO
 Hybrid of Differential Evolution and PSO.
 A DE operator is applied to the particles' best positions to eliminate particles that have fallen into local minima.
 Alternation:
   the original PSO algorithm at the odd iterations;
   the DE operator at the even iterations.
Hybrid PSO : DEPSO
DE mutation on the particles' best positions: a trial point is built from the personal best, and the DE mutation is applied dimension by dimension. For each dth dimension the mutation is performed with some crossover probability, and always for d = k, where k is a random integer within [1, n]; this ensures mutation in at least one dimension.
Dynamic Tracking in PSO
 The classical PSO is very effective in solving static optimization problems, but is not as efficient when applied to a dynamic system in which the optimal value may change repeatedly.
 An adaptive approach has been introduced for this problem:
   detection of environmental changes: changed-gbest-value and fixed-gbest-values tests;
   response: re-randomizing a certain number of particles.
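The changed-gbest-value test and the re-randomization response might be sketched as follows (illustrative; the function name, fraction, and bounds are assumptions, not from the source):

```python
import random

def respond_to_change(f, particles, gbest, gbest_val, frac=0.1, bounds=(-5.0, 5.0)):
    """If re-evaluating gbest yields a different value, the environment has
    changed: re-randomize a fraction of the particles. Returns True on change."""
    if f(gbest) != gbest_val:                       # changed-gbest-value test
        k = max(1, int(frac * len(particles)))
        for i in random.sample(range(len(particles)), k):
            particles[i] = [random.uniform(*bounds) for _ in particles[i]]
        return True
    return False
```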
Applications
 Ease of implementation, weak constraints on the continuity of the objective function and the shape of the search space, and the ability to adapt to dynamic environments have led PSO to be applied in more and more fields.
Some PSO applications:
 Electronics and electromagnetics
 Signal, image, and video processing
 Neural networks
 Communication networks
 ...
Thanks for your attention