Quantifying Chaos

1. Introduction
2. Time Series of Dynamical Variables
3. Lyapunov Exponents
4. Universal Scaling of the Lyapunov Exponents
5. Invariant Measures
6. Kolmogorov-Sinai Entropy
7. Fractal Dimensions
8. Correlation Dimension & a Computational History
9. Comments & Conclusions
1. Introduction
Why quantify chaos?
• To distinguish chaos from noise and other complexities.
• To determine active degrees of freedom.
• To discover universality classes.
• To relate chaotic parameters to physical quantities.
2. Time Series of Dynamical Variables
• (Discrete) time series data:
– x(t0), x(t1), …, x(tn)
– Time-sampled (stroboscopic) measurements
– Poincaré section values
• Real measurements & calculations are always discrete.
• Time series of 1 variable of an n-D system:
– If properly chosen, essential features of the system can be reconstructed:
• Bifurcations
• Chaos onset
– Choice of sampling interval is crucial if noise is present (see Chap 10).
• Quantification of chaos:
– Dynamical:
• Lyapunov exponents
• Kolmogorov-Sinai (K-S) Entropy
– Geometrical:
• Fractal dimension
• Correlation dimension
• Only 1-D dissipative systems are discussed in this chapter.
9.3. Lyapunov Exponents
Time series: x_i = x(t_i), with t_i = t_0 + iτ.
Given i & j, let d_k = |x_{j+k} − x_{i+k}|.
The system is chaotic if d_k ≈ d_0 e^{λkτ}, with Lyapunov exponent λ > 0, where
λ = [1/(nτ)] ln(d_n/d_0)
Technical Details:
• Check the exponential dependence explicitly.
• λ is x dependent → average: λ = Σ_i λ(x_i)/N.
• n can't be too large for bounded systems (d_n saturates).
• λ = 0 for a periodic system.
• i & j shouldn't be too close.
• Bit (base-2) version: d_n = d_0 2^{nλ}.
Logistic Map
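The two-trajectory definition above can be sketched numerically (my own illustration; the parameter choices are arbitrary): iterate the logistic map at A = 4 from a reference point and a nearby second trajectory separated by d_0, estimate λ = (1/n) ln(d_n/d_0) with τ = 1, and average the x-dependent result over many starting points, as the technical details suggest. The result approaches ln 2 ≈ 0.693, the value derived for A = 4 in §9.5.

```python
import math
import random

def logistic(x, A=4.0):
    return A * x * (1.0 - x)

def lyapunov_estimate(n_steps=12, d0=1e-12, trials=2000, seed=1):
    """Average lambda = (1/n) ln(d_n/d_0) over many starting points (tau = 1)."""
    rng = random.Random(seed)   # arbitrary seed, for reproducibility
    total = 0.0
    for _ in range(trials):
        x = rng.uniform(0.05, 0.95)
        for _ in range(50):                 # discard a transient
            x = logistic(x)
        y = x - d0 if x > 0.5 else x + d0   # nearby second trajectory, kept inside (0, 1)
        for _ in range(n_steps):
            x, y = logistic(x), logistic(y)
        total += math.log(abs(y - x) / d0) / n_steps
    return total / trials

lam = lyapunov_estimate()
print(f"lambda estimate: {lam:.3f}   (ln 2 = {math.log(2):.3f})")
```

Keeping n_steps small ensures d_n stays in the exponential regime before the separation saturates at the attractor size, per the technical details above.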
9.4. Universal Scaling of the Lyapunov Exponents
Period-doubling route to chaos:
Logistic map: A = 3.5699…
LyapunovExponents.nb
λ < 0 in periodic regime.
λ = 0 at bifurcation point.
(period-doubling)
λ > 0 in chaotic regime.
λ tends to increase with A
→ More chaotic as A increases.
Huberman & Rudnick: λ(A > A∞) is universal for period-doubling systems:
λ(A) = λ_0 (A − A∞)^γ,  A > A∞
with γ = ln 2/ln δ ≈ 0.445 and δ = 4.669… = Feigenbaum δ.
Analogy with critical phenomena: λ ~ order parameter, A − A∞ ~ T − T_C.
For the logistic map, λ_0 ≈ 0.9.
Derivation of the Universal Law for λ
Logistic map
• Chaotic bands merge via "period-undoubling" for A > A∞.
• The ratio of convergence of the band-merging points tends to the Feigenbaum δ.
• Let 2^m bands merge into 2^{m−1} bands at A = A_m (with A_m → A∞⁺ as m → ∞).
• Reminder: below A∞, a 2^m-cycle bifurcates into a 2^{m+1}-cycle at A = A_m.
Divergence of trajectories within one band: d_n = d_0 e^{nΛ},
where Λ = effective Lyapunov exponent obtained by treating 2^m iterations as one step, i.e., the Lyapunov exponent of f^{2^m}.
Divergence of trajectories among the 2^m bands: d_{2^m} = d_0 e^{2^m λ} = d_0 e^{Λ},
where λ = Lyapunov exponent of f.
If Λ is the same for all bands, then λ(A_m) = Λ/2^m.
Ex. 2.4-1: Assuming δ_n = δ for all n gives
A∞ − A_n = Σ_{k=n}^{∞} (A_{k+1} − A_k) = (A_2 − A_1) δ² (δ − 1)^{−1} δ^{−n}
Similarly, for the band-merging points: A_m − A∞ = ΔA δ^{−m}, with ΔA a constant.
→ m = ln[ΔA/(A_m − A∞)] / ln δ = log_δ[ΔA/(A_m − A∞)]
Hence
λ(A_m) = Λ/2^m = Λ e^{−m ln 2} = Λ exp{−(ln 2/ln δ) ln[ΔA/(A_m − A∞)]} = Λ [(A_m − A∞)/ΔA]^{ln 2/ln δ}
i.e., λ(A) = λ_0 (A − A∞)^{ln 2/ln δ}
with λ_0 = Λ (ΔA)^{−ln 2/ln δ}
9.5. Invariant Measures
For systems of large DoFs, geometric analysis becomes unwieldy.
Alternative approach: Statistical methods.
Basic quantity of interest: Probability of trajectory to pass through
given region of state space.
• Definition of Probability
• Invariant Measures
• Ergodic Behavior
Definition of Probability
Consider an experiment with N possible results (outcomes).
After M runs (trials) of the experiment, let there be mi occurrences of the ith outcome.
The probability pi of the ith outcome is defined as
p_i = lim_{M→∞} (m_i/M)
where Σ_{i=1}^{N} m_i = M  →  Σ_{i=1}^{N} p_i = 1  (Normalization)
If the outcomes are described by a set of continuous parameters x, then N = ∞.
Since the m_i are finite, M = ∞ gives p_i = 0 ∀ i.
Remedy:
Divide the range of x into cells/bins.
m_i = number of outcomes belonging to the ith cell.
Invariant Measures
For an attractor in state space:
1. Divide the attractor into cells.
2. In the 1-D case: p_i ≈ m_i/M.
The set {p_i} is a natural probability measure if it is independent of (almost all) ICs.
p_i = μ(cell i) = ∫_{cell i} p(x) dx = probability of the trajectory visiting cell i.
p(x) dx = probability of the trajectory visiting the interval [x, x+dx] or [x − dx/2, x + dx/2].
Let x_{n+1} = f(x_n); then μ is an invariant probability measure if μ(x) = μ(f(x)).
Treating M as a total mass → p(x) = ρ(x).
Example: Logistic Map, A = 4
From § 4.8: For A = 4, logistic map is equivalent to Bernoulli shift.
xn1  4 xn 1  xn 
1
1
0
0
→ n1  2n mod 1
1   dx p  x    d P  
P    1

1

cos
1
with
d
  dx
P  
dx
0
1
1  2x 
→
→
p  x 
1


x
1
1  cos  
2
p  x 
d
P  
dx
2
1  1  2 x 
2

1

x 1  x 
Numerical check (figure): histogram of 1024 iterations sorted into 20 bins.
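This numerical check can be sketched as follows (my own illustration; the orbit length, seed, and bin count are arbitrary choices): histogram a long orbit of x_{n+1} = 4x_n(1 − x_n) and compare it with p(x) = 1/(π√(x(1 − x))).

```python
import math

def orbit(x0, n, A=4.0):
    """Generate n iterates of the logistic map."""
    xs, x = [], x0
    for _ in range(n):
        x = A * x * (1.0 - x)
        xs.append(x)
    return xs

xs = orbit(x0=0.3141592, n=100_000)   # arbitrary irrational-looking seed

# empirical histogram over 20 bins, normalized to a density
nbins = 20
counts = [0] * nbins
for x in xs:
    counts[min(int(x * nbins), nbins - 1)] += 1
density = [c * nbins / len(xs) for c in counts]

frac_below_half = sum(1 for x in xs if x < 0.5) / len(xs)
p_mid = 1.0 / (math.pi * math.sqrt(0.25))   # analytic p(x) at x = 1/2
print(f"orbit fraction below 1/2: {frac_below_half:.3f} (analytic: 0.500)")
print(f"density near x = 1/2: {density[nbins // 2]:.3f} (analytic: {p_mid:.3f})")
```

By the symmetry of p(x), half of the invariant measure lies below x = ½, and the empirical bin heights track the analytic density, with the characteristic pile-up near x = 0 and x = 1.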
Ergodic Behavior
Time average of B(x):
⟨B⟩_t = (1/T) ∫_{t₀}^{t₀+T} dt B(x(t)) ≈ (1/N) Σ_{i=1}^{N} B(x(t_i)),  t_i = t₀ + iτ,  T = Nτ
⟨B⟩_t should be independent of t₀ as T → ∞.
Ensemble average of B(x):
⟨B⟩_p = ∫ dx p(x) B(x) ≈ Σ_{i=1}^{N} p_i B(x_i)
Comments:
• ⟨B⟩_p is meaningful only for invariant probability measures.
• p(x) may not exist, e.g., on strange attractors.
The system is ergodic if ⟨B⟩_t = ⟨B⟩_p.
Example: Logistic Map, A = 4
Local values of the Lyapunov exponent:
λ(x) = ln|f′(x)| = ln|4(1 − 2x)|
Ensemble average value of the Lyapunov exponent:
⟨λ⟩ = ∫ dx p(x) ln|f′(x)| = ∫₀¹ dx [1/(π√(x(1 − x)))] ln|4(1 − 2x)|
With x = ½(1 − cos πθ), i.e., 1 − 2x = cos πθ:
⟨λ⟩ = ∫₀¹ dθ ln|4 cos πθ| = ln 2   (same as the Bernoulli shift)
Same as that calculated by the time average (c.f. §5.4):
λ = (1/N) Σ_{i=1}^{N} ln|f′(x_i)| = ln 2
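A short sketch (my own addition; orbit length and grid size are arbitrary) comparing the two averages for B(x) = ln|f′(x)|: the time average along a single orbit, and the ensemble average computed via the θ-substitution above. Both should come out near ln 2 ≈ 0.693.

```python
import math

A = 4.0
f_prime = lambda x: A * (1.0 - 2.0 * x)   # derivative of f(x) = 4x(1-x)

# time average of ln|f'(x)| along a single orbit
x, N, acc = 0.2718281, 100_000, 0.0       # arbitrary seed
for _ in range(N):
    acc += math.log(abs(f_prime(x)))
    x = A * x * (1.0 - x)
lam_time = acc / N

# ensemble average: with x = (1 - cos(pi*theta))/2 the integral
# of p(x) ln|f'(x)| becomes the theta-average of ln|4 cos(pi*theta)|
M = 200_000
lam_ens = sum(math.log(abs(A * math.cos(math.pi * (j + 0.5) / M)))
              for j in range(M)) / M      # midpoint rule avoids the singularity

print(f"time average:     {lam_time:.4f}")
print(f"ensemble average: {lam_ens:.4f}")
print(f"ln 2:             {math.log(2):.4f}")
```

The agreement of the two averages is the ergodicity statement ⟨B⟩_t = ⟨B⟩_p for this particular B(x).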
9.6. Kolmogorov-Sinai Entropy
Brief Review of Entropy:
• Microcanonical ensemble (closed, isolated system in thermal equilibrium):
S = k ln N = −k ln p,  p = 1/N
• Canonical ensemble (small closed subsystem):
S = −k Σ_i p_i ln p_i,  Σ_i p_i = 1
• 2nd law: ΔS ≥ 0 for spontaneous processes in a closed, isolated system.
→ S is maximum at thermodynamic equilibrium.
• Issue: no natural way to count states in classical mechanics.
→ S is defined only up to a constant (only ΔS is physically meaningful).
Quantum mechanics: phase-space volume of each state = h^n, n = DoF.
Entropy for State Space Dynamics
1. Divide state space into cells (e.g., hypercubes of volume L^{DoF}).
2. For dissipative systems, replace the state space with the attractor.
3. Start the evolution for an ensemble of ICs (usually all located in 1 cell).
4. After n time steps, count the number of states in each cell.
M r  k
pr ln pr
k = Boltzmann constant

r
M
r
1
ln p
p
lim p ln p  lim 1  lim
  lim p  0

2
p0
p0 p
p0
p0  p
Sn   k
Note:
Mr
M
ln
• Non-chaotic motion: the number of cells visited (& hence S) is independent of t & M on the macroscopic time-scale.
• Chaotic motion: the number of cells visited (& hence S) increases with t but is independent of M.
• Random motion: the number of cells visited (& hence S) increases with both t & M.
Only ΔS is physically significant.
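The chaotic case can be probed numerically (my own sketch; the cell size, ensemble size, and the choice of the A = 4 logistic map are arbitrary): evolve an ensemble of ICs that initially fill one cell and count the occupied cells over time.

```python
def occupied_cells(M, n_steps, n_cells=1000, x0=0.123, width=1e-3):
    """Evolve M ICs, initially filling one cell of [0, 1], under the
    logistic map at A = 4, and count the cells occupied afterwards."""
    pts = [x0 + width * i / M for i in range(M)]
    for _ in range(n_steps):
        pts = [4.0 * x * (1.0 - x) for x in pts]
    return len({min(int(x * n_cells), n_cells - 1) for x in pts})

# chaotic motion: the number of occupied cells (hence S) grows with time n ...
occ = [occupied_cells(M=5000, n_steps=n) for n in (0, 5, 10, 20)]
print("cells occupied vs n:", occ)

# ... but is roughly independent of the ensemble size M, unlike random motion
c1 = occupied_cells(M=5000, n_steps=20)
c2 = occupied_cells(M=10000, n_steps=20)
print("M=5000 vs M=10000 at n=20:", c1, c2)
```

The occupancy growth rate is governed by the Lyapunov exponent (cf. the example below Eq. for K), while doubling M barely changes the count, matching the chaotic-motion criterion above.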
The Kolmogorov-Sinai entropy rate (= K-S entropy = K) is defined as
K = dS/dt = lim_{τ→0} lim_{L→0} lim_{N→∞} [1/(Nτ)] Σ_{n=0}^{N−1} (S_{n+1} − S_n)
= lim_{τ→0} lim_{L→0} lim_{N→∞} (S_N − S_0)/(Nτ)
For iterated maps or Poincaré sections, τ = 1, so that
K = lim_{L→0} lim_{N→∞} (1/N) Σ_{n=0}^{N−1} (S_{n+1} − S_n) = lim_{L→0} lim_{N→∞} (S_N − S_0)/N
Nn  N0 e  n 
E.g., if the number of occupied cells Nn is given by
and all occupied cells have the same probability
then
1
1
ln

Nn
N n cells N n
Sn   k
 k ln N n
pr 
1
Nn
 k  ln N0   n  
k  ln N 0   N    k  ln N 0 
K  lim
k
N 
N
Pesin identity:
K = k Σ_i λ_i,  λ_i = positive average Lyapunov exponents
Alternative Definition of the K-S Entropy
See Schuster
1. Map out the attractor by running a single trajectory for a long time.
2. Divide the attractor into cells.
3. Start a trajectory of N steps & mark the cell it's in at t = nτ as b(n).
4. Do the same for a series of other slightly different trajectories starting from the same initial cell.
5. Calculate the fraction p(i) of trajectories described by the ith cell sequence.
Then
K  lim
N 
S N  S0
N
where
S N  k
 p i  ln p i 
i
Exercise: Show that both definitions of K give roughly the same
result for all 3 types of motions discussed earlier.
b 0 
9.7. Fractal Dimensions
Geometric aspects of attractors
Distribution of state space points of a long time series
→
Dimension of attractor
Importance of dimensionality:
• Determines range of possible dynamical behavior.
• Dictates long-term dynamics.
• Reveals active degrees of freedom.
For a dissipative system:
• D < d,  D = dimension of the attractor,  d = dimension of the state space.
• D* < D,  D* = dimension of the attractor on a Poincaré section.
For a Hamiltonian system:
• D ≤ d − 1,  D = dimension of the set of points generated by one trajectory
(the trajectory is confined to a constant-energy surface).
• D* < D,  D* = dimension of the points on a Poincaré section.
• Dimension is further reduced if there are other constants of motion.
Example: 3-D state space with ẋ = f(x), ∇·f(x) < 0
→ attractor must shrink to a point or a curve
→ system can't be quasi-periodic (no torus)
→ no q.p. solutions for the Lorenz system.
Dissipative system:
Strange attractor = Attractor with fractional dimensions (fractals)
Caution: there are many inequivalent definitions of fractal dimension.
See J. D. Farmer, E. Ott, J. A. Yorke, Physica D 7, 153-180 (1983).
Capacity ( Box-Counting ) Dimension Db
• Easy to understand.
• Not good for high d systems.
1st used by Kolmogorov.
N(R) → k R^{−D_b} as R → 0,
where N(R) = number of boxes of side R that cover the object.
→ ln N(R) → ln k − D_b ln R
→ D_b = −lim_{R→0} [ln N(R)/ln R − ln k/ln R] = −lim_{R→0} ln N(R)/ln R
Example 1: Points in 2-D space
A single point (box = square of side R):
N(R) = 1  →  D_b = −lim_{R→0} ln 1/ln R = 0
A set of N isolated points (box = square of side R, with R < ½ × minimal distance between points):
N(R) = N  →  D_b = −lim_{R→0} ln N/ln R = 0
Example 2: Line segment of length L in 2-D space
Box = square of side R:
N(R) = L/R  →  D_b = −lim_{R→0} ln(L/R)/ln R = −lim_{R→0} [ln L/ln R − 1] = 1
Example 3: Cantor Set
Starting with a line segment of length 1,
take out repeatedly the middle third of each remaining segment.
Caution:
For finite M, the set consists of 2^M line segments → D_b = 1.
For M → ∞ taken first, the set consists of discrete points → D_b = 0.
∴ The limits M → ∞ and R → 0 must be taken simultaneously.
At step M, there remain 2^M segments, each of length 1/3^M:
D_b = −lim_{M→∞} [ln 2^M / ln(1/3)^M] = ln 2/ln 3 ≈ 0.63
Measure of the Cantor set:
Length of set = lim_{M→∞} 2^M (1/3)^M = 0
Equivalently, the length removed is Σ_{M=0}^{∞} 2^M (1/3)^{M+1} = (1/3) · 1/(1 − 2/3) = 1.
Ex. 9.7-5: Fat Fractal
M
Example 4: Koch Curve
Start with a line segment of length 1.
a) Construct an equilateral triangle with the middle third segment as base.
b) Discard base segment.
Repeat a) and b) for each remaining segment.
At step M, there exist 4^M segments, each of length 1/3^M:
D_b = −lim_{M→∞} [ln 4^M / ln(1/3)^M] = ln 4/ln 3 ≈ 1.26
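The box-counting recipe can be tried directly on the Cantor set (my own sketch; the recursion depth M is an arbitrary choice): represent the level-M intervals by exact integer triadic addresses, count boxes of side R = 3^{−k}, and fit the slope of ln N(R) against ln(1/R).

```python
import math

# level-M intervals of the middle-third Cantor set, as integer triadic
# addresses: interval j is [a_j/3^M, (a_j + 1)/3^M]; base-3 digits avoid 1
M = 12
addrs = [0]
for _ in range(M):
    addrs = [3 * a for a in addrs] + [3 * a + 2 for a in addrs]

# box counts N(R) for box sides R = 3^{-k}; integer division keeps this exact
ks = range(1, M + 1)
counts = [len({a // 3 ** (M - k) for a in addrs}) for k in ks]

# least-squares slope of ln N(R) versus ln(1/R) = k ln 3
xs = [k * math.log(3) for k in ks]
ys = [math.log(c) for c in counts]
xb, yb = sum(xs) / len(xs), sum(ys) / len(ys)
Db = sum((u - xb) * (v - yb) for u, v in zip(xs, ys)) / sum((u - xb) ** 2 for u in xs)
print(f"box-counting estimate: {Db:.4f}   (ln2/ln3 = {math.log(2) / math.log(3):.4f})")
```

Because the occupied-box counting is exact in integer arithmetic here (N(3^{−k}) = 2^k), the fitted slope reproduces ln 2/ln 3 ≈ 0.6309 essentially to machine precision; for real state-space data the finite precision of R limits the fit, as noted below.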
Types of Fractals
• Fractals with self-similarity:
a small section of the object, when magnified, is identical with the whole.
• Fractals with self-affinity:
same as self-similarity, but with anisotropic magnification.
• Deterministic fractals:
fixed construction rules.
• Random fractals:
stochastic construction rules (see Chap 11).
Fractal Dimensions of State Space Attractors
Difficulty:
R → 0 not achievable due to finite precision of data.
Remedy:
Alternate definition of fractal dimension (see §9.8)
Logistic map at A∞, renormalization method: D_b = 0.5388… (universal)
Elementary estimates:
Consider A → A∞⁺ (from above).
Sarkovskii's theorem → chaotic bands undergo doubling-splits as A → A∞⁺.
Feigenbaum universality → split bands are narrower by factors 1/α and 1/α².
Assume points in each band are distributed uniformly → the splitting is Cantor-set-like.
1st estimate: R decreases by a factor 1/δ at each splitting:
R_n = R_{n−1}/δ = R_1/δ^{n−1}
→ D_b = −lim_{n→∞} ln 2^n / ln(R_1 δ^{−(n−1)}) = ln 2/ln δ ≈ 0.4498
2nd estimate: R_n = ½(1/α + 1/α²) R_{n−1}
→ D_b = −lim_{n→∞} ln 2^n / ln{R_1 [½(1/α + 1/α²)]^{n−1}} = ln 2 / ln[2α²/(α + 1)] ≈ 0.543
D_b is procedure dependent.
An infinity of dimensional measures is needed to characterize an object (see Chap 10).
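The two elementary estimates above are quick to verify arithmetically from the Feigenbaum constants (a check of the quoted numbers, using δ ≈ 4.6692 and α ≈ 2.5029):

```python
import math

delta = 4.669201609   # Feigenbaum delta
alpha = 2.502907875   # Feigenbaum alpha

d1 = math.log(2) / math.log(delta)                                  # 1st estimate
d2 = math.log(2) / math.log(1.0 / (0.5 * (1 / alpha + 1 / alpha ** 2)))  # 2nd estimate
print(f"1st estimate: {d1:.4f}")
print(f"2nd estimate: {d2:.4f}")
```

Both bracket the renormalization value 0.5388, consistent with D_b being procedure dependent.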
The Similarity Dimensions for Nonuniform Fractals
9.8. Correlation Dimension & a Computational History
9.9. Comments & Conclusions