Simulation Techniques: Outline

Lecture 1: Introduction: statistical mechanics, ergodicity, statistical errors.
Lecture 2: Molecular Dynamics: integrators, boundary conditions in space and time, long-range potentials and neighbor tables, thermostats and constraints, single-body and two-body correlation functions, order parameters, dynamical properties.
Lecture 3: Monte Carlo Methods: MC vs. MD, Markov Chain MC, transition rules.
NB All material at beginner’s level. Interruptions welcomed!
Introduction to Simulation:
Lecture 1
• Why do simulations?
• Some history
• Review of statistical mechanics
• Newton’s equations and ergodicity
• The FPU “experiment”
• How to get something from simulations: statistical errors
Why do simulations?
• Simulations are the only general method for “solving”
many-body problems. Other methods involve
approximations and experts.
• Experiment is limited and expensive. Simulations can complement experiment.
• Simulations are easy even for complex systems.
• They scale up with computer power.
Two Simulation Modes
A. Given the phenomena, invent a model to mimic the problem: the semi-empirical approach. But one cannot reliably extrapolate the model away from the empirical data.
B. Maxwell, Boltzmann and Schrödinger gave us the model. All we must do is numerically solve the mathematical problem and determine the properties (first-principles or ab initio methods). This is what we will talk about.
“The general theory of quantum mechanics is now almost
complete. The underlying physical laws necessary for the
mathematical theory of a large part of physics and the
whole of chemistry are thus completely known, and the
difficulty is only that the exact application of these laws
leads to equations much too complicated to be soluble.”
Dirac, 1929
Challenges of Simulation
Physical and mathematical underpinnings:
• What approximations come in?
• Computer time is limited, so we can treat only a few particles for short periods of time. (Space-time is 4-d; Moore’s Law implies lengths and times will double only every 6 years for an O(N) method.)
• Systems with many particles and long time scales are problematic.
• Hamiltonian is unknown, until we solve the quantum
many-body problem!
• How do we estimate errors? Statistical and systematic.
• How do we manage ever more complex codes?
Short history of Molecular Simulations
• Metropolis, Rosenbluth, Teller (1953): Monte Carlo simulation of hard disks.
• Fermi, Pasta, Ulam (1954): “experiment” on ergodicity.
• Alder & Wainwright (1958): liquid-solid transition in hard spheres; “long time tails” (1970).
• Vineyard (1960): radiation damage using MD.
• Rahman (1964): liquid argon; water (1971).
• Verlet (1967): correlation functions, ...
• Andersen, Rahman, Parrinello (1980): constant-pressure MD.
• Nosé, Hoover (1983): constant-temperature thermostats.
• Car, Parrinello (1985): ab initio MD.
Molecular Dynamics (MD)
• Pick the particles, masses and potential.
• Initialize positions and momenta (boundary conditions in space and time).
• Solve F = m a to determine r(t), v(t).  [Newton (1667-87)]
• Compute properties along the trajectory.
• Estimate errors.
• Try to use the simulation to answer physical questions.
What are the forces?
• Crucial, since V(q) determines the quality of the result.
• In this lecture we will use semi-empirical potentials:
potential is constructed on theoretical grounds but using
some experimental data.
• Common examples are Lennard-Jones, Coulomb,
embedded atom potentials. They are only good for simple
materials. We will not discuss these potentials for reasons
of time.
• The ab initio philosophy is that potentials are to be
determined directly from quantum mechanics as needed.
• But computer power is not yet adequate in general.
• A powerful approach is to use simulations at one level to
determine parameters at the next level.
Statistical Ensembles
• Classical phase space is 6N variables (p_i, q_i) plus a Hamiltonian function H(q,p,t).
• We may know a few constants of motion such as energy,
number of particles, volume...
• Ergodic hypothesis: each state consistent with our knowledge
is equally “likely”; the microcanonical ensemble.
• Implies the average value does not depend on initial
conditions.
• A system in contact with a heat bath at temperature T will be
distributed according to the canonical ensemble:
exp(-H(q,p)/kBT )/Z
• The momentum integrals can be performed.
• Are systems in nature really ergodic? Not always!
Ergodicity
• Fermi- Pasta- Ulam “experiment” (1954)
• 1-D anharmonic chain: V = Σ_i [ (q_{i+1} − q_i)² + α (q_{i+1} − q_i)³ ]  (a code sketch follows this list)
• The system was started with all of the energy in the lowest mode. Equipartition would imply that the energy would flow into the other modes.
• Systems at low temperatures never come into equilibrium.
The energy sloshes back and forth between various modes
forever.
• At higher temperature many-dimensional systems become
ergodic.
• Area of non-linear dynamics devoted to these questions.
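For concreteness, here is a minimal Python sketch of such an FPU-style α-chain, integrated with velocity Verlet while watching the harmonic normal-mode energies. The chain length, amplitude, cubic coefficient alpha and time step are illustrative choices, not the values of the original report.

import numpy as np

# FPU alpha-chain with fixed ends: V = sum_i [ (1/2) d_i^2 + (alpha/3) d_i^3 ],  d_i = q_{i+1} - q_i
N, alpha, h, nsteps = 32, 0.25, 0.05, 50000
j = np.arange(1, N + 1)
q = 4.0 * np.sin(np.pi * j / (N + 1))          # all energy initially in the lowest mode
p = np.zeros(N)

def forces(q):
    qp = np.concatenate(([0.0], q, [0.0]))     # fixed ends
    d = np.diff(qp)                            # bond stretches
    return (d[1:] - d[:-1]) + alpha * (d[1:]**2 - d[:-1]**2)

def mode_energies(q, p):
    modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(j, j) / (N + 1))
    Q, P = modes @ q, modes @ p
    omega2 = 4.0 * np.sin(np.pi * j / (2 * (N + 1)))**2
    return 0.5 * (P**2 + omega2 * Q**2)        # harmonic part of the mode energies

f = forces(q)
for step in range(nsteps):                     # velocity Verlet
    p += 0.5 * h * f
    q += h * p
    f = forces(q)
    p += 0.5 * h * f
    if step % 10000 == 0:
        print(step, np.round(mode_energies(q, p)[:4], 3))   # energy sharing among low modes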
Let us say here that the results of our computations were,
from the beginning, surprising us. Instead of a continuous
flow of energy from the first mode to the higher modes, all
of the problems show an entirely different behavior. …
Instead of a gradual increase of all the higher modes, the
energy is exchanged, essentially, among only a certain few.
It is, therefore, very hard to observe the rate of “thermalization” or mixing in our problem, and this was the initial purpose of the calculation.
Fermi, Pasta, Ulam (1954)
• Equivalent to exponential divergence of trajectories, or
sensitivity to initial conditions. (This is a blessing for
numerical work. Why?)
• What we mean by ergodic is that after some interval of time any state of the system is possible.
• Example: shuffle a card deck 10 times. Any of the 52!
arrangements could occur with equal frequency.
• Aside from these mathematical questions, there is always a practical question of convergence. How do you judge whether your results have converged? There is no sure way. Why? Only “experimental” tests:
– Occasionally do very long runs.
– Use different starting conditions.
– Shake up the system.
– Compare to experiment.
Statistical ensembles
• (E, V, N) microcanonical, constant volume
• (T, V, N) canonical, constant volume
• (T, P, N) constant pressure
• (T, V, μ) grand canonical
• Which is best? It depends on:
– the question you are asking
– the simulation method: MC or MD (MC is better for phase transitions)
– your code.
• Lots of work in the last two decades on various ensembles.
Definition of Simulation
• What is a simulation?
– An internal state “S”
– A rule for changing the state: S_{n+1} = T(S_n)
– We repeat the iteration many times.
• Simulations can be
– Deterministic (e.g. Newton’s equations = MD)
– Stochastic (Monte Carlo, Brownian motion, …)
• Typically they are ergodic: there is a correlation time T; for times much longer than that, all non-conserved properties are close to their average values. This is used for:
– the warm-up period
– getting independent samples for computing errors.
Problems with estimating errors
• Any good simulation quotes systematic and
statistical errors for anything important.
• Central limit theorem: after “enough” averaging, any
statistical quantity approaches a normal distribution.
• One standard deviation means that 2/3 of the time the correct answer is within ±σ of the estimate.
• The problem in simulations is that the data are correlated in time: it takes a “correlation time” to become “ergodic”.
• We must throw away the initial transient and block
successive parts to estimate the mean value and error.
• The error and mean are simultaneously determined from
the data. We need at least 20 independent data points.
Estimating Errors
• Trace of A(t)
• Equilibration time
• Histogram of the values of A ( P(A) )
• Mean of A ( a )
• Variance of A ( v )
• Estimate of the mean: Σ_t A(t)/N
• Estimate of the variance
• Autocorrelation of A ( C(t) )
• Correlation time ( κ )
• The (estimated) error of the (estimated) mean ( σ )
• Efficiency [ = 1/(CPU time * error²) ]
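A minimal Python sketch of this analysis, using an artificial correlated series as a stand-in for a real trace A(t): throw away the equilibration, estimate the autocorrelation time, and get the error of the mean from about 20 blocks.

import numpy as np

rng = np.random.default_rng(0)
a = np.empty(100000); a[0] = 0.0
for t in range(1, a.size):                     # AR(1) series: correlated fake data
    a[t] = 0.9 * a[t - 1] + rng.normal()

data = a[1000:]                                # discard the initial transient
mean, var = data.mean(), data.var()

# integrated autocorrelation time from C(t)
c = np.array([np.mean((data[:-t] - mean) * (data[t:] - mean)) for t in range(1, 200)])
kappa = 1.0 + 2.0 * np.sum(c / var)

# error of the mean from ~20 blocks, each much longer than the correlation time
nblocks = 20
blocks = data[: data.size // nblocks * nblocks].reshape(nblocks, -1).mean(axis=1)
error = blocks.std(ddof=1) / np.sqrt(nblocks)
print(f"mean = {mean:.3f}, correlation time ~ {kappa:.1f}, error of mean = {error:.3f}")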
Data Spork
Interactive code to perform statistical analysis of data.
Shumway, Draeger, Ceperley
Statistical thinking is slippery
• “Shouldn’t the energy settle down to a constant?”
– NO. It fluctuates forever. It is the overall mean which converges.
• “My procedure is too complicated to compute errors”
– NO. Run your whole code 10 times and compute the mean and
variance from the different runs
• “The cumulative energy has converged”.
– BEWARE. Even pathological cases have smooth cumulative
energy curves.
• “Data set A differs from B by 2 error bars. Therefore it
must be different”.
– This is normal in 1 out of 10 cases.
Molecular Dynamics:
Lecture 2
• What to choose in an integrator
• The Verlet algorithm
• Boundary conditions in space and time
• Neighbor tables
• Long-ranged interactions
• Thermostats
• Constraints
Criteria for an Integrator
• Simplicity (how long does it take to write and debug?)
• Efficiency (how fast does it advance a given system?)
• Stability (what is the long-term energy conservation?)
• Reliability (can it handle a variety of temperatures, densities, potentials?)
• Compare statistical errors (which go as h^{-1/2}) with time-step errors, which go as h^p with p = 2, 3, ...
Characteristics of simulations.
• Potentials are highly non-linear with discontinuous higher derivatives
either at the origin or at the box edge.
• Small changes in accuracy lead to totally different trajectories. (the
mixing or ergodic property)
• We need only low accuracy because the potentials are not very realistic. Universality saves us: a badly integrated system is probably similar to our original system. This is not true in the field of non-linear dynamics or in studying the solar system.
• CPU time is totally dominated by the calculation of forces.
• Memory limits are not too important.
• Energy conservation is important and is roughly equivalent to time-reversal invariance: allow about a 0.01 kT fluctuation in the total energy.
The Verlet Algorithm
The nearly universal choice for an integrator is the Verlet (leapfrog)
algorithm
r(t+h) = r(t) + v(t) h + 1/2 a(t) h² + b(t) h³ + O(h⁴)    (Taylor expand)
r(t−h) = r(t) − v(t) h + 1/2 a(t) h² − b(t) h³ + O(h⁴)    (reverse time)
r(t+h) = 2 r(t) − r(t−h) + a(t) h² + O(h⁴)                (add the two)
v(t) = ( r(t+h) − r(t−h) ) / (2h) + O(h²)                 (estimate velocities)
Time-reversal invariance is built in, so the energy does not drift.
How to set the time step
• Adjust to get energy conservation to 1% of kinetic energy.
• Even if errors are large, you are close to the exact
trajectory of a nearby physical system with a different
potential.
• Since we don’t really know the potential surface that
accurately, this is satisfactory.
• Leapfrog algorithm has a problem with round-off error.
• Use the equivalent velocity Verlet instead:
r(t+h) = r(t) +h [ v(t) +(h/2) a(t)]
v(t+h/2) = v(t)+(h/2) a(t)
v(t+h)=v(t+h/2) + (h/2) a(t+h)
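A minimal Python sketch of this velocity Verlet update for unit-mass particles; the harmonic force used here is only a stand-in for a real force routine.

import numpy as np

def force(r):
    return -r                        # stand-in force; replace with the real pair forces

def velocity_verlet(r, v, h, nsteps):
    a = force(r)
    for _ in range(nsteps):
        v += 0.5 * h * a             # v(t+h/2) = v(t) + (h/2) a(t)
        r += h * v                   # r(t+h)   = r(t) + h [ v(t) + (h/2) a(t) ]
        a = force(r)
        v += 0.5 * h * a             # v(t+h)   = v(t+h/2) + (h/2) a(t+h)
    return r, v

r = np.array([1.0, 0.0]); v = np.array([0.0, 1.0])
r, v = velocity_verlet(r, v, h=0.01, nsteps=1000)
print(r, v)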
Higher Order Methods?
Statistical error for fixed CPU time (figure).
• Higher order does not mean better.
• The problem is that the potentials are not analytic.
• Systematic errors.
• Study from Berendsen (1986).
Spatial Boundary Conditions
Important because spatial scales are limited. What can we
choose?
• No boundaries, e.g. a droplet or a protein in vacuum. If a droplet has 1 million atoms and the surface layer is 5 atoms thick, then about 25% of the atoms are on the surface.
• Periodic boundaries: the most popular choice because there are no surfaces (see next slide), but there can still be problems.
• Simulations on a sphere.
• External potentials.
• Mixed boundaries (e.g. infinite in z, periodic in x and y).
Periodic Boundary Conditions
Is the system periodic or
just an infinite array of
images?
How do you calculate
distances in a periodic
system?
Periodic distances
Minimum Image Convention: take the closest distance: |r|_M = min_n | r + nL |  (a code sketch follows at the end of this slide)
• The potential is cut off so that V(r) = 0 for r > L/2, since the force needs to be continuous. How about the derivative?
• Image potential: V_I = Σ_n v(r_i − r_j + nL)
• For a long-range potential this leads to the Ewald image potential. You need a background and a convergence method (more later).
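A minimal Python sketch of the minimum image distance in a cubic box of side L; the box size and positions are only illustrative.

import numpy as np

def minimum_image(ri, rj, L):
    """Separation ri - rj folded to the nearest periodic image in a cubic box of side L."""
    d = ri - rj
    d -= L * np.round(d / L)          # fold each component into [-L/2, L/2)
    return d

L = 10.0
ri = np.array([0.5, 9.8, 3.0])
rj = np.array([9.7, 0.2, 3.5])
d = minimum_image(ri, rj, L)
print(d, np.linalg.norm(d))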
Complexity of Force Calculations
• Complexity is defined as how a computer algorithm scales
with the number of degrees of freedom (particles)
• The number of terms in a pair potential is N(N−1)/2, i.e. O(N²).
• For a short-range potential you can use neighbor tables to reduce this to O(N):
– a (Verlet) neighbor list for systems that move slowly
– a bin-sort list (map the system onto a mesh and find neighbors from the mesh table)
• Long-range potentials with Ewald sums are O(N^{3/2}), but Fast Multipole Algorithms are O(N) for very large N.
LJ Force Computation
! Loop over all pairs of atoms.
do i=2,natoms
  do j=1,i-1
    ! Compute distance between i and j.
    r2 = 0
    do k=1,ndim
      dx(k) = r(k,i) - r(k,j)
      ! Periodic boundary conditions (minimum image).
      if(dx(k).gt. ell2(k)) dx(k) = dx(k)-ell(k)
      if(dx(k).lt.-ell2(k)) dx(k) = dx(k)+ell(k)
      r2 = r2 + dx(k)*dx(k)
    enddo
    ! Only compute for pairs inside the radial cutoff.
    if(r2.lt.rcut2) then
      r2i=sigma2/r2
      r6i=r2i*r2i*r2i
      ! Shifted Lennard-Jones potential.
      pot = pot+eps4*r6i*(r6i-1)-potcut
      ! Radial force.
      rforce = eps24*r6i*r2i*(2*r6i-1)
      do k = 1, ndim
        force(k,i)=force(k,i) + rforce*dx(k)
        force(k,j)=force(k,j) - rforce*dx(k)
      enddo
    endif
  enddo
enddo
Constant Temperature MD
• The problem in MD is how to control the temperature (a boundary condition in time).
• How do we start the system? (Sample velocities from a Gaussian distribution.) If we start from a perfect lattice, then as the system becomes disordered it will suck up kinetic energy and cool down (and vice versa when starting from a gas).
• QUENCH method: run for a while, compute the kinetic energy, then rescale the momenta to the correct temperature; repeat as needed.
• The Nosé-Hoover thermostat controls the temperature automatically by coupling the microcanonical system to a heat bath.
• These methods have non-physical dynamics since they do not respect the locality of interactions. Such effects are O(1/N).
Quench method
• Run for a while, compute the kinetic energy, then rescale the momenta to the correct temperature; repeat as needed:
  v_i' = v_i sqrt( T / T_I ),   where   T_I = Σ_i m_i v_i² / ( 3(N − 1) )
• Control is at best O(1/N).
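A minimal Python sketch of this rescaling, assuming k_B = 1, a velocities array of shape (N, 3) and a mass array m; the numbers are illustrative.

import numpy as np

def rescale_velocities(v, m, T_target):
    """Rescale velocities so the instantaneous temperature T_I matches T_target (k_B = 1)."""
    N = v.shape[0]
    T_inst = np.sum(m[:, None] * v**2) / (3 * (N - 1))
    return v * np.sqrt(T_target / T_inst)

rng = np.random.default_rng(1)
m = np.ones(64)
v = rng.normal(size=(64, 3))
v = rescale_velocities(v, m, T_target=1.15)
print(np.sum(m[:, None] * v**2) / (3 * (64 - 1)))   # prints ~1.15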
Nose-Hoover thermostat
• MD in canonical distribution (TVN)
• Introduce a friction force ζ(t) coupling the system to a reservoir at temperature T:
  dp/dt = F(q, t) − ζ(t) p(t)
• The dynamics of the friction coefficient is chosen to give the canonical ensemble:
  Q dζ/dt = Σ_i (1/2) m_i v_i² − (3N/2) k_B T
• The feedback drives the kinetic energy toward the target temperature.
• Q = “heat bath mass”. Large Q means weak coupling.
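A minimal Python sketch of this coupled dynamics, assuming k_B = 1, unit masses, a simple first-order update of ζ, and a toy harmonic force; a production implementation would use a time-reversible integrator.

import numpy as np

def nose_hoover_step(r, v, zeta, force, h, Q, T):
    """One crude Nose-Hoover step: zeta grows when the kinetic energy exceeds (3N/2) T."""
    N = r.shape[0]
    a = force(r) - zeta * v                     # dp/dt = F - zeta p  (unit masses)
    v = v + h * a
    r = r + h * v
    kinetic = 0.5 * np.sum(v**2)
    zeta = zeta + (h / Q) * (kinetic - 1.5 * N * T)
    return r, v, zeta

force = lambda r: -r                            # toy system: independent harmonic oscillators
rng = np.random.default_rng(2)
r = rng.normal(size=(64, 3)); v = rng.normal(size=(64, 3)); zeta = 0.0
for _ in range(20000):
    r, v, zeta = nose_hoover_step(r, v, zeta, force, h=0.005, Q=10.0, T=1.15)
print(np.sum(v**2) / (3 * 64))                  # instantaneous temperature, fluctuating near 1.15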
Effect of thermostat
The system temperature fluctuates, but how quickly? (Figure compares Q = 1 and Q = 100.)
Input file for the run:
DIMENSION 3
TYPE argon 256 48.
POTENTIAL argon argon 1 1. 1. 2.5
DENSITY 1.05
TEMPERATURE 1.15
TABLE_LENGTH 10000
LATTICE 4 4 4 4
SEED 10
WRITE_SCALARS 25
NOSE 100.
RUN MD 2200 .05
• Thermostats are needed in non-equilibrium situations where there might be a flux of energy into or out of the system.
• It is time-reversible, deterministic and goes to the canonical distribution, but:
• How natural is the thermostat?
– Interactions are non-local; they propagate instantaneously.
– Interaction with a single heat-bath variable: the dynamics can be strange. Be careful to adjust the “mass”.
REFERENCES
1. S. Nosé, J. Chem. Phys. 81, 511 (1984); Mol. Phys. 52, 255 (1984).
2. W. Hoover, Phys. Rev. A 31, 1695 (1985).
Constant Pressure
• To generalize MD to constant pressure (the TPN ensemble), follow a procedure similar to the thermostat: the size of the box is coupled to the internal pressure.
• The volume is coupled to the virial pressure:
  P = (1/3V) [ 2K − Σ_{i<j} r_ij dv(r_ij)/dr ]
• The unit-cell shape can also change.
Parrinello-Rahman simulation
• 500 KCl ions at 300 K
• First P = 0
• Then P = 44kB
• The system spontaneously changes from the rocksalt to the CsCl structure.
• Can “automatically” find new crystal structures.
• A nice feature is that the boundaries are flexible.
• But one is not guaranteed to get out of a local minimum.
• One can get the wrong answer: careful free-energy calculations are needed to establish the stable structure.
• All such methods have non-physical dynamics since they do not respect the locality of interactions.
• Non-physical effects are O(1/N).
REFERENCES
1. H. C. Andersen, J. Chem. Phys. 72, 2384 (1980).
2. M. Parrinello and A. Rahman, J. Appl. Phys. 52, 7158
(1981).
Constraints in MD
• Some problems require constraints:
– fixed bond length (water molecule). We would like to
get rid of high frequency motion (not interesting)
– In LDA-MD we need to keep wavefunctions
orthogonal.
– Putting constraints in “by hand” is difficult (for
example, working in polar coordinates)
• WARNING: constraints change the ensemble.
• SHAKE algorithm lets dynamics move forward without
constraint; then forces it back to satisfy the constraint.
• Let the equation for a constraint be: σ(R) = 0
• A bond-length constraint is σ(R) = |r_i − r_j|² − a²
• Add a Lagrange multiplier λ(t); the equation of motion is:
  m a(t) = F − λ(t) ∇σ(R, t)
• Using the Verlet (leapfrog) method:
  R(t+h) = 2R(t) − R(t−h) + h² F(t)/m − h² λ(t) ∇σ(R, t)
• What should we use for λ(t)? Newton’s method, so that σ(R(t+h)) = 0:
  λ' = λ − σ / (dσ/dλ)
• Iterate until convergence.
• With many constraints: cycle through the constraint equations.
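A minimal Python sketch of this iteration for a single bond-length constraint between two unit-mass particles; the function name and the test numbers are illustrative, not production SHAKE code.

import numpy as np

def shake_bond(r_old, r_new, a, h, m=1.0, tol=1e-10, maxiter=50):
    """Correct an unconstrained step of two particles so that |r1 - r2| = a."""
    d_old = r_old[0] - r_old[1]                  # gradient of sigma uses the old configuration
    r = r_new.copy()
    for _ in range(maxiter):
        d = r[0] - r[1]
        sigma = d @ d - a * a                    # constraint sigma(R) = |r1 - r2|^2 - a^2
        if abs(sigma) < tol:
            break
        dlam = sigma * m / (8 * h * h * (d @ d_old))   # Newton step for lambda
        r[0] -= (h * h / m) * dlam * 2 * d_old   # correction -h^2 lambda grad_1 sigma
        r[1] += (h * h / m) * dlam * 2 * d_old   # correction -h^2 lambda grad_2 sigma
    return r

r_old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])       # bond length a = 1
r_new = np.array([[0.02, 0.01, 0.0], [1.05, -0.01, 0.0]])  # after an unconstrained step
r = shake_bond(r_old, r_new, a=1.0, h=0.01)
print(np.linalg.norm(r[0] - r[1]))                          # ~1.0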
Brownian dynamics
• Put a system in contact with a heat bath.
• Will discuss how to do this later.
• Leads to discontinuous velocities.
• Not necessarily a bad thing, but requires some physical insight into how the bath interacts with the system in question.
• For example, this is appropriate for a large molecule
(protein or colloid) in contact with a solvent. Other heat
baths in nature are given by phonons and photons,…
Monitoring the simulation
• Static properties: pressure, specific heat, etc.
• Density
• Pair correlation in real space and Fourier space
• Order parameters: how to tell a liquid from a solid
Thermodynamic properties
• Internal energy = kinetic energy + potential energy
• Kinetic energy is kT/2 per momentum
• Specific heat = mean squared fluctuation in the energy
• Pressure can be computed from the virial theorem
• Compressibility, bulk modulus, sound speed
• But we have problems with the basic quantities of entropy and free energy, since they are not simple averages over the Boltzmann distribution. We will discuss this later.
Microscopic Density
ρ(r) = ⟨ Σ_i δ(r − r_i) ⟩
Or you can take its Fourier transform (a code sketch follows below):
ρ_k = ⟨ Σ_i exp(i k·r_i) ⟩
(This is a good way to smooth the density.)
• A solid has broken symmetry (order parameter): the density is not constant.
• At a liquid-gas transition the density is also inhomogeneous.
• In periodic boundary conditions the k-vectors are on a grid: k = (2π/L)(n_x, n_y, n_z). Long-wavelength modes are absent.
• In a solid, Lindemann’s ratio gives a rough idea of melting: u² = ⟨(r_i − z_i)²⟩ / d², with z_i the lattice site and d the neighbor spacing.
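A minimal Python sketch computing ρ_k on the allowed k-grid for particles in a cubic periodic box; the random positions are only there to make it runnable.

import numpy as np

def rho_k(positions, L, nmax=3):
    """Fourier components rho_k = sum_i exp(i k.r_i) on the PBC grid k = (2 pi / L) n."""
    ns = np.array([(nx, ny, nz)
                   for nx in range(-nmax, nmax + 1)
                   for ny in range(-nmax, nmax + 1)
                   for nz in range(-nmax, nmax + 1) if (nx, ny, nz) != (0, 0, 0)])
    kvecs = 2 * np.pi / L * ns
    phases = positions @ kvecs.T            # shape (N, number of k-vectors)
    return kvecs, np.exp(1j * phases).sum(axis=0)

L = 10.0
rng = np.random.default_rng(3)
pos = rng.uniform(0, L, size=(256, 3))
kvecs, rk = rho_k(pos, L)
print(np.abs(rk[:5]))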
Order parameters
• A system has certain symmetries: translation invariance.
• At high temperatures one expects the system to have those same symmetries at the microscopic scale (e.g. a gas).
• BUT as the system cools those symmetries are broken (a gas condenses).
• At a liquid-gas transition the density is no longer fixed: droplets form. The density is the order parameter.
• At a liquid-solid transition, both rotational and translational symmetry are broken.
• The best way to monitor the transition is to look for the
dynamics of the order parameter.
Electron Density during exchange
2d Wigner crystal (quantum)
Snapshots of densities
Liquid or crystal or glass?
Blue spots are defects.
Density distribution within a helium droplet
During the addition of a molecule, it travels from the surface to the interior. Red is high helium density, blue is low density.
Pair Correlation Function, g(r)
The primary quantity in a liquid is the probability distribution of pairs of particles: given a particle at the origin, what is the density of surrounding particles? (A code sketch follows below.)
g(r) = (2V/N²) ⟨ Σ_{i<j} δ(r_i − r_j − r) ⟩
• Density-density correlation.
• Related to thermodynamic properties, e.g. the potential energy:
  ⟨ Σ_{i<j} v(r_ij) ⟩ = (N ρ / 2) ∫ d³r v(r) g(r)
g(r) in liquid and solid helium
• The first peak is at the inter-particle spacing (the shell of neighbors around a particle).
• g(r) only extends out to r < L/2 in periodic boundary conditions.
(The static) Structure Factor S(K)
• The Fourier transform of the pair correlation function is the structure factor (a code sketch follows below):
  S(k) = ⟨ |ρ_k|² ⟩ / N                                  (1)
  S(k) = 1 + ρ ∫ dr exp(i k·r) ( g(r) − 1 )              (2)
• The problem with (2) is extending g(r) to infinity.
• This is what is measured in neutron and X-ray scattering experiments.
• It can provide a direct test of the assumed potential.
• Used to see the state of a system: liquid, solid, glass, gas? (Much better than g(r).)
• The order parameter in a solid is ρ_G, with G a reciprocal lattice vector.
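Building on the ρ_k sketch above, a minimal Python estimate of S(k) = ⟨|ρ_k|²⟩/N averaged over stored configurations; again the random coordinates are only for illustration.

import numpy as np

def structure_factor(configs, L, nmax=3):
    """S(k) = <|rho_k|^2>/N on the periodic k-grid, averaged over configurations."""
    N = configs[0].shape[0]
    ns = np.array([(nx, ny, nz)
                   for nx in range(-nmax, nmax + 1)
                   for ny in range(-nmax, nmax + 1)
                   for nz in range(-nmax, nmax + 1) if (nx, ny, nz) != (0, 0, 0)])
    kvecs = 2 * np.pi / L * ns
    sk = np.zeros(len(kvecs))
    for pos in configs:
        rho_k = np.exp(1j * (pos @ kvecs.T)).sum(axis=0)
        sk += np.abs(rho_k)**2 / N
    return np.linalg.norm(kvecs, axis=1), sk / len(configs)

L = 10.0
rng = np.random.default_rng(5)
configs = [rng.uniform(0, L, size=(256, 3)) for _ in range(10)]
k, sk = structure_factor(configs, L)
print(round(sk.mean(), 2))    # ~1 for uncorrelated coordinates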
• In a perfect lattice S(k) is non-zero only on the reciprocal lattice vectors G: S(G) = N.
• At non-zero temperature (or for a quantum system) this is reduced by the Debye-Waller factor: S(G) = 1 + (N − 1) exp(−G²⟨u²⟩/3).
• To tell a liquid from a crystal, we see how S(G) scales as the system is enlarged. In a solid, S(k) has peaks that scale with the number of atoms.
• The compressibility is given by: κ_T = S(0) / (ρ k_B T).
• We can use this to detect the liquid-gas transition, since the compressibility should diverge as k approaches 0. (The order parameter is the density.)
S(k) for a crystal and for a liquid (figure).
Here is a snapshot of a binary mixture.
What correlation function would be
important?
Dynamical Properties
• Fluctuation-Dissipation theorem:
  χ(ω) = ∫_0^∞ dt e^{iωt} ⟨ (dA(0)/dt) B(t) ⟩
  Here A e^{−iωt} is a perturbation and χ(ω) e^{−iωt} is the response of B. We calculate the average on the left-hand side in equilibrium (no external perturbation).
• The fluctuations we see in equilibrium are equivalent to how a non-equilibrium system approaches equilibrium (Onsager regression hypothesis).
• The density-density response function S(k,ω) can be measured by scattering and is sensitive to collective motions.
Diffusion Constant
• The diffusion constant is defined by Fick’s law and controls how systems mix:
  dρ(r,t)/dt = D ∇²ρ(r,t)
• Microscopically we calculate (a code sketch follows below):
  D = lim_{t→∞} ⟨ (r(t) − r(0))² ⟩ / (6t) = (1/3) ∫_0^∞ dt ⟨ v(0)·v(t) ⟩
• Alder and Wainwright discovered long-time tails in the velocity autocorrelation function, so that the diffusion constant does not exist in 2D.
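A minimal Python sketch estimating D from the mean-squared displacement of stored, unwrapped trajectories; the random-walk data below are only a stand-in for real MD output.

import numpy as np

def diffusion_constant(traj, h, lag):
    """D from the mean-squared displacement at time lag*h: <(r(t)-r(0))^2> / (6t)."""
    disp = traj[lag] - traj[0]                  # traj shape: (nsteps, N, 3), unwrapped
    msd = np.mean(np.sum(disp**2, axis=1))
    return msd / (6 * lag * h)

h, D0, nsteps, N = 0.01, 0.5, 5000, 100
rng = np.random.default_rng(6)
steps = rng.normal(scale=np.sqrt(2 * D0 * h), size=(nsteps, N, 3))   # random walk with D = 0.5
traj = np.cumsum(steps, axis=0)
print(round(diffusion_constant(traj, h, lag=4000), 2))               # ~0.5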
Mixture simulation with CLAMPS
Initial condition and a later time (figure).
Transport Coefficients
• Diffusion: particle flux
• Viscosity: stress tensor
• Heat transport: energy current
• Electrical conductivity: electrical current
These can also be evaluated with non-equilibrium simulations, using thermostats to control:
• Impose a shear flow
• Impose a heat flow
CLAMPS input file
TYPE argon 864 48.
POTENTIAL argon argon 1 1. 1. 2.5
DENSITY 0.35
TEMPERATURE 1.418
LATTICE
SEED 10
WRITE_SCALARS 10
WRITE_COORD 20
RUN MD 300 .032
The first six lines set up the system, the WRITE commands set the output intervals, and the RUN line executes 300 dynamics steps.
Monte Carlo:
Lecture 3
• What is Monte Carlo?
• Theory of Markov processes
• Detailed balance and transition rules
• Grand canonical and polymer reptation
• Can Monte Carlo tell us about real dynamics?
• Why Monte Carlo instead of MD?
Monte Carlo
• Invented in Los Alamos in 1944 and named after the
famous casino.
• A way of doing integrals by using random numbers (a code sketch follows below):
  I = ∫ dR f(R) = ∫ dR P(R) [ f(R)/P(R) ] = lim_{N→∞} (1/N) Σ_i f(R_i)/P(R_i)
  where P(R) is the sampling function and the R_i are drawn from P(R). Convergence is guaranteed by the Central Limit Theorem.
• If P(R) ∝ f(R) then the variance → 0.
• It is a faster way of doing integrals for dimensions > 4.
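A minimal Python sketch of this importance-sampling estimate for a 6-dimensional integral; the integrand f and the Gaussian sampling function P are illustrative choices.

import numpy as np

rng = np.random.default_rng(7)
d, nsamples = 6, 200000

def f(R):
    # illustrative integrand: a Gaussian times a smooth factor
    return np.exp(-0.5 * np.sum(R**2, axis=1)) * (1 + 0.1 * R[:, 0]**2)

# sampling function: unit-variance Gaussian, P(R) = (2 pi)^(-d/2) exp(-|R|^2 / 2)
R = rng.normal(size=(nsamples, d))
P = (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * np.sum(R**2, axis=1))

weights = f(R) / P                       # estimate I = (1/N) sum f(R_i)/P(R_i)
I = weights.mean()
error = weights.std(ddof=1) / np.sqrt(nsamples)
print(f"I = {I:.2f} +/- {error:.2f}")    # exact value: (2 pi)^3 * 1.1 ~ 272.9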
Theory of Markov processes
• Markov process is a random walk through phase space:
  s1 → s2 → s3 → s4 → …
• If it is ergodic, then it will converge to a unique stationary distribution (only one eigenvalue equal to 1).
• Detailed balance: π(s) P(s → s') = π(s') P(s' → s), i.e. the rates between s and s' balance.
• We achieve detailed balance by rejecting moves. The acceptance probability is:
  min[ 1, T(s' → s) π(s') / ( T(s → s') π(s) ) ]
“Classic” Metropolis method
The Metropolis, Rosenbluth, Teller (1953) method:
• Move from s to s' with probability T(s → s') = constant.
• Accept the move with probability:
  a(s → s') = min[ 1, exp( −(E(s') − E(s)) / k_B T ) ]
• Repeat many times.
Given ergodicity, the distribution of s will be the canonical distribution:
  π(s) = exp( −E(s)/k_B T ) / Z
MONTE CARLO CODE
call initstate(sold)                 ! initialize the state
vold = action(sold)
LOOP {
  call sample(sold,snew,pnew,1)      ! sample snew; pnew = prob. of the forward move
  vnew = action(snew)                ! trial action
  call sample(snew,sold,pold,0)      ! find prob. of going backward
  a = exp(-vnew+vold)*pold/pnew      ! acceptance prob.
  if(a.gt.sprng()) {                 ! sprng() is a uniform random number
    sold = snew                      ! accept the move
    vold = vnew
    naccept = naccept+1 }
  call averages(sold)                ! collect statistics
}
How to sample
S_new = S_old + Δ (sprng() − 0.5)      (a uniform distribution in a cube of side Δ)
• It is more efficient to move one particle at a time, because only the energy of that particle comes in, and the movement and acceptance ratio will be larger:
  A(s → s') = exp( −β [ V(s') − V(s) ] ) = exp( −β Σ_{j≠i} [ v(r_i' − r_j) − v(r_i − r_j) ] )
Monte Carlo Rules
• Optimize the efficiency [ = 1/(CPU time × error²) ] with respect to any sampling parameters.
• Measure the acceptance ratio. Set it to roughly 1/2 by varying the “step size”.
• Accepted and rejected states count the same!
• Exact: no time-step error and no ergodic problems in principle, but no dynamics.
Optimizing the moves
• Any transition rule is allowed as long as you can go
anywhere in phase space with a finite number of them.
(ergodic)
• Try to find a T(s → s') ≈ π(s'). If you can, the acceptance ratio will be 1.
• We can use the forces to push the walk in the right direction. Taylor expand about the current point:
  V(r) ≈ V(r_0) − F(r_0)·(r − r_0)
• Set T(s → s') ∝ exp[ −β ( V(r_0) − F(r_0)·(r − r_0) ) ]
• This leads to force-bias Monte Carlo.
• Related to Brownian motion (the Smoluchowski equation).
Brownian Dynamics
Consider a big molecule in a solvent. In the high-viscosity limit the “master equation” (the Smoluchowski equation) is:
  ∂ρ(R,t)/∂t = D ∇²ρ(R,t) − βD ∇·[ F(R) ρ(R,t) ]
  R(t+h) = R(t) + h βD F(R(t)) + η(t),   with ⟨η(t)⟩ = 0 and ⟨η(t)²⟩ = 2Dh
Enforce detailed balance by rejections! (hybrid method)
  T(R → R') = c exp[ −( R' − R − βDh F(R) )² / (4Dh) ]
This is also the equation for Diffusion Quantum Monte Carlo!
Other Ensembles in MC
• Monte Carlo can handle discrete and continuous variables
at the same time.
• This allows us to treat other ensembles. For example, consider the (μ, V, T) grand canonical ensemble: the number of particles is variable.
• We consider moves that change the number of particles by
adding or subtracting 1 and accepting or rejecting the
move.
Free Energy Computation
  e^{−F/kT} = ∫ dR e^{−V(R)/kT}    (and similarly e^{−F_0/kT} for a reference system)
• The free energy (and the entropy) is more difficult, since ensemble averages give only ratios of such integrals.
• Special methods are needed, e.g. thermodynamic integration:
  F(T) − F_0(T) = ∫_0^1 dt ⟨ E(t) − E_0(t) ⟩
  where t parameterizes a path between the reference system and the system of interest.
• Find the shortest path to a known state. (Figure: integration path in the P-T plane.)
Polymer Simulations
• Polymers move very slowly because of entanglement.
• A good algorithm is “reptation” Cut off one end and grow
onto the other end.
• The acceptance probability is given by the change in potential energy.
• Simple moves go quickly through polymer space.
• Completely unphysical dynamics, or is it? This may be how entangled polymers actually move.
Continuum of Methods
Path Integral Monte Carlo
Ab initio Molecular Dynamics
semi-empirical Molecular Dynamics
Langevin Equation
Brownian Dynamics
Metropolis Monte Carlo
Kinetic Monte Carlo
The general procedure is to average out fast
degrees of freedom.
Monte Carlo Dynamics
• MC introduced as a way of sampling configuration space
so that MC dynamics is purely fictitious.
• Brownian dynamics and reptation are a model dynamics.
• If we satisfy certain conservation laws in the dynamics (e.g. locality of moves), then the dynamics is a possible model with “local thermodynamic equilibrium”.
• For the Ising model we speak of Kawasaki dynamics, which respects the local conservation of spin.
• MC dynamics is useful because it is so much faster than
MD.
• MC dynamics neglects long time correlations which can
arise from hydrodynamical effects.
Comparison between MC and MD
Which is better?
• MD can compute dynamics. MC has kinetics under user control, but the dynamics is not necessarily physical. MC dynamics is useful for studying long-term diffusive processes.
• MC is simpler: no forces, no time-step errors, and a direct simulation of the canonical ensemble.
• In MD you can only work on making the CPU time per unit of physical time smaller. In MC you can invent better transition rules, and ergodicity is less of a problem. MD is sometimes very effective in highly constrained systems.
• MC is more general: it can handle discrete degrees of freedom (e.g. spin models, quantum systems), the grand canonical ensemble, ...
So you need both! The best is to have both in the same code
so you can use MC to warm up the dynamics.
“An intelligent being who, at a given moment, knows all the
forces that cause nature to move and the positions of the
objects that it is made from, if also it is powerful enough to
analyze this data, would have described in the same
formula the movements of the largest bodies of the
universe and those of the lightest atoms. Although
scientific research steadily approaches the abilities of this
intelligent being, complete prediction will always remain
infinitely far away.”
Laplace, 1820
Although detailed predictions may be impossible, is
it possible to exactly calculate the properties of
materials?