
MODELING MATTER AT NANOSCALES
3. Empirical classical PES and typical procedures of optimization
3.03. Monte Carlo and other heuristic procedures
Exploring n-dimensional space
Exploration of energy
landscapes of n-dimensional
hypersurfaces
Fair raffles must allow equal opportunities for everyone in any
contest.
A random selection of numbers associated with points in a given
space allows a fair guided visit of all “regions” or subspaces of such
a real or virtual space, regardless of position, and simulates a
complete exploration.
A quite comprehensive energy landscape of a given space can be
obtained by evaluating the corresponding hypersurface at several
spatial points provided by a random distribution.
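The idea can be sketched in Python: a toy two-dimensional “hypersurface” (an assumed double-well function, not a real molecular PES) is evaluated at uniformly random points, keeping the lowest value found.

```python
import random

def energy(point):
    # Toy 2-D "hypersurface": a double well in x plus a harmonic term in y.
    # This is only a stand-in for a real molecular energy function.
    x, y = point
    return (x ** 2 - 1.0) ** 2 + 0.5 * y ** 2

def random_exploration(n_points, bounds=(-2.0, 2.0), seed=1):
    """Evaluate the surface at uniformly random points; keep the lowest."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_point, best_e = None, float("inf")
    for _ in range(n_points):
        p = (rng.uniform(lo, hi), rng.uniform(lo, hi))
        e = energy(p)
        if e < best_e:
            best_point, best_e = p, e
    return best_point, best_e

point, e_min = random_exploration(5000)
print(point, e_min)
```

With a few thousand random points the lowest value found lands close to one of the two minima at (±1, 0).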
Monte Carlo simulations
Monte Carlo simulations are those that numerically evaluate
energy functions of molecular systems by means of randomly
explored points of the corresponding space.
They allow the simulation of even macroscopic systems, departing
from the atomic and molecular “bricks” that appear as most
probable.
They are based on the principle that any static configuration of a
system of particles determines its state functions.
Monte Carlo simulations
Let us consider a classical system with N particles, the ith of
them associated with a generalized coordinate ri in Euclidean
space.
The most probable or expected value of a given state function,
such as the internal energy, will be:

⟨E⟩ = ∫ E(Rm) P(Rm) dRm

where E(Rm) is the evaluation of the configurational energy
and P(Rm) is the probability (between 0 and 1) of the system
at a given point Rm of the configuration space of the N
particles.
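A one-dimensional sketch of this expectation integral (the Gaussian P and the choice E(x) = x² are assumptions, picked only so the exact answer ⟨E⟩ = 1 is known):

```python
import random

def mc_expectation(energy_fn, sample_fn, n_samples, seed=0):
    """Estimate <E> = integral of E(R) P(R) dR by drawing points R
    from P and averaging E(R) over them."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += energy_fn(sample_fn(rng))
    return total / n_samples

# toy case: P is a standard Gaussian and E(x) = x**2, so <E> = 1 exactly
est = mc_expectation(lambda x: x * x, lambda rng: rng.gauss(0.0, 1.0), 200000)
print(est)
```

The estimate approaches the exact value as the number of sampled points grows, which is the “stability” property discussed below.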
Monte Carlo simulations
The evaluation of this integral in the hyperspace by the Monte
Carlo method consists of the random generation of Rm points,
evaluating E(Rm), and applying a “validity criterion” that
depends on the probability P at each point.
As the number M of “visited” points grows, the “stability”
of ⟨E⟩ improves, because the average becomes more realistic.
However, the optimal criterion is to limit the exploration of the
configuration space to the minimal number of structures that
can give appropriate and stable (non-fluctuating) results.
Monte Carlo simulations
The following conditions are necessary to solve the problem:
• Availability of an algorithm for random number generation
serving to select every new point in the space given by Rm.
• Knowing or establishing conventions for probability
criteria P(Rm) of each point.
• Availability of functional forms of E(Rm) to be evaluated in
each point (an appropriate hypersurface).
Monte Carlo simulations
The Metropolis algorithm takes N particles (atoms or molecules)
in a given configuration, with an appropriate V volume and a
given T temperature.
This case is treated as a canonical ensemble (NVT), although
other types of ensembles can also be used (such as the
microcanonical, which keeps the internal energy constant in
place of the temperature).
Procedure
The total energy of the system in a starting guess geometry is
evaluated by means of the selected potential. Then, the
following iterative steps are followed:
1
Random selection of one particle, n, among the components of
the system.
2
Moving particle n, or the reference body, to a
randomly selected new coordinate of the
given space. All coordinates of the particle
are independently displaced in each
dimension with respect to the coordinate
origin of the whole system. The selected
particle is also randomly rotated about the
corresponding internal axes.
Procedure
3
Evaluation of the system’s total energy in
the new Rm geometry.
If the new energy value is lower than that of
the previous step, the new geometry is
accepted with total probability, P(Rm) = 1.
If the obtained energy is higher than the
previous value, an additional random
number x (between 0 and 1) is generated.
Procedure
Then, if

x > exp(−ΔEm/kT)

the new geometry is rejected and the corresponding Rm is not
considered in the final average.
But, if

x ≤ exp(−ΔEm/kT)

the new Rm is then accepted and its energy accounted for in the
final average, even when leading to a higher point in the
hyperspace.
The process is then repeated from step 1 until a desired ending
condition is reached.
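Steps 1–3 and the acceptance rule can be sketched in Python as follows (a minimal sketch: the harmonic “trap” energy and all parameter values are illustrative assumptions, not the lecture’s PES, and rotations are omitted for point particles):

```python
import math
import random

def metropolis_step(coords, energy_fn, e_old, max_disp, kT, rng):
    """One Metropolis iteration; energy_fn is assumed to return the
    total configurational energy of a list of particle coordinates."""
    i = rng.randrange(len(coords))              # 1. pick a particle at random
    trial = [list(p) for p in coords]
    for d in range(len(trial[i])):              # 2. displace it in each dimension
        trial[i][d] += rng.uniform(-max_disp, max_disp)
    e_new = energy_fn(trial)                    # 3. evaluate the new energy
    if e_new <= e_old:                          # downhill moves: always accepted
        return trial, e_new, True
    x = rng.random()                            # uphill: accept if x <= exp(-dE/kT)
    if x <= math.exp(-(e_new - e_old) / kT):
        return trial, e_new, True
    return coords, e_old, False                 # rejected: keep the old geometry

# toy usage: 10 particles in a 2-D harmonic trap relaxing from a random start
def trap_energy(cs):
    return sum(x * x + y * y for x, y in cs)

rng = random.Random(0)
coords = [[rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)] for _ in range(10)]
e = trap_energy(coords)
for _ in range(2000):
    coords, e, _ = metropolis_step(coords, trap_energy, e, 0.3, 0.1, rng)
print(e)
```

After a couple of thousand moves the energy has relaxed from its random starting value toward the low-energy region of the trap.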
Procedure
Experience indicates that the probability of acceptance of moves
must be adjusted to about 50 %. This is usually achieved by
limiting the displacements of each particle.
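One crude way to perform this adjustment (an assumed feedback rule; the lecture does not prescribe one) is to rescale the maximum displacement according to the measured acceptance ratio:

```python
def tune_max_disp(max_disp, accept_ratio, target=0.5):
    """Assumed feedback rule: widen trial moves when acceptance is above
    the ~50 % target (moves too timid), narrow them when it is below."""
    return max_disp * (1.1 if accept_ratio > target else 0.9)

print(tune_max_disp(0.3, 0.8))  # too many acceptances -> larger moves
print(tune_max_disp(0.3, 0.2))  # too few acceptances -> smaller moves
```

Applied every few hundred steps, this steers the acceptance ratio toward the desired value.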
This procedure is named inconsistently in the literature: it may
appear either as “simulated annealing” or as “Monte Carlo –
Metropolis”, with small variants.
Practical steps
Thermalization
The previously described procedure is performed until ⟨E⟩
fluctuates at an acceptable level around a determined mean
value, generally lower than that of the initial state.
Sampling
At the end of thermalization the process continues as described,
although properties (mostly energy- and geometry-related values)
are recorded at each valid step and stored in memory for further
calculations.
Such new accounting excludes geometries “visited” during
thermalization.
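The two phases can be organized as a small driver (a sketch; step_fn stands for one Metropolis iteration returning the current energy, an assumed interface):

```python
def run_mc(step_fn, n_thermalize, n_sample):
    """Two-phase driver: thermalization steps are executed but discarded,
    then the energy at every sampling step is recorded for later averages."""
    samples = []
    for i in range(n_thermalize + n_sample):
        e = step_fn()
        if i >= n_thermalize:   # exclude geometries visited during thermalization
            samples.append(e)
    return samples
```

Only the post-thermalization record enters the property averages below.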
Computing properties
Average configurational energy
The average internal energy, from the hypersurface results
obtained during the sampling step, is computed by:

⟨E⟩ = (1/M) Σ_{m=1}^{M} E(Rm)
where M is the number of valid counts during the sampling step.
This kind of average can be used for any other kind of
recorded property.
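The average itself is a plain arithmetic mean over the M recorded values, for the energy or any other recorded property:

```python
def average_property(samples):
    """<A> = (1/M) * sum of A(R_m) over the M valid sampling steps."""
    return sum(samples) / len(samples)

print(average_property([1.0, 2.0, 3.0]))  # → 2.0
```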
Computing properties
It must be observed that the statistical weighting of this average
energy is always unitary. This means the same probability for each
point, although annealing randomly allows uphill moves:

P(Rm) = 1  ⇒  ⟨E⟩ = (1/M) Σ_{m=1}^{M} E(Rm)
This condition can be established only when the system reaches
a steady state after thermalization, although very fluctuating
systems require alternative weighting procedures.
Computing properties
Heat capacity at constant volume
Heat capacity is computed after statistical considerations for a
classical Boltzmann distribution:
CV(T) = [⟨E²⟩ − ⟨E⟩²] / kT² + (3/2) Nk

where N is the number of particles.
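In terms of the sampled energies (fluctuation term plus the (3/2)Nk kinetic contribution), this can be computed directly; reduced units with k = 1 are an assumption here:

```python
def heat_capacity(energies, T, N, k=1.0):
    """C_V(T) = [<E^2> - <E>^2] / (k T^2) + (3/2) N k, from the list of
    configurational energies recorded during sampling."""
    M = len(energies)
    e_mean = sum(energies) / M
    e2_mean = sum(e * e for e in energies) / M
    return (e2_mean - e_mean ** 2) / (k * T ** 2) + 1.5 * N * k
```

With zero energy fluctuation the result reduces to the kinetic term (3/2)Nk alone.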
Computing properties
Radial distribution function
Molecular, atomic or element layering is given by radial
distribution functions gαβ, giving the probability of finding an
α-kind body at a distance r from another β-kind body; it is
normalized to be 1 when r is large:

gαβ(r) = (V / Nβ) · Δnβ(r) / (4π r² Δr)

where Δnβ(r) is the number of β-kind bodies appearing between ri
and ri + Δr, and Δr is the interval obtained by dividing the
segment around the α-kind body up to a certain given cutoff
distance.
Computing properties
Radial distribution function are evaluated for each one of the
concerning bodies (atoms, elements, molecular fragments, etc.)
with respect to other in each sampling step and the final value is
averaged over all steps, in the same way as internal
configurational energy.
This property can be related to X-ray or neutron scattering
intensities in diffraction experiments.
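A histogram-based sketch of this normalization for one α-kind body at a single sampling step (function and variable names are illustrative assumptions):

```python
import math

def radial_distribution(distances, n_beta, volume, dr, r_max):
    """g(r) for one alpha body: distances is the list of alpha-beta
    separations collected at one sampling step; counts in shells of
    width dr are normalized by V / (N_beta * 4*pi*r^2*dr) so that
    g -> 1 for a uniform (ideal-gas-like) distribution."""
    n_bins = int(r_max / dr)
    g = [0.0] * n_bins
    for d in distances:
        b = int(d / dr)
        if b < n_bins:
            g[b] += 1.0
    for b in range(n_bins):
        r = (b + 0.5) * dr                    # shell midpoint
        shell = 4.0 * math.pi * r * r * dr    # shell volume, 4*pi*r^2*dr
        g[b] *= volume / (n_beta * shell)
    return g
```

In a full simulation these per-step histograms are averaged over all sampling steps, exactly as the configurational energy is.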
Monte Carlo simulation of aspartic acid in water
Radial distributions in liquid water
PES
Parameters: Jorgensen, W. L.; Chandrasekhar, J.; Madura, J. D.;
Impey, R. W.; Klein, M. L. Comparison of simple potential
functions for simulating liquid water. J. Chem. Phys. 1983, 79
(2), 926–935.
Other zero order treatments
Other heuristic methods
• Homology modeling
– Use geometry of similar molecules as a guess for
predicting geometries of very complex nanoscopic systems
• Fragment Approach
– Fix/Constrain part of the system while optimizing others by
appropriate methods
• Rule-Based
– Use regularities in the structural behavior of molecules or
fragments to guess initial geometries. This is the case of
proteins, where the tertiary structure is fixed according to the
statistical likelihood of the amino acid sequence adopting such
a structure.
Simplex algorithm
If only the energy of a given system is known, probably the
simplest way to obtain an optimized structure is the so-called
simplex algorithm. This is just a systematic way of trying larger
and smaller values for the coordinates and keeping the changes
that result in a lower energy, by an inter- and extrapolation
procedure. Simplex optimizations are rarely used because of
their huge computational requirements.
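The “try larger and smaller values and keep the improvements” idea can be sketched as a plain coordinate search (an illustrative simplification, not a full Nelder–Mead simplex; the quadratic test surface is an assumption):

```python
def coordinate_search(energy_fn, coords, step=0.5, tol=1e-6, shrink=0.5):
    """Derivative-free minimization: for each coordinate try a larger and
    a smaller value, keep any change that lowers the energy, and shrink
    the step when no trial move helps."""
    coords = list(coords)
    e = energy_fn(coords)
    while step > tol:
        improved = False
        for i in range(len(coords)):
            for delta in (step, -step):
                trial = list(coords)
                trial[i] += delta
                e_trial = energy_fn(trial)
                if e_trial < e:
                    coords, e, improved = trial, e_trial, True
        if not improved:
            step *= shrink   # no move helped: refine the search
    return coords, e

# usage on a toy quadratic surface with minimum at (1, -2)
pt, e = coordinate_search(lambda c: (c[0] - 1.0) ** 2 + (c[1] + 2.0) ** 2,
                          [0.0, 0.0])
print(pt, e)
```

Every step requires several fresh energy evaluations per coordinate, which illustrates why such zero-order searches become expensive for large systems.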