Statistical Physics Notes
1. Probability
Discrete distributions
A variable x takes n discrete values, {x_i, i = 1, ..., n}
(e.g. tossing of a coin or a die).
After N events, we get a distribution {N_1, N_2, ..., N_n}.
Probability:
    P(x_i) = \lim_{N \to \infty} \frac{N_i}{N}
Normalization:
    \sum_{i=1}^{n} P(x_i) = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{n} N_i = 1
Markovian assumption: events are independent
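As a quick numerical illustration (a minimal Python sketch, not part of the original notes; NumPy and the sample size are my own choices), the limit P(x_i) = N_i/N can be seen by tallying throws of a fair die:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                          # number of events
rolls = rng.integers(1, 7, size=N)   # N throws of a fair six-sided die

counts = np.bincount(rolls, minlength=7)[1:]  # N_i for faces 1..6
P = counts / N                                # P(x_i) ~ N_i / N
print(P)         # each entry approaches 1/6 as N grows
print(P.sum())   # normalization: the probabilities sum to 1
```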
Continuous distributions
A variable x can take any value in a continuous interval [a, b]
(most physical variables: position, velocity, temperature, etc.).
Partition the interval into small bins of width dx.
If we measure dN(x) events in the interval [x, x+dx], the probability is
    P(x)\, dx = \lim_{N \to \infty} \frac{dN(x)}{N},   i.e.   P(x) = \lim_{N \to \infty} \frac{1}{N} \frac{dN}{dx}
Normalization:
    \int P(x)\, dx = 1
Examples: Uniform probability distribution
    P(x) = \begin{cases} 1/a, & 0 \le x \le a \\ 0, & \text{otherwise} \end{cases}
[Plot: P(x) is a flat line at height 1/a between x = 0 and x = a.]
Gaussian distribution
    P(x) = A e^{-(x - x_0)^2 / 2s^2}
A: normalization constant
x_0: position of the maximum
s: width of the distribution
[Plot: bell-shaped curve centered at x_0. The curve falls to half its maximum,
e^{-x^2/2s^2} = 1/2, at |x - x_0| = \sqrt{2 \ln 2}\, s \approx 1.2\, s \approx s,
so the full width at half maximum is roughly 2s.]
Normalization constant
    \int_{-\infty}^{\infty} A e^{-(x - x_0)^2/2s^2}\, dx = 1
Substitute
    y = \frac{x - x_0}{\sqrt{2}\, s},   dx = \sqrt{2}\, s\, dy
    A \sqrt{2}\, s \int_{-\infty}^{\infty} e^{-y^2}\, dy = 1   =>   A = \frac{1}{\sqrt{2\pi}\, s}
Normalized Gaussian distribution
    P(x) = \frac{1}{\sqrt{2\pi}\, s}\, e^{-(x - x_0)^2/2s^2}
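To double-check the normalization constant derived above, here is a minimal numerical sketch (Python with NumPy; s and x_0 are arbitrary illustrative values):

```python
import numpy as np

s, x0 = 1.5, 2.0                    # arbitrary width and center
A = 1.0 / (np.sqrt(2 * np.pi) * s)  # normalization constant derived above

x = np.linspace(x0 - 10 * s, x0 + 10 * s, 200_001)
P = A * np.exp(-((x - x0) ** 2) / (2 * s**2))
print(np.sum(P) * (x[1] - x[0]))    # Riemann sum of P(x) dx, ~ 1.0
```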
Properties of distributions
Average (mean):
    \langle x \rangle = \sum_{i=1}^{n} x_i P(x_i)        (discrete distribution)
    \langle x \rangle = \int x P(x)\, dx                 (continuous distribution)
Median (or central) value: the 50% split point.
Most probable value: where P(x) is maximum.
For a symmetric distribution (e.g. Gaussian), all of the above are equal.
Mean value of an observable f(x):
    \langle f \rangle = \sum_i f(x_i) P(x_i)             (discrete)
    \langle f \rangle = \int f(x) P(x)\, dx              (continuous)
Variance: measures the spread of a distribution
    var(x) = \langle (x - \langle x \rangle)^2 \rangle
           = \sum_i (x_i - \langle x \rangle)^2 P(x_i)
           = \sum_i x_i^2 P(x_i) - 2\langle x \rangle \sum_i x_i P(x_i) + \langle x \rangle^2 \sum_i P(x_i)
           = \langle x^2 \rangle - \langle x \rangle^2
Root mean square (RMS) or standard deviation:
    \Delta x = \sqrt{var(x)} = \sqrt{\langle x^2 \rangle - \langle x \rangle^2}
It has the same dimension as the mean, and is used in error analysis: x = \langle x \rangle \pm \Delta x
Examples: Uniform probability distribution on [0, a]
    P(x) = \frac{1}{a},   \langle x \rangle = \frac{a}{2},   var(x) = \frac{a^2}{12},   \Delta x = \frac{a}{2\sqrt{3}}
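These results follow directly from the definitions above; a small Monte Carlo sketch (Python, with a = 2.0 and the sample size chosen arbitrarily) confirms them:

```python
import numpy as np

rng = np.random.default_rng(1)
a = 2.0
x = rng.uniform(0.0, a, size=1_000_000)

print(x.mean(), a / 2)                # mean    -> a/2
print(x.var(), a**2 / 12)             # var(x)  -> a^2/12
print(x.std(), a / (2 * np.sqrt(3)))  # Delta x -> a/(2 sqrt(3))
```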
Gaussian distribution
    P(x) = \frac{1}{\sqrt{2\pi}\, s}\, e^{-(x - x_0)^2/2s^2}
For the variance, assume x_0 = 0, so that \langle x \rangle = 0:
    var(x) = \langle x^2 \rangle = \frac{1}{\sqrt{2\pi}\, s} \int x^2 e^{-x^2/2s^2}\, dx
Let y = x/\sqrt{2} s, dx = \sqrt{2}\, s\, dy:
    var(x) = \frac{2\sqrt{2}\, s^2}{\sqrt{2\pi}} \int y^2 e^{-y^2}\, dy
           = \frac{2 s^2}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}}{2} = s^2
    =>  \Delta x = s
Detour: Gaussian integrals
Fundamental integral:
    \int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi}
Introduce
    I(b) = \int_{-\infty}^{\infty} e^{-b x^2}\, dx = \sqrt{\pi / b}
Differentiating with respect to b generates the higher moments:
    -\frac{dI}{db} = \int x^2 e^{-b x^2}\, dx = \frac{\sqrt{\pi}}{2\, b^{3/2}}
    \frac{d^2 I}{db^2} = \int x^4 e^{-b x^2}\, dx = \frac{3 \sqrt{\pi}}{2^2\, b^{5/2}}
Setting b = 1:
    \int x^2 e^{-x^2}\, dx = \frac{\sqrt{\pi}}{2},
    \int x^4 e^{-x^2}\, dx = \frac{1 \cdot 3}{2^2} \sqrt{\pi}
In general:
    (-1)^n \frac{d^n I}{db^n} = \int x^{2n} e^{-b x^2}\, dx = \frac{1 \cdot 3 \cdots (2n-1)}{2^n} \frac{\sqrt{\pi}}{b^{(2n+1)/2}}
    \int x^{2n} e^{-x^2}\, dx = \frac{(2n-1)!!}{2^n} \sqrt{\pi}
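The general formula can be spot-checked by numerical quadrature; a sketch (Python with scipy.integrate.quad, and the double factorial coded by hand to keep the example self-contained):

```python
import numpy as np
from scipy.integrate import quad

def dfact(n):
    """Double factorial n!!, with the convention (-1)!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

for n in range(5):
    # numerical value of the integral of x^(2n) e^(-x^2) over the real line
    num, _ = quad(lambda x, n=n: x**(2 * n) * np.exp(-x**2), -np.inf, np.inf)
    exact = dfact(2 * n - 1) * np.sqrt(np.pi) / 2**n
    print(n, num, exact)   # the two columns agree
```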
Addition and multiplication rules
Addition rule for exclusive events:
    P(x_i or x_j) = P(x_i) + P(x_j),   i \ne j
Multiplication rule for independent variables x and y:
    P_{joint}(x_i, y_j) = P_1(x_i) P_2(y_j)          (discrete)
    P_{xy}(x, y)\, da = P(x)\, dx \cdot P(y)\, dy    (continuous)
Examples: 2D-Gaussian distribution
    P_{xy}(x, y)\, da = \frac{1}{2\pi s_x s_y}\, e^{-x^2/2s_x^2}\, e^{-y^2/2s_y^2}\, dx\, dy
If s_x = s_y = s,
    P_{xy}(x, y)\, da = \frac{1}{2\pi s^2}\, e^{-(x^2 + y^2)/2s^2}\, dx\, dy
In cylindrical coordinates, this distribution becomes
    P_r(r, \phi)\, da = \frac{1}{2\pi s^2}\, e^{-r^2/2s^2}\, r\, dr\, d\phi
Since it is independent of \phi, we can integrate it out:
    P_r(r)\, dr = \frac{1}{2\pi s^2}\, e^{-r^2/2s^2}\, 2\pi r\, dr   =>   P_r(r) = \frac{r}{s^2}\, e^{-r^2/2s^2}
Mean and variance:
    \langle r \rangle = \sqrt{\pi/2}\, s,   \langle r^2 \rangle = 2s^2,   var(r) = (2 - \pi/2)\, s^2
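A sampling check of these moments (a minimal Python sketch; s = 1 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
s = 1.0
x = rng.normal(0, s, size=1_000_000)
y = rng.normal(0, s, size=1_000_000)
r = np.sqrt(x**2 + y**2)

print(r.mean(), np.sqrt(np.pi / 2) * s)   # <r>    = sqrt(pi/2) s
print((r**2).mean(), 2 * s**2)            # <r^2>  = 2 s^2
print(r.var(), (2 - np.pi / 2) * s**2)    # var(r) = (2 - pi/2) s^2
```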
3D-Gaussian distribution with equal widths s:
    P_{xyz}(x, y, z)\, d^3r = \frac{1}{(2\pi)^{3/2} s^3}\, e^{-(x^2 + y^2 + z^2)/2s^2}\, d^3r
In spherical coordinates:
    P_r(r, \theta, \phi)\, d^3r = \frac{1}{(2\pi)^{3/2} s^3}\, e^{-r^2/2s^2}\, r^2 \sin\theta\, dr\, d\theta\, d\phi
We can integrate out \theta and \phi:
    P_r(r)\, dr = \frac{1}{(2\pi)^{3/2} s^3}\, e^{-r^2/2s^2}\, 4\pi r^2\, dr
    P_r(r) = \sqrt{\frac{2}{\pi}}\, \frac{r^2}{s^3}\, e^{-r^2/2s^2}
Here r refers to the magnitude of a vector physical variable, e.g. position, velocity, etc.
Mean and variance of the 3D-Gaussian distribution:
    \langle r \rangle = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3} \int_0^\infty r^3 e^{-r^2/2s^2}\, dr
Substitute u = r^2/2s^2 (so r^3\, dr = 2s^4 u\, du):
    \langle r \rangle = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3}\, 2s^4 \int_0^\infty u e^{-u}\, du = \sqrt{\frac{8}{\pi}}\, s
(integration by parts gives \int_0^\infty u e^{-u}\, du = 1)
    \langle r^2 \rangle = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3} \int_0^\infty r^4 e^{-r^2/2s^2}\, dr
Substitute y = r/\sqrt{2} s (so r^4\, dr = (\sqrt{2} s)^5 y^4\, dy):
    \langle r^2 \rangle = \sqrt{\frac{2}{\pi}}\, \frac{(\sqrt{2} s)^5}{s^3} \int_0^\infty y^4 e^{-y^2}\, dy = 3 s^2
(the integral gives 3\sqrt{\pi}/8)
    var(r) = \langle r^2 \rangle - \langle r \rangle^2 = (3 - 8/\pi)\, s^2 \approx 0.45\, s^2
    =>  \Delta r \approx 0.67\, s
Most probable value for a 3D-Gaussian distribution:
Set dP_r/dr = 0 for P_r(r) = \sqrt{2/\pi}\, (r^2/s^3)\, e^{-r^2/2s^2}:
    \left[ 2r + r^2 \left( -\frac{2r}{2s^2} \right) \right] e^{-r^2/2s^2} = 0
    2r \left( 1 - \frac{r^2}{2s^2} \right) = 0   =>   \tilde{r}^2 = 2s^2   =>   \tilde{r} = \sqrt{2}\, s
Summary of the properties of a 3D-Gaussian distribution:
    \tilde{r} = \sqrt{2}\, s \approx 1.4\, s,
    \langle r \rangle = \sqrt{8/\pi}\, s \approx 1.6\, s,
    \sqrt{\langle r^2 \rangle} = \sqrt{3}\, s \approx 1.7\, s
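The three characteristic values can be verified by sampling three independent Gaussian components (a sketch; the sample size and histogram bin count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
s = 1.0
v = rng.normal(0, s, size=(1_000_000, 3))  # three independent Gaussian components
r = np.linalg.norm(v, axis=1)              # magnitude of the 3D vector

hist, edges = np.histogram(r, bins=200)
r_tilde = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(r_tilde, np.sqrt(2) * s)                 # most probable value ~ 1.4 s
print(r.mean(), np.sqrt(8 / np.pi) * s)        # <r>                 ~ 1.6 s
print(np.sqrt((r**2).mean()), np.sqrt(3) * s)  # sqrt(<r^2>)         ~ 1.7 s
```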
Binomial distribution
If the probability of throwing a head is p and a tail is q (p + q = 1), then the
probability of throwing n heads out of N trials is given by the binomial
distribution:
    P(n) = \frac{N!}{n!(N-n)!}\, p^n q^{N-n}
The powers of p and q in the above equation are self-evident.
The prefactor can be found from combinatorics. An explicit construction
for the 1D random walk or tossing of coins is shown below.
Explicit construction of the binomial distribution (1D random walk with step
length L, or tossing of coins; T = tail = step -L, H = head = step +L):

Trial | Possible positions x                | Outcomes
  1   | -L, +L                              | T | H
  2   | -2L, 0, +2L                         | TT | TH, HT | HH
  3   | -3L, -L, +L, +3L                    | TTT | TTH, ... | THH, ... | HHH
  ... |                                     |
  N   | -NL, -(N-1)L, ..., +(N-1)L, +NL     | T^N | T^{N-1}H, ... | TH^{N-1} | H^N

There is only 1 way to get all H or all T,
N ways to get 1 T and (N-1) H (or vice versa),
N(N-1)/2 ways to get 2 T and (N-2) H,
N(N-1)(N-2)...(N-n+1)/n! ways to get n T and (N-n) H (binomial coefficient).
Properties of the binomial distribution:
Normalization follows from the binomial theorem:
    S(p, q) \equiv (p + q)^N = \sum_{n=0}^{N} \frac{N!}{n!(N-n)!}\, p^n q^{N-n}
    =>  \sum_{n=0}^{N} P(n) = (p + q)^N = 1
Average value of heads:
    \langle n \rangle = \sum_{n=0}^{N} n P(n) = \sum_{n=0}^{N} n\, \frac{N!}{n!(N-n)!}\, p^n q^{N-n}
                      = p \frac{\partial S}{\partial p} = p \frac{\partial}{\partial p} (p + q)^N
                      = pN (p + q)^{N-1} = pN
For p = 1/2, \langle n \rangle = N/2.
Average position in the 1D random walk after N steps:
    \langle x \rangle = \langle (2n - N) L \rangle = (2\langle n \rangle - N) L = (2p - 1) NL = (p - q) NL
    \langle x \rangle = 0 if p = q = 1/2
For large N, the probability of getting exactly x = 0 is actually quite small.
To find the spread, calculate the variance. First evaluate \langle n^2 \rangle:
    \langle n^2 \rangle = \sum_{n=0}^{N} n^2\, \frac{N!}{n!(N-n)!}\, p^n q^{N-n}
                        = \left( p \frac{\partial}{\partial p} \right)^2 S
                        = p \frac{\partial}{\partial p} \left[ pN (p + q)^{N-1} \right]
                        = pN (p + q)^{N-1} + p^2 N(N-1)(p + q)^{N-2}
                        = pN (1 + pN - p) = pN (pN + q) = \langle n \rangle^2 + Npq
Hence the variance is
    var(n) = \langle n^2 \rangle - \langle n \rangle^2 = Npq
    var(n) = N/4  if  p = q = 1/2
To find the spread in position, we use x = (2n - N)L:
    x^2 = [4n^2 - 4Nn + N^2] L^2
    \langle x^2 \rangle = [4\langle n^2 \rangle - 4N\langle n \rangle + N^2] L^2
    var(x) = \langle x^2 \rangle - \langle x \rangle^2 = 4 [\langle n^2 \rangle - \langle n \rangle^2] L^2 = 4Npq L^2
    var(x) = \langle x^2 \rangle = NL^2  if  p = q = 1/2
    =>  rms(x) = \sqrt{N}\, L
Large N limit of the binomial distribution:
Mean collision times of molecules in liquids are of the order of picoseconds.
Thus in macroscopic observations, N is a very large number.
To find the limiting form of P(n), Taylor expand its logarithm around the mean \bar{n} = Np:
    \ln P(n) = \ln N! - \ln n! - \ln(N-n)! + n \ln p + (N-n) \ln q
    \ln P(n) = \ln P(\bar{n}) + (n - \bar{n}) \frac{d \ln P}{dn}\Big|_{\bar{n}}
             + \frac{1}{2} (n - \bar{n})^2 \frac{d^2 \ln P}{dn^2}\Big|_{\bar{n}} + \cdots
Stirling's formula for ln(n!) for large n:
    \ln n! \approx n \ln n - n + \frac{1}{2} \ln(2\pi n)
    \frac{d}{dn} \ln n! \approx \ln n + 1 - 1 + \frac{1}{2n} \approx \ln n
First derivative, evaluated at \bar{n} = Np (so that N - \bar{n} = Nq):
    \frac{d \ln P}{dn}\Big|_{\bar{n}} = -\ln \bar{n} + \ln(N - \bar{n}) + \ln p - \ln q
        = \ln \frac{(N - \bar{n})\, p}{\bar{n}\, q} = \ln \frac{Nq\, p}{Np\, q} = \ln 1 = 0
Second derivative:
    \frac{d^2 \ln P}{dn^2}\Big|_{\bar{n}} = -\frac{1}{\bar{n}} - \frac{1}{N - \bar{n}}
        = -\frac{1}{N} \left( \frac{1}{p} + \frac{1}{q} \right) = -\frac{1}{Npq}
Substitute the derivatives in the expansion of ln P(n):
    \ln P(n) = \ln P(\bar{n}) - \frac{(n - \bar{n})^2}{2Npq}
    \ln \frac{P(n)}{P(\bar{n})} = -\frac{(n - \bar{n})^2}{2Npq}
Thus the large N limit of the binomial distribution is the Gaussian distribution:
    P(n) = P(\bar{n})\, e^{-(n - \bar{n})^2 / 2Npq}
Here \bar{n} = Np is the mean value, and the width and normalization are
    s = \sqrt{Npq},   P(\bar{n}) = \frac{1}{\sqrt{2\pi}\, s} = \frac{1}{\sqrt{2\pi Npq}}
For the position variable x = (2n - N)L, we have
    x - \bar{x} = (n - \bar{n})\, 2L,   \bar{x} = (2\bar{n} - N)L = (2p - 1)NL = (p - q)NL
    s_x = \sqrt{Npq}\, 2L,   P(x) = \frac{1}{\sqrt{2\pi}\, s_x}\, e^{-(x - \bar{x})^2 / 2 s_x^2}
How good is the Gaussian approximation?
[Plots for N = 4 and N = 14, with p = q = 1/2.
Bars: binomial distribution,
    P_j = \frac{N!}{\left(\frac{N}{2} - j\right)! \left(\frac{N}{2} + j\right)!}\, \frac{1}{2^N}
Solid lines: the limiting Gaussian distribution,
    P(x) = \sqrt{\frac{2}{\pi N}}\, e^{-2x^2/N} ]
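The same comparison can be made numerically (a sketch using Python's math.comb; it prints the largest pointwise gap between the binomial weights and the limiting Gaussian, which shrinks as N grows):

```python
import numpy as np
from math import comb

for N in (4, 14):
    n = np.arange(N + 1)
    binom = np.array([comb(N, j) for j in n]) / 2**N  # p = q = 1/2
    nbar, s = N / 2, np.sqrt(N) / 2                   # mean Np, width sqrt(Npq)
    gauss = np.exp(-((n - nbar) ** 2) / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    print(f"N={N}: max |binomial - Gaussian| = {np.abs(binom - gauss).max():.4f}")
```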
2. Thermal motion
Ideal gas law:
    PV = NkT
Macroscopic observables:
P: pressure, V: volume, T: temperature
N: number of molecules
k = 1.38 x 10^-23 J/K (Boltzmann constant)
At room temperature (T_r = 298 K), kT_r = 4.1 x 10^-21 J = 4.1 pN nm
(kT_r provides a convenient energy scale for biomolecular systems)
The combination NkT suggests that the kinetic energy of individual
molecules is about kT. To link the macroscopic properties to molecular
ones, we need an estimate of pressure at the molecular level.
Derivation of the average kinetic energy from the ideal gas law
Consider a cubic box of side L filled with N gas molecules.
The pressure on the walls arises from the collisions of molecules.
Momentum transfer to the y-z wall in one collision:
    \Delta q = m v_x - (-m v_x) = 2 m v_x
Average time between collisions with that wall:
    \Delta t = 2L / v_x
Force on the wall due to a single molecule:
    f_x = \frac{\Delta q}{\Delta t} = \frac{2 m v_x}{2L / v_x} = \frac{m v_x^2}{L}
In general, velocities have a distribution, so we take an average.
Average force due to one molecule:
    \langle f_x \rangle = \frac{m}{L} \langle v_x^2 \rangle
Average force due to N molecules:
    F_x = N \langle f_x \rangle = \frac{Nm}{L} \langle v_x^2 \rangle
Pressure on a wall of area A (V = AL):
    P = \frac{F_x}{A} = \frac{Nm}{AL} \langle v_x^2 \rangle = \frac{N}{V}\, m \langle v_x^2 \rangle
Generalise to all walls:
    \frac{PV}{N} = m \langle v_x^2 \rangle = m \langle v_y^2 \rangle = m \langle v_z^2 \rangle = kT
Average kinetic energy:
    \langle K \rangle = \frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} kT
Equipartition theorem: the mean energy associated with each degree of freedom is kT/2.
Distribution of speeds
[Figure: experimental setup with a velocity filter, and measured speed distributions
for Tl atoms at T = 944 K (open circles) and T = 870 K (filled circles), compared with
the 3D-Gaussian distribution (solid line); the speed v is reduced by \tilde{v} = \sqrt{2kT/m}.]
Velocities in a gas have a Gaussian distribution (Maxwell):
    P(v_x) = \frac{1}{\sqrt{2\pi}\, s}\, e^{-v_x^2 / 2s^2}
The rms velocity is \sqrt{\langle v_x^2 \rangle} = s; since m \langle v_x^2 \rangle = kT, we have s = \sqrt{kT/m}.
Distribution of speeds (3D-Gaussian)
    P_v(v)\, dv = \frac{1}{(\sqrt{2\pi}\, s)^3}\, e^{-v^2/2s^2}\, 4\pi v^2\, dv,   s^2 = \frac{kT}{m}
    P_v(v) = \left( \frac{m}{2\pi kT} \right)^{3/2} 4\pi v^2\, e^{-mv^2/2kT}
This is the probability of a molecule having speed v regardless of direction.
Example: the most common gas molecule, N2
    s = \sqrt{\frac{kT}{m}} = \sqrt{\frac{4.1 \times 10^{-21}}{4.7 \times 10^{-26}}} \approx 300 m/s
    \tilde{v} = \sqrt{2}\, s \approx 420 m/s
    \langle v \rangle = \sqrt{8/\pi}\, s \approx 470 m/s
    \sqrt{\langle v^2 \rangle} = \sqrt{3}\, s \approx 510 m/s
Oxygen is 16/14 times heavier, so for the O2 molecule, scale the above results by
\sqrt{7/8} \approx 0.94.
Hydrogen is 14 times lighter, so for H2 scale the above results by \sqrt{14} \approx 3.7.
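The same numbers can be reproduced directly (a minimal sketch; the molecular masses, in atomic mass units, are approximate):

```python
import numpy as np

k = 1.38e-23    # Boltzmann constant, J/K
T = 298.0       # room temperature, K
u = 1.66e-27    # atomic mass unit, kg

for name, mass_u in [("N2", 28), ("O2", 32), ("H2", 2)]:
    m = mass_u * u
    s = np.sqrt(k * T / m)                 # width of the velocity distribution
    print(f"{name}: s = {s:.0f} m/s, "
          f"v_tilde = {np.sqrt(2) * s:.0f} m/s, "
          f"<v> = {np.sqrt(8 / np.pi) * s:.0f} m/s, "
          f"v_rms = {np.sqrt(3) * s:.0f} m/s")
```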
Generalise the Maxwell distribution to N molecules
(use the multiplication rule, assuming they move independently):
    P(v_1, v_2, ..., v_N) = e^{-mv_1^2/2kT}\, e^{-mv_2^2/2kT} \cdots e^{-mv_N^2/2kT}
                          = e^{-m(v_1^2 + v_2^2 + \cdots + v_N^2)/2kT} = e^{-E_{kin}/kT}
This is simply the Boltzmann distribution for N non-interacting particles.
In general, the particles interact, so there is also position dependence:
    P(r_1, v_1, r_2, v_2, ..., r_N, v_N) \propto e^{-E_{tot}/kT}
    E_{tot} = E_{kin} + E_{pot} = E(r_1, v_1, r_2, v_2, ..., r_N, v_N)
Universality of the Gaussian distribution arises from the quadratic nature of E.
Activation barriers and relaxation to equilibrium
[Figure: speed distributions for boiling water at two different temperatures.]
When water boils, molecules with sufficient kinetic energy evaporate.
Removing the most energetic molecules creates a non-equilibrium state.
Arrhenius rate law:
    rate \propto e^{-E_{barrier}/kT}
E_barrier: activation barrier. Molecules with kinetic energy above E_barrier can escape.
The equilibrium state (i.e. the Gaussian distribution) is restored via molecular collisions.
Injecting very fast molecules into a box of molecules results in an initial spike on top
of the Gaussian distribution. Gas molecules collide like billiard balls (energy and
momentum are conserved), so in each collision the fast molecules lose energy to the
slower ones (friction).
3. Entropy, Temperature and Free Energy
Entropy is a measure of disorder in a closed system.
When a system goes from an ordered to a disordered state, entropy
increases and information is lost. The two quantities are intimately
linked, and sometimes it is easier to understand information loss or gain.
Consider any of the following 2-state systems:
• Tossing of N coins
• Random walk in 1D (N steps)
• Box of N gas molecules divided into 2 parts
• N spin-1/2 particles with magnetic moment m in a magnetic field B
Each of these systems can be described by a binomial distribution:
    P(N_1, N_2) = \frac{N!}{N_1! N_2!}\, p_1^{N_1} p_2^{N_2},   p_1 + p_2 = 1,   N_1 + N_2 = N
There are 2^N states in total, but only N+1 distinct values of (N_1, N_2).
Introduce the number of states with a given (N_1, N_2) as
    \Omega(N_1, N_2) = \frac{N!}{N_1! N_2!}
We define the disorder (or information content) as
    I = K \ln \Omega,   K = 1/\ln 2 \approx 1.44     (Shannon's formula)
For the 2-state system, assuming large N, we obtain
    I = K [\ln N! - \ln N_1! - \ln N_2!]
      \approx K [N \ln N - N - (N_1 \ln N_1 - N_1) - (N_2 \ln N_2 - N_2)]
      = -K [N_1 \ln(N_1/N) + N_2 \ln(N_2/N)]
Thus the amount of disorder per event is
    I/N = -K [(N_1/N) \ln(N_1/N) + (N_2/N) \ln(N_2/N)] = -K [p_1 \ln p_1 + p_2 \ln p_2]
I vanishes for either p_1 = 1 or p_2 = 1 (zero disorder, maximum information), and
is maximum for p_1 = p_2 = 1/2 (maximum disorder, minimum information).
Generalization to m levels: \sum_i p_i = 1, \sum_i N_i = N
    P(N_1, N_2, ..., N_m) = \frac{N!}{N_1! N_2! \cdots N_m!}\, p_1^{N_1} p_2^{N_2} \cdots p_m^{N_m}
    \Omega(N_1, N_2, ..., N_m) = \frac{N!}{N_1! N_2! \cdots N_m!}
    I = K \left[ \ln N! - \sum_i \ln N_i! \right]   =>   I/N = -K \sum_{i=1}^{m} p_i \ln p_i
Maximum disorder when all p_i are equal, and zero disorder when one of them is 1.
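Shannon's formula is easy to evaluate numerically (a minimal sketch; the probability vectors are illustrative):

```python
import numpy as np

K = 1 / np.log(2)   # Shannon's constant, ~1.44 (converts nats to bits)

def disorder_per_event(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 ln 0 = 0
    return -K * np.sum(p * np.log(p))

print(disorder_per_event([1.0, 0.0]))   # 0 bits: zero disorder, max information
print(disorder_per_event([0.5, 0.5]))   # 1 bit: max disorder for 2 states
print(disorder_per_event([0.25] * 4))   # 2 bits: max disorder for 4 states
```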
Entropy
Statistical postulate: An isolated system evolves to thermal equilibrium.
Equilibrium is attained when the probability dist. of microstates has the
maximum disorder (i.e. entropy).
Entropy of a physical system is defined as
    S = k \ln \Omega(E, N, ...)
Entropy of an ideal gas
Total energy:
    E = \sum_{i=1}^{N} \frac{1}{2} m v_i^2 = \frac{1}{2m} \sum_{i=1}^{N} p_i^2
      = \frac{1}{2m} \sum_{i=1}^{N} \sum_{k=1}^{3} p_{ik}^2
so
    (2mE)^{1/2} = \left( \sum_{i=1}^{N} \sum_{k=1}^{3} p_{ik}^2 \right)^{1/2}
is the radius of a sphere in 3N dimensions.
The area of such a sphere is proportional to r^{3N-1} \approx r^{3N}.
Hence the area of the allowed shell in momentum space scales as (2mE)^{3N/2}.
The number of allowed states is given by the phase space integral
    \Omega \propto \int d^3r_1 \cdots d^3r_N \int d^3p_1 \cdots d^3p_N \propto V^N (2mE)^{3N/2}
    S = k \ln \left[ C V^N (2mE)^{3N/2} \right] = Nk \ln\!\left( V E^{3/2} \right) + const.
(the Sackur-Tetrode formula), with
    C = \frac{2\pi^{3N/2}}{(3N/2 - 1)!} \cdot \frac{1}{N!\, h^{3N}}
where 2\pi^{3N/2}/(3N/2 - 1)! is the area of a unit sphere in 3N dimensions and
h is Planck's constant.
Temperature:
Consider an isolated system made of two parts, A and B.
Total energy is conserved:
    E = E_A + E_B
If the energies per particle are not equal, and we allow exchange of energy via
a small membrane, how will the energy evolve? (toward maximum disorder)
    S(E_A) = k \left[ N_A \left( \tfrac{3}{2} \ln E_A + \ln V_A \right)
                    + N_B \left( \tfrac{3}{2} \ln(E - E_A) + \ln V_B \right) \right] + const.
    \frac{dS}{dE_A} = 0   =>   \frac{N_A}{E_A} - \frac{N_B}{E_B} = 0
    =>   \frac{E_A}{N_A} = \frac{E_B}{N_B} = \frac{3}{2} kT
Example: N_A = N_B = 10
Fluctuations in energy are proportional to
    \frac{\sigma_E}{E} \propto \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}
Definition of temperature:
    \frac{dS}{dE_A} = \frac{3}{2} k \left( \frac{N_A}{E_A} - \frac{N_B}{E_B} \right)
                    = \frac{1}{T_A} - \frac{1}{T_B} = 0
At equilibrium: T_A = T_B (zeroth law of thermodynamics)
In general,
    \frac{1}{T} = \frac{dS}{dE}   or   T = \left( \frac{dS}{dE} \right)^{-1}
Free energy of a microscopic system "a" in a thermal bath:
    F_a = \langle E_a \rangle - T S_a = -kT \ln Z
Average energy:
    \langle E_a \rangle = \sum_i P_i E_i
Partition function:
    Z = \sum_i e^{-E_i / kT}
Example: 1D harmonic oscillator in a heat bath
    E_a(x, v) = \frac{1}{2} m v^2 + \frac{1}{2} K x^2
Position and velocity are Gaussian distributed:
    P_x(x) = \frac{1}{\sqrt{2\pi}\, s_x}\, e^{-x^2/2s_x^2},   s_x = \sqrt{kT/K}
    P_v(v) = \frac{1}{\sqrt{2\pi}\, s_v}\, e^{-v^2/2s_v^2},   s_v = \sqrt{kT/m}
Average energy:
    \langle E_a \rangle = \int dx\, dv\, P(x, v)\, E_a(x, v)
                        = \frac{1}{2} m s_v^2 + \frac{1}{2} K s_x^2 = \frac{1}{2} kT + \frac{1}{2} kT = kT
In 3D:
    \langle E_a \rangle = \langle \tfrac{1}{2} m v^2 + \tfrac{1}{2} K r^2 \rangle = \frac{3}{2} kT + \frac{3}{2} kT = 3kT
Equipartition of energy: each degree of freedom has kT/2 of energy on average.
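Equipartition for the oscillator can be checked by sampling x and v from the Gaussians above (a sketch; the thermal energy, mass and spring constant are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(5)
kT = 4.1e-21          # room-temperature thermal energy, J
m, K = 4.7e-26, 0.1   # illustrative mass (kg) and spring constant (N/m)

x = rng.normal(0, np.sqrt(kT / K), size=1_000_000)  # s_x = sqrt(kT/K)
v = rng.normal(0, np.sqrt(kT / m), size=1_000_000)  # s_v = sqrt(kT/m)

E = 0.5 * m * v**2 + 0.5 * K * x**2
print(E.mean() / kT)  # -> 1.0, i.e. <E_a> = kT (kT/2 per degree of freedom)
```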
Free energy of the harmonic oscillator
    E_a(x, p) = \frac{1}{2m} p^2 + \frac{1}{2} m \omega^2 x^2
    Z = \frac{1}{h} \int dx\, dp\; e^{-E_a(x, p)/kT}
      = \frac{1}{h} \int dx\, e^{-x^2/2s_x^2} \int dp\, e^{-p^2/2s_p^2} = \frac{2\pi s_x s_p}{h}
with s_x = \sqrt{kT/m\omega^2} and s_p = \sqrt{kTm}, so that
    Z = \frac{2\pi kT}{h \omega} = \frac{kT}{\hbar \omega}
Free energy:
    F_a = -kT \ln Z = -kT \ln \frac{kT}{\hbar \omega}
Entropy:
    S_a = \frac{\langle E_a \rangle - F_a}{T} = k \left( 1 + \ln \frac{kT}{\hbar \omega} \right)
or equivalently
    S_a = -\frac{\partial F_a}{\partial T} = k \ln \frac{kT}{\hbar \omega} + k
        = k \left( 1 + \ln \frac{kT}{\hbar \omega} \right)
Harmonic oscillator in quantum mechanics
Energy levels:
    E_n = (n + \tfrac{1}{2}) \hbar \omega
Partition function:
    Z = \sum_{n=0}^{\infty} e^{-(n + 1/2)\hbar\omega/kT}
      = e^{-\hbar\omega/2kT} \sum_{n=0}^{\infty} x^n,   x = e^{-\hbar\omega/kT} < 1
      = \frac{e^{-\hbar\omega/2kT}}{1 - e^{-\hbar\omega/kT}}        (\sum_n x^n = \frac{1}{1 - x})
Free energy:
    F_a = -kT \ln Z = \frac{1}{2} \hbar\omega + kT \ln\!\left( 1 - e^{-\hbar\omega/kT} \right)
Entropy:
    S_a = -\frac{\partial F_a}{\partial T}
        = -k \ln\!\left( 1 - e^{-\hbar\omega/kT} \right)
          + \frac{\hbar\omega}{T} \frac{e^{-\hbar\omega/kT}}{1 - e^{-\hbar\omega/kT}}
To calculate the average energy, let \beta = 1/kT:
    \langle E_a \rangle = \frac{1}{Z} \sum_i E_i e^{-\beta E_i} = -\frac{1}{Z} \frac{\partial Z}{\partial \beta},
    Z = \frac{e^{-\beta\hbar\omega/2}}{1 - e^{-\beta\hbar\omega}}
    \langle E_a \rangle = \frac{\hbar\omega}{2} + \frac{\hbar\omega\, e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}
Using \langle E_a \rangle yields the same entropy expression.
Classical limit: kT >> \hbar\omega, i.e. \beta\hbar\omega << 1:
    \langle E_a \rangle = \frac{\hbar\omega}{2} + \frac{\hbar\omega\, e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}
                        \approx \frac{\hbar\omega}{2} + \frac{1}{\beta} \approx kT
    F_a = \frac{1}{2}\hbar\omega + kT \ln\!\left( 1 - e^{-\hbar\omega/kT} \right)
        \approx kT \ln \frac{\hbar\omega}{kT} = -kT \ln \frac{kT}{\hbar\omega}
    S_a = -k \ln\!\left( 1 - e^{-\hbar\omega/kT} \right)
        + \frac{\hbar\omega}{T} \frac{e^{-\hbar\omega/kT}}{1 - e^{-\hbar\omega/kT}}
        \approx k \ln \frac{kT}{\hbar\omega} + k = k \left( 1 + \ln \frac{kT}{\hbar\omega} \right)
These agree with the classical results obtained above.
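The crossover from the quantum to the classical average energy can be made concrete (a sketch; omega is an illustrative vibrational frequency):

```python
import numpy as np

hbar, k = 1.055e-34, 1.38e-23   # J s, J/K
omega = 1.0e13                  # rad/s, an illustrative vibrational frequency

def avg_energy(T):
    """Quantum <E_a> = hbar*omega/2 + hbar*omega e^(-bho) / (1 - e^(-bho))."""
    bho = hbar * omega / (k * T)
    return hbar * omega * (0.5 + np.exp(-bho) / (1 - np.exp(-bho)))

for T in (10.0, 100.0, 1000.0, 10000.0):
    print(T, avg_energy(T) / (k * T))  # ratio -> 1 when kT >> hbar*omega
```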