Lecture 1 - Strongly Correlated Systems (Transcript)
Numerical approaches to the correlated electron problem:
Quantum Monte Carlo.
F.F. Assaad.
The Monte Carlo method. Basic ideas.
Spin systems. World-lines, loops and stochastic series expansions.
The auxiliary field method I.
The auxiliary field method II.
Ground state, finite temperature and Hirsch-Fye.
Special topics (Kondo / metal-insulator transition) and outlook.
21.10.2002
MPI-Stuttgart.
Universität Stuttgart.
Some Generalities.

Problem: $\sim 10^{23}$ electrons per cm$^3$.
Question: Ground state, elementary excitations.

No correlations (Fermi statistics): Fermi sea. Elementary excitations: particle-holes. CPU time $\sim N^3$.

Correlations (Coulomb):
- Fermi liquid theory: low-energy elementary excitations remain particles and holes. Screening, phase space.
- 1D: Luttinger liquid (spinon, holon).
- 2D: Fractional quantum Hall effect.
- Magnetism.
- Mott insulators.
- Metal-insulator transition.
- Heavy fermions.

Complexity of problem scales as $e^N$.
Lattice Hamiltonian H. Partition function:

$Z = \mathrm{Tr}\, e^{-\beta H}$, $\beta = 1/T$

Trace over Fock space. Path integral representation: not unique.

World-line approach with loop updates; stochastic series expansion. $O(N\beta)$ methods.
No sign problem for:
- Non-frustrated spin systems.
- Bosonic systems.
- 1-D Hubbard and t-J models.
Sign problem (already for non-interacting electrons in dimensions larger than unity): $O(e^{V \beta \Delta})$. Approximate strategies: CPQMC, PIRG.

Determinantal method. $O(N^3 \beta)$ method.
No sign problem for:
- Any mean-field Hamiltonian.
- Models with particle-hole symmetry: half-filled Hubbard, Kondo lattices.
- Models with attractive interactions: attractive Hubbard model, Holstein model.
- Impurity problems.
The Monte Carlo Method. Basic ideas.

Aim: compute

$\langle O \rangle_P = \int_{\mathbb{R}^d} dx\, P(x)\, O(x)$, with $\int dx\, P(x) = 1$ and $P(x) \geq 0 \;\forall x$.

Standard approach: split the domain into hyper-cubes of linear size h and use an integration method whose systematic error scales as $h^k$. In terms of the number of function evaluations $N = V/h^d$, the systematic error is then proportional to

$h^k \propto N^{-k/d}$

Thus one obtains poor results for large values of d, and the Monte Carlo method becomes attractive.
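For instance, the trapezoidal rule has k = 2, so in d = 8 dimensions its systematic error decays only as $N^{-1/4}$, slower than the $N^{-1/2}$ statistical error of the Monte Carlo method derived below.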
The central limit theorem.

Let $\{x_i,\, i = 1 \dots N\}$ be a set of statistically independent points distributed according to the probability distribution P(x). Then we can estimate

$\int dx\, P(x)\, O(x) \approx \frac{1}{N} \sum_i O(x_i) \equiv X$

What is the error? For large N the distribution of X is Gaussian:

$D(X) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\!\left[ -\frac{(X - \langle O \rangle_P)^2}{2\sigma_X^2} \right]$

For practical purposes we estimate:

$\sigma_X^2 \approx \frac{1}{N} \left[ \frac{1}{N} \sum_i O(x_i)^2 - \left( \frac{1}{N} \sum_i O(x_i) \right)^{\!2} \right]$
Demonstration of the theorem. Since the $x_i$ are statistically independent,

$\langle X \rangle_P = \langle O \rangle_P$ and $\sigma_X^2 = \langle X^2 \rangle_P - \langle X \rangle_P^2 = \frac{1}{N}\left( \langle O^2 \rangle_P - \langle O \rangle_P^2 \right) \equiv \frac{\sigma^2}{N}$

Thus the error (i.e. the width of the Gaussian distribution) scales as $1/\sqrt{N}$, irrespective of the dimensionality of the integration space.
An Example: Calculation of $\pi$.

$\pi = 4 \int_0^1 dx \int_0^1 dy\; \Theta(1 - x^2 - y^2)$

(Figure: quarter circle of unit radius inscribed in the unit square.)

In this case, $P(x,y) = 1$ and $O(x,y) = 4\,\Theta(1 - x^2 - y^2)$.

Draw N random points {(x,y)}; x, y are drawn from the uniform distribution in the interval [0,1]:

$X = \frac{4}{N} \sum_{i=1}^{N} \Theta(1 - x_i^2 - y_i^2)$

Take N = 8000 to obtain $X \approx 3.14$ and $\sigma_X \approx 0.0185$. Repeat this simulation many times to compute D(X).
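A minimal sketch of this estimator in Python (NumPy), with the error computed from the central limit theorem formula above; N = 8000 is the value quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_pi(n_points):
    """Monte Carlo estimate of pi and its statistical error."""
    x = rng.random(n_points)              # uniform in [0, 1)
    y = rng.random(n_points)
    o = 4.0 * (x**2 + y**2 < 1.0)         # O(x, y) = 4 * Theta(1 - x^2 - y^2)
    mean = o.mean()
    # sigma_X^2 = (1/N) [ <O^2> - <O>^2 ]
    err = np.sqrt((np.mean(o**2) - mean**2) / n_points)
    return mean, err

X, sigma = estimate_pi(8000)
print(f"X = {X:.3f} +/- {sigma:.4f}")     # roughly 3.14 +/- 0.018
```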
Markov Chains: Generating points according to a distribution P(x).

Define a Monte Carlo time dependent probability distribution $P_t(x)$ which evolves according to a Markov process: the future depends only on the present. The time evolution is given by:

$P_{t+1}(y) = \sum_x T_{y,x}\, P_t(x)$

Requirement: $P_t(y) \to P(y)$ for $t \to \infty$.

Conditions on T:

(1) $\sum_y T_{y,x} = 1$, $T_{y,x} \geq 0 \;\forall x, y$.
(2) $\forall x, y \;\; \exists\, n \;|\; (T^n)_{y,x} > 0$. Ergodicity.
(3) $\sum_x T_{y,x}\, P(x) = P(y)$. Stationarity.

The stationarity condition is fulfilled if the detailed balance condition is satisfied:

$T_{x,y}\, P(y) = T_{y,x}\, P(x)$, since $\sum_x T_{y,x} P(x) = \sum_x T_{x,y} P(y) = P(y)$, using $\sum_x T_{x,y} = 1$.

But the stationarity condition is essential, not detailed balance!
Convergence to P(x).

$\| P^{t+1} - P \| \equiv \sum_x | P^{t+1}(x) - P(x) | = \sum_x \Big| \sum_y T_{x,y}\, P^t(y) - \sum_y T_{x,y}\, P(y) \Big|$
$\leq \sum_{x,y} T_{x,y}\, | P^t(y) - P(y) | = \sum_y | P^t(y) - P(y) | = \| P^t - P \|$

Here the stationarity condition (3) was used to write $P(x) = \sum_y T_{x,y} P(y)$, and the normalization (1) in the last step. Thus the distance to the target distribution never increases.
Rate of convergence.

The eigenvalues $\lambda$ of T satisfy $|\lambda| \leq 1$; $\lambda = 1$ corresponds to the stationary distribution. The rate of convergence will depend on the second largest eigenvalue $\lambda_1 < 1$. Let

$P^{t=0} = P + P_1$,

where $P_1$ is the eigenvector with eigenvalue $\lambda_1$. Then

$P^t - P = T^t (P^{t=0} - P) = \lambda_1^t P_1 = \exp[t \ln \lambda_1]\, P_1 \equiv \exp[-t/\tau]\, P_1$, with $\tau = -1/\ln \lambda_1$.
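A small numerical illustration (a sketch, not from the lecture): build the Metropolis transition matrix for a hypothetical two-state distribution and check that the distance to P decays as $\lambda_1^t$.

```python
import numpy as np

# Target distribution on two states (hypothetical example values)
P = np.array([0.2, 0.8])

# Metropolis matrix T[y, x]: propose the other state, accept with min(P(y)/P(x), 1)
T = np.zeros((2, 2))
T[1, 0] = min(P[1] / P[0], 1.0)   # 0 -> 1
T[0, 1] = min(P[0] / P[1], 1.0)   # 1 -> 0
T[0, 0] = 1.0 - T[1, 0]           # rejection keeps the state
T[1, 1] = 1.0 - T[0, 1]

lam1 = np.sort(np.abs(np.linalg.eigvals(T)))[-2]  # second largest |eigenvalue|

Pt = np.array([1.0, 0.0])         # start far from P
for t in range(1, 6):
    Pt = T @ Pt
    print(t, np.abs(Pt - P).sum(), lam1**t)  # both columns decay at the same rate
```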
Explicit construction of T.

Decompose the transition matrix as

$T_{y,x} = T^0_{y,x}\, a_{y,x}$ if $x \neq y$, and $T_{x,x} = 1 - \sum_{z \neq x} T^0_{z,x}\, a_{z,x}$,

where
$T^0_{y,x}$: probability of proposing a move from x to y. Has to satisfy the ergodicity condition (2) and the normalization (1).
$a_{y,x}$: probability of accepting the move.

Note: $T^0_{x,x} = 0$, so that T satisfies (1).

To satisfy (3) we will require detailed balance:

$T^0_{y,x}\, a_{y,x}\, P(x) = T^0_{x,y}\, a_{x,y}\, P(y)$

Ansatz: $a_{y,x} = F(Z)$ and $a_{x,y} = F(1/Z)$, with $Z = \dfrac{T^0_{x,y}\, P(y)}{T^0_{y,x}\, P(x)} > 0$.

Detailed balance is then satisfied provided

$\dfrac{F(Z)}{F(1/Z)} = Z$

Two standard choices:

$F(Z) = \min(Z, 1)$ (Metropolis) or $F(Z) = \dfrac{Z}{1+Z}$ (Heatbath).
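A generic acceptance step following this construction might look as below (a sketch; `propose` and `prob` are placeholder names, and a symmetric proposal is assumed, neither of which is specified in the lecture):

```python
import numpy as np

rng = np.random.default_rng()

def metropolis_step(x, propose, prob):
    """One Metropolis update: propose y, accept with F(Z) = min(Z, 1).

    Assumes a symmetric proposal, T0(y|x) = T0(x|y), so that Z = P(y)/P(x)."""
    y = propose(x)
    Z = prob(y) / prob(x)
    if rng.random() < min(Z, 1.0):
        return y          # move accepted
    return x              # move rejected: keep the old configuration
```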
Ergodicity.

To achieve ergodicity, one will often want to combine different types of moves. Let $T^{(i)},\, i = 1 \dots N$, satisfy (1) and (3). We can combine those moves randomly,

$T^R = \sum_i \lambda_i\, T^{(i)}$ with $\sum_i \lambda_i = 1$,

or sequentially,

$T^S = \prod_i T^{(i)}$,

to achieve ergodicity.

Note: If the $T^{(i)},\, i = 1 \dots N$, satisfy the detailed balance condition, then $T^R$ satisfies the detailed balance condition, but $T^S$ satisfies only the stationarity condition.
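These statements can be verified numerically; the sketch below (a hypothetical 3-state example, not from the lecture) checks stationarity of a random and a sequential combination of two detailed-balance matrices:

```python
import numpy as np

P = np.array([0.5, 0.3, 0.2])     # target distribution (hypothetical)

def metropolis_matrix(pairs):
    """Detailed-balance matrix that only mixes the given pairs of states."""
    T = np.eye(len(P))
    for x, y in pairs:
        T[y, x] = 0.5 * min(P[y] / P[x], 1.0)   # propose the partner with prob 1/2
        T[x, y] = 0.5 * min(P[x] / P[y], 1.0)
        T[x, x] -= T[y, x]
        T[y, y] -= T[x, y]
    return T

T1 = metropolis_matrix([(0, 1)])  # move type 1: mixes states 0 and 1
T2 = metropolis_matrix([(1, 2)])  # move type 2: mixes states 1 and 2

TR = 0.5 * T1 + 0.5 * T2          # random combination
TS = T2 @ T1                      # sequential combination

print(np.allclose(TR @ P, P), np.allclose(TS @ P, P))  # both stationary: True True
```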
Autocorrelation time and error analysis: Binning analysis.
Monte Carlo simulation:
1) Start with configuration $x_0$.
2) Propose a move from $x_0$ to $y$ according to $T^0_{y,x_0}$ and accept it with probability $a_{y,x_0}$.
3) Set $x_1 = y$ if the move is accepted, $x_1 = x_0$ otherwise.
4) Go to 2), with $x_1$ as the new starting configuration.

This generates a sequence $x_0 \to x_1 \to \dots \to x_N$ which, if N is large enough, will be distributed according to P(x), so that

$\langle O \rangle_P \approx \frac{1}{N} \sum_s O(x_s)$

Autocorrelation time: successive configurations are correlated. Define

$C_O(t) = \dfrac{ \frac{1}{N} \sum_s O(x_s)\, O(x_{s+t}) - \left[ \frac{1}{N} \sum_s O(x_s) \right]^2 }{ \frac{1}{N} \sum_s O(x_s)^2 - \left[ \frac{1}{N} \sum_s O(x_s) \right]^2 } \sim e^{-t/\tau_O}$

The relevant time scale to forget the memory of the initial configuration is $\tau_O$, and one requires $N \gg \tau_O$.
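A sketch of how one might estimate $C_O(t)$ from a recorded time series (plain NumPy, assuming the measurements are stored in a 1-D array):

```python
import numpy as np

def autocorrelation(o, t_max):
    """Normalized autocorrelation function C_O(t) of a measurement series."""
    o = np.asarray(o, dtype=float)
    mean, var = o.mean(), o.var()
    c = np.empty(t_max)
    for t in range(t_max):
        # <O(s) O(s+t)> - <O>^2, normalized by the variance
        c[t] = np.mean((o[: len(o) - t] - mean) * (o[t:] - mean)) / var
    return c
```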
To use the central limit theorem to evaluate the error, we need statistically independent measurements.

Binning. Group the raw data into bins of size $n_0$,

$\tilde{O}_n(t) = \frac{1}{n_0} \sum_{s=1}^{n_0} O\!\left(x_{(t-1)\, n_0 + s}\right)$,

and estimate the error with

$\sigma_n^2 = \frac{1}{M} \left[ \frac{1}{M} \sum_{s=1}^{M} \tilde{O}_n(s)^2 - \left( \frac{1}{M} \sum_{s=1}^{M} \tilde{O}_n(s) \right)^{\!2} \right]$, $M = N/n_0$.

If $n_0$ is large enough ($n_0$ of order 5-10 autocorrelation times) the error becomes independent of the bin size.
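A minimal binning-analysis sketch (NumPy; dropping the trailing measurements that do not fill a complete bin is an assumption on my part):

```python
import numpy as np

def binned_error(o, n0):
    """Error estimate from bins of size n0, following the formula above."""
    o = np.asarray(o, dtype=float)
    M = len(o) // n0                                 # number of complete bins
    bins = o[: M * n0].reshape(M, n0).mean(axis=1)   # bin averages O~_n(t)
    return np.sqrt((np.mean(bins**2) - bins.mean()**2) / M)

# The estimate should grow with n0 and saturate once n0 >> tau_O.
```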
Example. The one dimensional Ising model.

$H = -J \sum_{i=1}^{L} \sigma_i \sigma_{i+1} - h \sum_{i=1}^{L} \sigma_i$, with $\sigma_{L+1} = \sigma_1$ and $\sigma_i = \pm 1$.

We want to compute spin-spin correlation functions:

$g(r) = \langle \sigma_i \sigma_{i+r} \rangle = \dfrac{\mathrm{Tr}\left[ \sigma_i \sigma_{i+r}\, e^{-\beta H} \right]}{\mathrm{Tr}\left[ e^{-\beta H} \right]}$

Algorithm:
- Choose a site randomly.
- Propose a spin flip.
- Accept with Metropolis or heat bath.
- Carry out the measurement, e.g. after a sweep.

Example of error analysis, L = 24 1D Ising model, $O = g(L/2)$:

βJ | g(L/2) exact | g(L/2) MC
1  | 0.0760       | 0.076 +/- 0.0018
2  | 0.9106       | 0.909 +/- 0.0025

Results obtained after $2 \times 10^6$ sweeps.
(Figure: binning analysis of the error; in panel (a) the bin size is measured in units of a single sweep, in panel (b) in units of the autocorrelation time as determined from (a).)
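A compact sketch of this algorithm in Python (h = 0, single-spin-flip Metropolis, one measurement per sweep; the sweep and warm-up counts here are my own choices, far fewer than the 2×10^6 sweeps used in the table):

```python
import numpy as np

rng = np.random.default_rng(1)

def ising_g_half(L=24, beta_J=1.0, sweeps=20000, warmup=2000):
    """Metropolis estimate of g(L/2) for the periodic 1D Ising chain (h = 0)."""
    sigma = rng.choice([-1, 1], size=L)
    measurements = []
    for sweep in range(sweeps + warmup):
        for _ in range(L):                       # one sweep = L attempted flips
            i = rng.integers(L)
            # Energy change (in units of J) of flipping spin i
            dE = 2.0 * sigma[i] * (sigma[(i - 1) % L] + sigma[(i + 1) % L])
            if rng.random() < min(np.exp(-beta_J * dE), 1.0):
                sigma[i] = -sigma[i]
        if sweep >= warmup:                      # measure after each sweep
            measurements.append(np.mean(sigma * np.roll(sigma, L // 2)))
    return np.array(measurements)

m = ising_g_half(beta_J=1.0)
print(m.mean())   # should be compatible with the exact value 0.0760
```

The raw measurements can then be fed to the `binned_error` routine above to obtain the error bar.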
Random number generators. Linear congruential:

$I_{j+1} = a\, I_j \ (\mathrm{mod}\ m)$, $x_j = I_j / m \in [0,1[$

$a = 7^5$, $m = 2^{31} - 1$. Period: $\sim 2^{31}$, 32-bit integers.

(Ref: Numerical Recipes, Cambridge University Press.)

Deterministic (i.e. pseudo-random): for a given initial value of I the sequence of random numbers is reproducible.
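A direct transcription of this recursion (the Park-Miller "minimal standard" generator; Python integers do not overflow, so no special tricks such as Schrage's factorization are needed here):

```python
A = 7**5            # a = 16807
M = 2**31 - 1       # m = 2147483647 (prime)

def lcg(seed, n):
    """Yield n pseudo-random numbers x_j = I_j / m in [0, 1[."""
    I = seed        # any integer in 1 .. m-1
    for _ in range(n):
        I = (A * I) % M
        yield I / M

sample = list(lcg(seed=12345, n=5))   # reproducible for a fixed seed
```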
Quality checks.

(1) Distribution: the $x_j$ should be uniformly distributed in [0,1[.

(2) Correlations:

$C(t) = \frac{1}{N} \sum_s x_s\, x_{s+t} - \left( \frac{1}{N} \sum_s x_s \right)^{\!2} \to 0$ for $t \neq 0$.

(3) 2-tuples:

(Figure: scatter plot of the 2-tuples $(x_i, x_{i+1})$ in the unit square; a good generator fills the square uniformly, without lattice structure.)
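These checks can be run directly on the generator defined above (a sketch; the sample size and the quantities printed are illustrative choices, not from the lecture):

```python
import numpy as np

x = np.fromiter(lcg(seed=12345, n=100000), dtype=float)

# (1) Distribution: moments of the uniform distribution on [0, 1[
print(x.mean(), x.var())            # expect ~0.5 and ~1/12

# (2) Correlations: C(t) for a few lags t > 0 should be close to zero
for t in (1, 2, 10):
    C = np.mean(x[:-t] * x[t:]) - x.mean() ** 2
    print(t, C)

# (3) 2-tuples: pairs (x_i, x_{i+1}) should fill the unit square uniformly,
# e.g. the fraction of pairs below the diagonal should be ~0.5
print(np.mean(x[:-1] > x[1:]))
```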
The generation of good pseudo-random numbers is a quite delicate issue which requires some care and extensive quality checks. It is therefore highly recommended not to invent one's own secret recursion rules, but to use one of the well-known generators which have been tested by many other workers in the field.