Transcript Document

Lecture 13
•Fluctuations.
•Fluctuations of macroscopic variables.
•Correlation functions.
•Response and Fluctuation.
•Density correlation function.
•Theory of random processes.
•Spectral analysis of fluctuations: the Wiener-Khintchine theorem.
•The Nyquist theorem.
•Applications of the Nyquist theorem.
So far we have considered systems in equilibrium, where we computed
statistical averages of various physical quantities. Nevertheless,
deviations from, or fluctuations about, these mean values do occur.
Though they are generally small, the study of these fluctuations
is of great physical interest for several reasons.
1. It enables us to develop a mathematical scheme with the help of
which the magnitude of the relevant fluctuations, under a variety
of physical situations, can be estimated. We find that while in a
single-phase system the fluctuations are thermodynamically
negligible they can assume considerable importance in multi-phase
systems, especially in the neighborhood of the critical points. In
the latter case we obtain a rather high degree of spatial
correlation among the molecules of the system which in turn gives
rise to phenomena such as critical opalescence.
2. It provides a natural framework for understanding a class of
physical phenomena which come under the common heading of
“Brownian motion”; these phenomena relate properties such as the
mobility of a fluid system, its coefficient of diffusion, etc., with
temperature through the so-called Einstein relations. The
mechanism of Brownian motion is vital in formulating, and in a
certain sense solving, the problems of how “a given physical system,
which is not in a state of equilibrium, finally approaches a state of
equilibrium”, while “a physical system, which is already in a state of
equilibrium, persists in that state”.
3. The study of fluctuations as a function of time leads to the
concept of correlation functions, which play an important role in
relating the dissipative properties of a system, such as the viscous
resistance of a fluid or the electrical resistance of a conductor, with
the microscopic properties of the system in a state of equilibrium.
This relationship (between irreversible processes on one hand and
equilibrium properties on the other) manifests itself in the
so-called fluctuation-dissipation theorem.
At the same time, a study of the “frequency spectrum” of fluctuations,
which is related to the time-dependent correlation function through the
fundamental theorem of Wiener and Khintchine, is of considerable
value in assessing the “noise” met with in electrical circuits as well as in
the transmission of electromagnetic signals.
Fluctuations
The deviation x of a quantity x from its average value
as
We note that
x
is defined
x  x  x
(13.1)
x  x  x  0
(13.2)
We look to the mean square deviation for the first rough measure of
the fluctuation:
(x) 2  ( x  x ) 2  x 2  2 xx  x 2  x 2  x 2
(13.3)
4
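The identity (13.3) is easy to check numerically. The sketch below draws samples from an arbitrary illustrative distribution (an exponential, chosen only as an example) and compares the mean square deviation with the difference of moments:

```python
import numpy as np

# Check eq. (13.3): <(dx)^2> = <x^2> - <x>^2 on a sample drawn from an
# arbitrary (illustrative) distribution.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)

mean = x.mean()
mean_square_deviation = np.mean((x - mean) ** 2)   # <(x - <x>)^2>
identity_rhs = np.mean(x ** 2) - mean ** 2         # <x^2> - <x>^2

print(mean_square_deviation, identity_rhs)
assert np.isclose(mean_square_deviation, identity_rhs)
```

The two expressions agree to floating-point precision, as the algebra requires.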
We usually work with the mean square deviation, although it is
sometimes necessary to consider also the mean fourth deviation. This
occurs, for example, in considering nuclear resonance line shapes in
liquids. One refers to \overline{x^n} as the n-th moment of the distribution.
Consider the distribution g(x)dx which gives the number of systems in
dx at x. In principle the distribution g(x) can be determined from a
knowledge of all the moments, but in practice this connection is not
always of help. The theorem is usually proved as follows: we take the
Fourier transform of the distribution:
u(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(x)\, e^{ixt}\, dx \qquad (13.4)
Now it is obvious on differentiating u(t) that

\overline{x^n} = 2\pi\, i^{-n} \left[ \frac{d^n}{dt^n}\, u(t) \right]_{t=0} \qquad (13.5)
Thus if u(t) is an analytic function, the moments give us all the
information needed to obtain the Taylor series expansion of u(t); the
inverse Fourier transform of u(t) then gives g(x) as required. However,
the higher moments are really needed to use this theorem, and they are
sometimes hard to calculate. The function u(t) is sometimes called the
characteristic function of the distribution.
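A minimal numerical sketch of (13.4)–(13.5), using a Gaussian g(x) as an illustrative distribution (the grid sizes and step h are arbitrary choices, not anything from the lecture): the second moment recovered from the second derivative of u(t) at t = 0 should equal σ².

```python
import numpy as np

# Characteristic function with the 1/2pi convention of eq. (13.4),
# for an illustrative Gaussian g(x) with sigma = 1.
sigma = 1.0
x = np.linspace(-8 * sigma, 8 * sigma, 16001)
dx = x[1] - x[0]
g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def u(t):
    return np.sum(g * np.exp(1j * x * t)) * dx / (2 * np.pi)

# Eq. (13.5) for n = 2: <x^2> = 2pi * i^(-2) * u''(0) = -2pi * u''(0),
# with u''(0) estimated by a central difference.
h = 1e-3
u2 = (u(h) - 2 * u(0.0) + u(-h)) / h**2
second_moment = (-2 * np.pi * u2).real

print(second_moment)   # close to sigma^2 = 1
assert abs(second_moment - sigma**2) < 1e-4
```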
Energy Fluctuations in a Canonical Ensemble
When a system is in thermal equilibrium with a reservoir, the
temperature τ_s of the system is defined to be equal to the
temperature τ_r of the reservoir, and it has strictly no meaning to ask
questions about the temperature fluctuation. The energy of the
system will, however, fluctuate as energy is exchanged with the
reservoir. For a canonical ensemble we have
\overline{E^2} = \sum_n E_n^2\, e^{-E_n/\tau} \Big/ \sum_n e^{-E_n/\tau}

where β = −1/τ. Now

\overline{E^2} = \sum_n E_n^2\, e^{\beta E_n} \Big/ \sum_n e^{\beta E_n} \qquad (13.6)

Z = \sum_n e^{\beta E_n} \qquad (13.7)

so that

\overline{E^2} = \frac{\partial^2 Z / \partial \beta^2}{Z} \qquad (13.8)

Further

\bar{E} = \frac{\partial Z / \partial \beta}{Z} \qquad (13.9)

and

\frac{\partial \bar{E}}{\partial \beta} = \frac{1}{Z} \frac{\partial^2 Z}{\partial \beta^2} - \left( \frac{1}{Z} \frac{\partial Z}{\partial \beta} \right)^2 \qquad (13.10)

thus

\frac{\partial \bar{E}}{\partial \beta} = \overline{E^2} - \bar{E}^2 = \overline{(\Delta E)^2} \qquad (13.11)
Now the heat capacity at constant values of the external parameters
is given by

C_V = \frac{\partial \bar{E}}{\partial T} = \frac{\partial \bar{E}}{\partial \beta} \frac{d\beta}{dT} = \frac{\partial \bar{E}}{\partial \beta} \cdot \frac{1}{kT^2} \qquad (13.12)

thus

\overline{(\Delta E)^2} = kT^2 C_V \qquad (13.13)
Here C_V refers to the heat capacity at the actual volume of the system.
The fractional fluctuation in energy is defined by

F \equiv \left[ \frac{\overline{(\Delta E)^2}}{\bar{E}^2} \right]^{1/2} = \left[ \frac{kT^2 C_V}{\bar{E}^2} \right]^{1/2} \qquad (13.14)
We note then that the act of defining the temperature of a system by
bringing it into contact with a heat reservoir leads to an uncertainty in
the value of the energy. A system in thermal equilibrium with a heat
reservoir does not have a precisely constant energy. Ordinary
thermodynamics is useful only so long as the fractional fluctuation in
energy is small.
For a perfect gas, for example, we have

C_V \approx Nk \qquad \bar{E} \approx NkT

thus

F \approx \frac{1}{\sqrt{N}} \qquad (13.15)

For N = 10^22, F ≈ 10^-11, which is negligibly small.
Consider now a solid at low temperatures. According to the Debye law,
the heat capacity of a dielectric solid for T << Θ_D is

C_V \approx Nk\, (T/\Theta_D)^3 \qquad (13.16)

and also

\bar{E} \approx NkT\, (T/\Theta_D)^3 \qquad (13.17)

so that

F \approx \left[ \frac{1}{N} \left( \frac{\Theta_D}{T} \right)^3 \right]^{1/2} \qquad (13.18)
Suppose that T = 10^-2 K, Θ_D = 200 K, and N ≈ 10^16 for a particle 0.01
cm on a side. Then

F ≈ 0.03 \qquad (13.19)

which is not inappreciable. At very low temperatures thermodynamics
fails for a fine particle, in the sense that we cannot know E and T
simultaneously to reasonable accuracy. At 10^-5 K the fractional
fluctuation in energy is of the order of unity for a dielectric particle of
volume 1 cm³.
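The estimate (13.18)–(13.19) can be reproduced in a couple of lines; the numbers below are exactly the ones quoted above:

```python
import math

# RMS fractional energy fluctuation of a Debye solid, eq. (13.18):
# F = [ (1/N) * (Theta_D / T)^3 ]^(1/2)
def fractional_fluctuation(N, theta_D, T):
    return math.sqrt((theta_D / T) ** 3 / N)

# Particle 0.01 cm on a side: N ~ 1e16, Theta_D = 200 K, T = 1e-2 K
F = fractional_fluctuation(N=1e16, theta_D=200.0, T=1e-2)
print(F)   # about 0.03, as in eq. (13.19)
assert 0.02 < F < 0.04
```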
Concentration Fluctuations in a Grand Canonical Ensemble
We have the grand partition function

Z = \sum_{N,i} e^{(N\mu - E_{N,i})/\tau} \qquad (13.20)

from which we may calculate

\bar{N} = \frac{\tau}{Z} \frac{\partial Z}{\partial \mu} = \tau \frac{\partial}{\partial \mu} \ln Z \qquad (13.21)
and

\overline{N^2} = \sum_{N,i} N^2\, e^{(N\mu - E_{N,i})/\tau} \Big/ \sum_{N,i} e^{(N\mu - E_{N,i})/\tau} = \frac{\tau^2}{Z} \frac{\partial^2 Z}{\partial \mu^2} \qquad (13.22)

Thus

\overline{(\Delta N)^2} = \overline{N^2} - \bar{N}^2 = \tau^2 \left[ \frac{1}{Z} \frac{\partial^2 Z}{\partial \mu^2} - \left( \frac{1}{Z} \frac{\partial Z}{\partial \mu} \right)^2 \right] \qquad (13.23)
Perfect Classical Gas
From an earlier result,

\bar{N} = e^{\mu/\tau}\, V/\lambda^3 \qquad (13.24)

where λ is the thermal de Broglie wavelength. Thus

\partial \bar{N} / \partial \mu = \bar{N}/\tau \qquad (13.25)

and using (13.23)

\overline{(\Delta N)^2} = \bar{N} \qquad (13.26)

The fractional fluctuation is given by

F = \left[ \overline{(\Delta N)^2} / \bar{N}^2 \right]^{1/2} = \frac{1}{\sqrt{\bar{N}}} \qquad (13.27)
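The result (13.26), variance equal to the mean, is the hallmark of Poisson statistics. A quick simulated check, with a hypothetical mean occupation of 1000 particles in the subvolume:

```python
import numpy as np

# Concentration fluctuations in an open subvolume are Poissonian:
# <(dN)^2> = <N> (eq. 13.26), so F = 1/sqrt(<N>) (eq. 13.27).
rng = np.random.default_rng(1)
N = rng.poisson(lam=1000, size=200_000)

mean_N = N.mean()
var_N = N.var()
frac = np.sqrt(var_N) / mean_N    # fractional fluctuation

print(mean_N, var_N, frac)
assert abs(var_N / mean_N - 1.0) < 0.02
assert abs(frac - 1.0 / np.sqrt(mean_N)) < 1e-3
```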
Random Variables
A stochastic or random variable is a quantity with a definite range of
values, each one of which, depending on chance, can be attained with
a definite probability. A stochastic variable is defined
1. if the set of possible values is given, and
2. if the probability of attaining each value is also given.
Thus the number of points on a die that is tossed is a stochastic
variable with six values, each having the probability 1/6.
The sum of a large number of independent stochastic variables is itself
a stochastic variable. There exists a very important theorem known as
the central limit theorem, which says that under very general conditions
the distribution of the sum tends toward a normal (Gaussian)
distribution law as the number of terms is increased. The theorem may
be stated rigorously as follows.

Let x₁, x₂, …, xₙ be independent stochastic variables with their means
equal to 0, possessing absolute moments β_{2+δ}^{(i)} of order 2+δ, where
δ is some number > 0. If, denoting by Bₙ the mean square fluctuation of
the sum x₁ + x₂ + … + xₙ, the quotient
w_n = \frac{\sum_{i=1}^{n} \beta_{2+\delta}^{(i)}}{B_n^{1+\delta/2}} \qquad (13.28)

tends to zero as n → ∞, then the probability of the inequality

\frac{x_1 + x_2 + \dots + x_n}{\sqrt{B_n}} < t

tends uniformly to the limit

\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^2/2}\, du \qquad (13.29)

For a distribution f(x_i), the absolute moment of order α is defined as

\beta_\alpha^{(i)} = \int_{-\infty}^{\infty} |x_i|^\alpha\, f(x_i)\, dx_i \qquad (13.30)
Almost all the probability distributions f(x) of stochastic variables x of
interest to us in physical problems will satisfy the requirements of the
central limit theorem. Let us consider several examples.
Example 13a
The variable x is distributed uniformly between ±1. Then f(x) = 1/2 for
−1 ≤ x ≤ 1, and f(x) = 0 otherwise. The absolute moment of order 3
exists:

\beta_3 = \frac{1}{2} \int_{-1}^{1} |x|^3\, dx = \frac{1}{4} \qquad (13.32)

The mean square fluctuation is

\overline{(\Delta x)^2} = \overline{x^2} - \bar{x}^2 \qquad (13.33)

but \bar{x} = 0. We have

\overline{(\Delta x)^2} = \overline{x^2} = 2 \cdot \frac{1}{2} \int_{0}^{1} x^2\, dx = \frac{1}{3} \qquad (13.34)
0
If there are n independent variables x_i, it is easy to see that the mean
square fluctuation B_n of their sum (under the same distribution) is

B_n = n/3 \qquad (13.35)

Thus (for δ = 1) we have for (13.28) the result

w_n = \frac{n/4}{(n/3)^{3/2}} \qquad (13.36)

which does tend to zero as n → ∞. Therefore the central limit theorem
holds for this example.
Example 13b
The variable x is a normal variable with standard deviation σ; that
means that it is distributed according to the Gaussian distribution

f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-x^2/2\sigma^2} \qquad (13.37)

where σ² is the mean square deviation and σ is called the standard
deviation. The absolute moment of order 3 exists:

\beta_3 = \frac{2}{\sigma \sqrt{2\pi}} \int_{0}^{\infty} x^3\, e^{-x^2/2\sigma^2}\, dx = \frac{4\sigma^3}{\sqrt{2\pi}} \qquad (13.38)
The mean square fluctuation is

\overline{x^2} = \frac{2}{\sigma \sqrt{2\pi}} \int_{0}^{\infty} x^2\, e^{-x^2/2\sigma^2}\, dx = \sigma^2 \qquad (13.39)

If there are n independent variables x_i, then

B_n = n\sigma^2 \qquad (13.40)

For δ = 1

w_n = \frac{4 n \sigma^3 / \sqrt{2\pi}}{(n\sigma^2)^{3/2}} \qquad (13.41)

which approaches 0 as n approaches ∞. Therefore the central limit
theorem applies to this example. A Gaussian random process is one
for which all the basic distribution functions f(x_i) are Gaussian
distributions.
Example 13c
The variable x has a Lorentzian distribution:

f(x) = \frac{1}{\pi}\, \frac{1}{1 + x^2} \qquad (13.42)

The absolute moment of order α is proportional to

\int_{0}^{\infty} \frac{x^\alpha}{1 + x^2}\, dx \qquad (13.43)

But this integral does not converge for α ≥ 1, and thus not for
α = 2 + δ, δ > 0. We see that the central limit theorem does not apply
to a Lorentzian distribution.
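Examples 13a and 13c can be contrasted by simulation. This sketch (sample sizes are arbitrary) normalizes sums of uniform variables by √Bₙ with Bₙ = n/3 and sees a unit-variance Gaussian emerge, while means of Lorentzian (Cauchy) variables keep producing enormous outliers no matter how large n is:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 300, 10_000

# Example 13a: sums of uniform variables on [-1, 1], B_n = n/3.
uniform_sums = rng.uniform(-1, 1, size=(trials, n)).sum(axis=1)
z = uniform_sums / np.sqrt(n / 3.0)     # normalize by sqrt(B_n)
print(z.std())                          # close to 1, as the CLT predicts
assert abs(z.std() - 1.0) < 0.05

# Example 13c: the sample mean of n Cauchy variables is again standard
# Cauchy, so heavy tails persist for any n.
cauchy_means = rng.standard_cauchy(size=(trials, n)).mean(axis=1)
print(np.abs(cauchy_means).max())       # huge outliers remain
assert np.abs(cauchy_means).max() > 100.0
```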
Random Process or Stochastic Process
By a random process or stochastic process x(t) we mean a process in
which the variable x does not depend in a completely definite way on
the independent variable t, which may denote the time. In
observations on the different systems of a representative ensemble
we find different functions x(t). All we can do is to study certain
probability distributions - we cannot obtain the functions x(t)
themselves for the members of the ensemble. In Figure 13.1 one can
see a sketch of a possible x(t) for one system.
Figure 13.1 Sketch of a random process x(t).
The plot might, for example, be an oscillogram of the thermal noise
current x(t) ≡ I(t) obtained from the output of a filter when a thermal
noise voltage is applied to the input.
We can determine, for example,

p₁(x,t)dx = probability of finding x in the range (x, x+dx) at time t; (13.44)

p₂(x₁,t₁; x₂,t₂)dx₁dx₂ = probability of finding x in (x₁, x₁+dx₁) at time t₁
and in the range (x₂, x₂+dx₂) at time t₂. (13.45)
If we had an actual oscillogram record covering a long period of time
we might construct an ensemble by cutting the record up into strips of
equal length T and mounting them one over the other, as in Figure
13.2.
The probabilities p₁ and p₂ will be found from the ensemble.
Proceeding similarly we can form p₃, p₄, …. The whole set of
probability distributions pₙ (n = 1, 2, …) may be necessary to
describe the random process completely.

Figure 13.2 Recordings of x(t) versus t for three systems of an
ensemble, as simulated by taking three intervals of duration T from
a single long recording. Time averages are taken in a horizontal
direction in such a display; ensemble averages are taken in a
vertical direction.
In many important cases p₂ contains all the information we need.
When this is true the random process is called a Markoff process. A
stationary random process is one for which the joint probability
distributions pₙ are invariant under a displacement of the origin of time.
We assume in all our further discussion that we are dealing with
stationary Markoff processes.

It is useful to introduce the conditional probability P₂(x₁,0|x₂,t)dx₂ for
the probability that, given x₁, one finds x in dx₂ at x₂ a time t later.
Then it is obvious that

p_2(x_1, 0;\, x_2, t) = p_1(x_1, 0)\, P_2(x_1, 0 \,|\, x_2, t) \qquad (13.46)
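Relation (13.46) can be illustrated with a simulated stationary Markoff process. The two-state chain and its transition matrix below are hypothetical illustrations; the empirical one-step joint distribution should match p₁(x₁) · P₂(x₁|x₂):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])     # conditional probabilities P2 for one time step
pi = np.array([2/3, 1/3])      # stationary distribution p1 (pi = pi @ P)

rng = np.random.default_rng(3)
steps = 200_000
state = 0
counts = np.zeros((2, 2))
for u in rng.random(steps):
    nxt = 0 if u < P[state, 0] else 1
    counts[state, nxt] += 1
    state = nxt

joint = counts / steps         # empirical p2(x1,0; x2,t), t = one step
print(joint)
# Eq. (13.46): joint probability = p1 * conditional probability
assert np.allclose(joint, pi[:, None] * P, atol=0.02)
```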
Wiener-Khintchine Theorem
The Wiener-Khintchine theorem states a relationship between two
important characteristics of a random process: the power spectrum
of the process and the correlation function of the process.

Suppose we develop one of the records in Fig. 13.2 of x(t) for 0 < t < T
in a Fourier series:

x(t) = \sum_{n=1}^{\infty} \left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right) \qquad (13.47)

where f_n = n/T. We assume that ⟨x(t)⟩ = 0, where the angle brackets
⟨⟩ denote a time average; because the average is assumed zero, there
is no constant term in the Fourier series.

The Fourier coefficients are highly variable from one record of duration
T to another. For many types of noise the a_n, b_n have Gaussian
distributions. When this is true the process (13.47) is said to be a
Gaussian random process.
Let us now imagine that x(t) is an electric current flowing through unit
resistance. The instantaneous power dissipation is x²(t). Each Fourier
component will contribute to the total power dissipation. The power in
the n-th component is

P_n = \left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right)^2 \qquad (13.48)

We do not consider cross-product terms in the power of the form

\left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right) \left( a_m \cos 2\pi f_m t + b_m \sin 2\pi f_m t \right) \qquad (13.49)

because for n ≠ m the time average of such terms will be zero. The
time average of P_n is

\langle P_n \rangle = (a_n^2 + b_n^2)/2 \qquad (13.50)

because

\langle \cos^2 2\pi f_n t \rangle = \tfrac{1}{2}; \quad \langle \sin^2 2\pi f_n t \rangle = \tfrac{1}{2}; \quad \langle \cos 2\pi f_n t\, \sin 2\pi f_n t \rangle = 0 \qquad (13.51)
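The time averages in (13.51) are easy to confirm on a discrete grid covering an integer number of periods (the frequency and duration below are arbitrary illustrative choices):

```python
import numpy as np

# Check eq. (13.51): <cos^2> = <sin^2> = 1/2 and <cos*sin> = 0 over an
# integer number of periods (here f_n = 5 Hz over T = 1 s).
f_n = 5.0
t = np.linspace(0.0, 1.0, 200_000, endpoint=False)
c = np.cos(2 * np.pi * f_n * t)
s = np.sin(2 * np.pi * f_n * t)

print((c**2).mean(), (s**2).mean(), (c*s).mean())
assert abs((c**2).mean() - 0.5) < 1e-6
assert abs((s**2).mean() - 0.5) < 1e-6
assert abs((c*s).mean()) < 1e-6
```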
We now turn to ensemble averages, denoted here by a bar over the
quantity. As mentioned above, every record in Fig. 13.2 runs in time
from 0 to T. We will consider that an ensemble average is an average
over a large set of independent records. For a random process we will
have

\bar{a}_n = 0; \quad \bar{b}_n = 0; \quad \overline{a_n b_n} = 0 \qquad (13.52)

\overline{a_n a_m} = \overline{b_n b_m} = \sigma_n^2\, \delta_{nm} \qquad (13.53)

where for a Gaussian random process σ_n is just the standard
deviation, as in Example 13b:

f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-x^2/2\sigma^2}

Thus

\overline{\left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right)^2} = \sigma_n^2 \left( \cos^2 2\pi f_n t + \sin^2 2\pi f_n t \right) = \sigma_n^2 \qquad (13.54)
Thus from (13.50) the ensemble average of the time-average power
dissipation associated with the n-th component of x(t) is

\overline{\langle P_n \rangle} = \sigma_n^2 \qquad (13.55)
Power Spectrum
We define the power spectrum or spectral density G(f) of the random
process as the ensemble average of the time average of the power
dissipation in unit resistance per unit frequency bandwidth. If Δf_n is
equal to the separation between two adjacent frequencies,

\Delta f_n = f_{n+1} - f_n = \frac{n+1}{T} - \frac{n}{T} = \frac{1}{T} \qquad (13.56)

we have

G(f_n)\, \Delta f_n = \overline{\langle P_n \rangle} = \sigma_n^2 \qquad (13.57)

Now by (13.51), (13.52) and (13.53)
\overline{\langle x^2(t) \rangle} = \sum_n \sigma_n^2 \qquad (13.58)

Using (13.56),

\overline{\langle x^2(t) \rangle} = \sum_n G(f_n)\, \Delta f_n \to \int_0^{\infty} G(f)\, df \qquad (13.59)

The integral of the power spectrum over all frequencies gives the
ensemble average total power.
Correlation Function
Let us consider now the correlation function

C(\tau) = \langle x(t)\, x(t+\tau) \rangle \qquad (13.60)

where the average is over the time t. This is the autocorrelation
function. Without changing the result we may take an ensemble
average of the time average \langle x(t)\, x(t+\tau) \rangle, so that

C(\tau) = \overline{\langle x(t)\, x(t+\tau) \rangle} = \overline{\Big\langle \sum_{n,m} \left( a_n \cos 2\pi f_n t + b_n \sin 2\pi f_n t \right) \left( a_m \cos 2\pi f_m (t+\tau) + b_m \sin 2\pi f_m (t+\tau) \right) \Big\rangle}
= \sum_n \tfrac{1}{2} \left( \overline{a_n^2} + \overline{b_n^2} \right) \cos 2\pi f_n \tau = \sum_n \sigma_n^2 \cos 2\pi f_n \tau \qquad (13.61)
Using (13.57),

C(\tau) = \int_0^{\infty} G(f) \cos 2\pi f \tau\, df \qquad (13.62)

Thus the correlation function is the Fourier cosine transform of the
power spectrum. Using the inverse Fourier transform we can write

G(f) = 4 \int_0^{\infty} C(\tau) \cos 2\pi f \tau\, d\tau \qquad (13.63)

This, together with (13.62), is the Wiener-Khintchine theorem. It has
an obvious physical content: the correlation function tells us essentially
how rapidly the random process is changing.
Example 13d
If

C(\tau) = e^{-|\tau|/\tau_c} \qquad (13.64)

we may say that τ_c is a measure of the time over which the system
persists, as measured by x(t), without changing its state by more than
e^{-1}; τ_c in this case has the meaning of a correlation time. We then
expect physically that frequencies much higher than 1/τ_c will not be
represented in an important way in the power spectrum. Now if C(τ) is
given by (13.64), the Wiener-Khintchine theorem tells us that

G(f) = 4 \int_0^{\infty} e^{-\tau/\tau_c} \cos 2\pi f \tau\, d\tau = \frac{4 \tau_c}{1 + (2\pi f \tau_c)^2} \qquad (13.65)

Thus, as shown in Fig. 13.3, the power spectrum is flat (on a
logarithmic frequency scale) out to 2πf ≈ 1/τ_c, and then decreases as
1/f² at high frequencies. Note that the noise spectrum for this
correlation function is “white” out to a cutoff f_c ≈ 1/2πτ_c.
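The transform pair (13.64)–(13.65) can be checked by direct numerical integration (the grid and truncation at 50τ_c are arbitrary numerical choices):

```python
import numpy as np

# Check eq. (13.65): 4 * Int_0^inf exp(-tau/tau_c) cos(2 pi f tau) dtau
# should equal 4 tau_c / (1 + (2 pi f tau_c)^2). tau_c = 1e-4 s as in Fig. 13.3.
tau_c = 1e-4
tau = np.linspace(0.0, 50 * tau_c, 2_000_001)   # integrate far past tau_c
dtau = tau[1] - tau[0]
C = np.exp(-tau / tau_c)

for f in [0.0, 1e3, 1e4, 1e5]:
    g = C * np.cos(2 * np.pi * f * tau)
    integral = (g.sum() - 0.5 * (g[0] + g[-1])) * dtau   # trapezoid rule
    G_num = 4.0 * integral
    G_exact = 4.0 * tau_c / (1.0 + (2 * np.pi * f * tau_c) ** 2)
    print(f, G_num, G_exact)
    assert abs(G_num - G_exact) < 1e-3 * G_exact
```

Note that at f = 10⁴ Hz (2πfτ_c ≈ 6) the spectrum has already fallen well below its flat low-frequency value 4τ_c, as the figure indicates.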
Figure 13.3 Plot of spectral density versus log₁₀ 2πfτ_c for an
exponential correlation function with τ_c = 10⁻⁴ s.
The Nyquist Theorem
The Nyquist theorem is of great importance in experimental physics
and in electronics. The theorem gives a quantitative expression for
the thermal noise generated by a system in thermal equilibrium and is
therefore needed in any estimate of the limiting signal-to-noise ratio
of an experimental setup. In its original form the Nyquist theorem
states that the mean square voltage across a resistor of resistance R
in thermal equilibrium at temperature T is given by

\overline{V^2} = 4RkT\, \Delta f \qquad (13.66)
where f is the frequency band width which the voltage fluctuations
are measured; all Fourier components outside the given range are
ignored. Remember the definition of the spectral density G(f), we may
write Nyquist results as
(13.67)
G( f )  4 RkT
This is not strictly the power density, which would be G(f)/R.
Figure 13.4 The noise generator produces a power spectrum
G(f) = 4RkT. If the filter passes unit frequency range, the resistance
R′ will absorb power kT. R′ is matched to R.
The maximum thermal noise power per unit frequency range delivered
by a resistor to a matched load will be G(f)/4R = kT; the factor of 4
enters where it does because the power delivered to the load R′ is

I^2 R' = V^2 R' / (R + R')^2 \qquad (13.68)
which at match (R′ = R) is V²/4R (Figure 13.4).
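A magnitude estimate from (13.66), for an illustrative 1 MΩ resistor at room temperature measured over a 10 kHz bandwidth:

```python
import math

# RMS Johnson noise voltage, eq. (13.66): V_rms = sqrt(4 R k T df)
k = 1.380649e-23      # Boltzmann constant, J/K
R = 1.0e6             # resistance, ohms (illustrative value)
T = 300.0             # temperature, K
delta_f = 1.0e4       # bandwidth, Hz

v_rms = math.sqrt(4.0 * R * k * T * delta_f)
print(v_rms)          # about 1.3e-5 V, i.e. ~13 microvolts
assert 1e-5 < v_rms < 2e-5
```

A noise level of this order sets the floor for any voltage measurement with such a source impedance and bandwidth.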
We will derive the Nyquist theorem in two ways: first, following the
original transmission line derivation, and, second, using a microscopic
argument.
Transmission line derivation

Figure 13.5 Transmission line of length l with characteristic impedance
Z_c = R and matched terminations.
Consider, as in Figure 13.5, a lossless transmission line of length l and
characteristic impedance Z_c = R, terminated at each end by a
resistance R. The line is therefore matched at each end, in the sense
that all energy traveling down the line will be absorbed without
reflection in the appropriate resistance.
The entire circuit is maintained at temperature T. In analogy to the
argument on black-body radiation (Lecture 8), the transmission line
has two electromagnetic modes (one propagating in each direction) in
the frequency range

\delta f = \frac{c'}{l} \qquad (13.69)

where c′ is the propagation velocity on the line. Each mode has energy

\frac{\hbar \omega}{e^{\hbar \omega / kT} - 1} \qquad (13.70)

in equilibrium. We are usually concerned here with the classical limit
ℏω << kT, so that the thermal energy on the line traveling in one
direction in the frequency range Δf is

kT\, \frac{l\, \Delta f}{c'} \qquad (13.71)
The rate at which energy comes off the line in one direction is

kT\, \Delta f \qquad (13.72)
Because the terminal impedance is matched to the line, the power
coming off the line at one end is absorbed in the terminal impedance
R at that end. The load emits energy at the same rate. The power
input to the load is

I^2 R = kT\, \Delta f \qquad (13.73)

But V = I(2R), so that

\overline{V^2}/R = 4kT\, \Delta f \qquad (13.74)

which is the Nyquist theorem.
Microscopic Derivation
We consider a resistor R with N electrons per unit volume, length l,
cross-sectional area A, and carrier relaxation time τ_c. We treat the
electrons as Maxwellian, but it can be shown that the noise voltage is
independent of such details, involving only the value of the resistance
regardless of the details of the mechanisms contributing to the
resistance.
First note that

V = IR = RAj = RANe\, u \qquad (13.75)

Here V is the voltage, I the current, j the current density, and u the
average (or drift) velocity component of the electrons down the
resistor. Observing that NAl is the total number of electrons in the
specimen,

NAl\, u = \sum_i u_i \qquad (13.76)

summed over all electrons. Thus

V = \frac{Re}{l} \sum_i u_i = \sum_i V_i \qquad (13.77)

where the u_i and V_i are random variables. The spectral density G(f)
has the property that in the range Δf

\overline{V_i^2} = G(f)\, \Delta f \qquad (13.78)
We suppose that the correlation function may be written as

C(\tau) = \overline{V_i(t)\, V_i(t+\tau)} = \overline{V_i^2}\, e^{-|\tau|/\tau_c} \qquad (13.79)

Then, from the Wiener-Khintchine theorem, we have

G(f) = 4 \left( \frac{Re}{l} \right)^2 \overline{u^2} \int_0^{\infty} e^{-\tau/\tau_c} \cos 2\pi f \tau\, d\tau = 4 \left( \frac{Re}{l} \right)^2 \overline{u^2}\, \frac{\tau_c}{1 + (2\pi f \tau_c)^2} \qquad (13.80)

Usually in metals at room temperature τ_c < 10⁻¹³ s, so from dc
through the microwave range 2πfτ_c << 1 and may be neglected. We
recall that

\tfrac{1}{2} m \overline{u^2} = \tfrac{1}{2} kT \qquad (13.81)

(m is the mass of the electron, u the velocity component of an electron
along the resistor), so that

\overline{u^2} = kT/m \qquad (13.82)
Thus in the frequency range Δf

\overline{V^2} = NAl\, \overline{V_i^2} = NAl\, G(f)\, \Delta f = 4 NAl \left( \frac{Re}{l} \right)^2 \frac{kT}{m}\, \tau_c\, \Delta f \qquad (13.83)

or

\overline{V^2} = 4RkT\, \Delta f \qquad (13.84)

Here we have used the relation

\sigma = N e^2 \tau_c / m \qquad (13.85)

from the theory of conductivity, and also the elementary relation

R = l / \sigma A \qquad (13.86)

where σ is the electrical conductivity.
The simplest way to establish (13.85) in a plausible way is to solve the
drift velocity equation

m \left( \frac{d}{dt} + \frac{1}{\tau_c} \right) u = eE \qquad (13.87)

so that in the steady state (or for ωτ_c << 1) we have

u = e \tau_c E / m \qquad (13.88)

giving for the mobility (drift velocity per unit electric field)

\mu = u/E = e \tau_c / m \qquad (13.89)

Then we have for the electrical conductivity

\sigma = j/E = Neu/E = N e^2 \tau_c / m \qquad (13.90)
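Rearranging (13.85) as τ_c = σm/Ne² lets one check the claim that τ_c < 10⁻¹³ s in metals at room temperature. The sketch below plugs in handbook values for copper (the σ and N values are standard reference numbers, not from the lecture):

```python
import math

# Eq. (13.85) rearranged: tau_c = sigma * m / (N * e^2)
e = 1.602176634e-19    # electron charge, C
m = 9.1093837e-31      # electron mass, kg
sigma = 5.96e7         # conductivity of copper at room temperature, S/m
N = 8.5e28             # conduction electrons per m^3 in copper

tau_c = sigma * m / (N * e**2)
print(tau_c)           # ~2.5e-14 s, indeed below 1e-13 s
assert 1e-14 < tau_c < 1e-13
```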