Sect. 1.5: Probability Distributions for Large N (Continuous Distributions)
For the 1-Dimensional Random Walk Problem we’ve found that the probability distribution is binomial:
$$W_N(n_1) = \frac{N!}{n_1!\,n_2!}\,p^{n_1}q^{n_2}, \qquad n_2 = N - n_1, \quad q = 1 - p$$
Mean number of steps to the right: <n1> = Np
Dispersion in n1: <(Δn1)²> = Npq
Relative width: Δ*n1/<n1> = [q/(pN)]^½
As N increases, the mean value increases ∝ N, & the relative width decreases ∝ N^(-½).
(Figure: the binomial distribution WN(n1) for N = 20, p = q = ½.)
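A quick numerical check of these results; this is my own illustration (not from the text), using SciPy's binomial distribution for the N = 20, p = q = ½ case in the figure.

```python
# Hypothetical check, not from Reif: verify <n1> = Np, <(Δn1)^2> = Npq, and the
# relative width ~ sqrt(q/(pN)) for the binomial random-walk distribution.
import numpy as np
from scipy.stats import binom

N, p = 20, 0.5
q = 1 - p
n1 = np.arange(N + 1)
W = binom.pmf(n1, N, p)                 # W_N(n1) = N!/(n1! n2!) p^n1 q^n2

mean = np.sum(n1 * W)                   # -> N*p = 10
disp = np.sum((n1 - mean) ** 2 * W)     # -> N*p*q = 5
rel_width = np.sqrt(disp) / mean        # -> sqrt(q/(p*N)) ≈ 0.224
print(mean, disp, rel_width)
```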
• Imagine N getting larger & larger. Based on what we
just said, the relative width of WN(n1) gets smaller &
smaller & the mean value <n1> gets larger & larger.
• If N is VERY, VERY large, we can treat W(n1) as a
continuous function of a continuous variable n1.
• For N large, it’s convenient to look at the natural log
ln[W(n1)] of W(n1), rather than the function itself.
• Do a Taylor’s series expansion of ln[W(n1)]
about value of n1 where W(n1) has a maximum.
• Detailed math (in the text) shows that this value of n1 is its average value <n1> = Np.
• It also shows that the width of the peak is equal to the dispersion <(Δn1)²> = Npq.
N VERY, VERY Large!!
• Taylor’s series expansion of ln[W(n1)] about the n1 for which W(n1) is a maximum. The math (see the book) shows that W(n1) is a maximum at n1 = <n1> = Np, & that its width is <(Δn1)²> = Npq.
• For ln[W(n1)], use Stirling’s Approximation (Appendix A-6) for logs of large factorials.
Stirling’s Approximation
• If N is a large integer, the natural log of its factorial is approximately:
$$\ln(N!) \approx N[\ln(N) - 1] + \tfrac{1}{2}\ln(2\pi N) \qquad (1)$$
• But, if N is VERY, VERY large, N >> ln(N), so we can neglect the last term in (1). So, in our case, we will use:
$$\ln(N!) \approx N[\ln(N) - 1]$$
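A quick numerical comparison (my own check, not part of the text) of the exact ln(N!) with the two forms of Stirling's approximation; math.lgamma(N + 1) gives ln(N!) directly.

```python
# Compare ln(N!) with Stirling's approximation, with and without the (1/2)ln(2πN) term.
import math

for N in (10, 100, 10_000, 1_000_000):
    exact = math.lgamma(N + 1)                                        # ln(N!)
    full = N * (math.log(N) - 1) + 0.5 * math.log(2 * math.pi * N)    # Eq. (1)
    crude = N * (math.log(N) - 1)                                     # N >> ln(N) form
    print(N, exact, full, crude)
# The relative error of the crude form -> 0 as N grows, since N >> ln(N).
```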
• In this large N, large n1 limit, the Binomial Distribution W(n1) becomes (shown in detail in the text):
$$W(n_1) = \hat{W}\exp\!\left[-\frac{(n_1 - \langle n_1\rangle)^2}{2\langle(\Delta n_1)^2\rangle}\right], \qquad \text{where}\quad \hat{W} = \left[2\pi\langle(\Delta n_1)^2\rangle\right]^{-1/2}$$
• This is called the Gaussian Distribution or the Normal Distribution. We’ve found that <n1> = Np & <(Δn1)²> = Npq.
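A short sketch comparing the exact binomial W(n1) with this Gaussian limit; the choice N = 400 and the SciPy calls are my own illustration, not from the text.

```python
# Compare the binomial pmf with its Gaussian (normal) approximation for large N.
import numpy as np
from scipy.stats import binom

N, p = 400, 0.5
q = 1 - p
mean, var = N * p, N * p * q

n1 = np.arange(N + 1)
W_binom = binom.pmf(n1, N, p)
W_gauss = (2 * np.pi * var) ** -0.5 * np.exp(-(n1 - mean) ** 2 / (2 * var))

print(W_binom.max(), W_gauss.max())      # nearly identical peak values
print(np.abs(W_binom - W_gauss).max())   # small compared to the peak
```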
• The reasoning which led to this result in the large-N, continuous-n1 limit started with the Binomial Distribution. But this is a very general result: starting with ANY discrete probability distribution & taking the limit of LARGE N results in the Gaussian or Normal Distribution. This is called
The Central Limit Theorem
or The Law of Large Numbers.
One of the most important results of
probability theory is
The Central Limit Theorem:
• The distribution of any random
phenomenon tends to be Gaussian or
Normal if we average it over a large
number of independent repetitions.
• This theorem allows us to analyze and
predict the results of chance phenomena
when we average over many observations.
Related to the Central Limit Theorem is
The Law of Large Numbers:
• If a random phenomenon is repeated a large number of times,
The proportion of trials on which each
outcome occurs gets closer and closer
to the probability of that outcome,
and
The mean of the observed values gets closer & closer to the mean μ of a Gaussian Distribution which describes the data.
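A small simulation illustrating both statements; the coin-flip setup and NumPy usage are my own example, not taken from the text.

```python
# Average many independent coin flips (p = 1/2) repeatedly: the sample means
# cluster around the true probability, with a Gaussian-like spread ~ sqrt(pq/n).
import numpy as np

rng = np.random.default_rng(0)
n_flips, n_repeats = 1000, 5000
flips = rng.integers(0, 2, size=(n_repeats, n_flips))   # each row = one experiment

means = flips.mean(axis=1)
print(means.mean())     # -> close to 0.5 (Law of Large Numbers)
print(means.std())      # -> close to sqrt(0.25/1000) ≈ 0.0158 (Gaussian width)
```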
Sect. 1.6: The Gaussian Probability Distribution
• In the limit of a large number of steps in the random
walk, N (>>1), the Binomial Distribution becomes a
Gaussian Distribution:
$$W(n_1) = \left[2\pi\langle(\Delta n_1)^2\rangle\right]^{-1/2}\exp\!\left[-\frac{(n_1 - \langle n_1\rangle)^2}{2\langle(\Delta n_1)^2\rangle}\right], \qquad \langle n_1\rangle = Np, \quad \langle(\Delta n_1)^2\rangle = Npq$$
• Recall that n1 = ½(N + m), where the displacement x = mℓ, & that <m> = N(p – q). We can use this to convert to the probability distribution for the displacement number m, in the large N limit (after algebra):
$$P(m) = \left[2\pi\sigma^2\right]^{-1/2}\exp\!\left[-\frac{(m-\mu)^2}{2\sigma^2}\right], \qquad \mu = \langle m\rangle = N(p-q),\quad \sigma^2 = \langle(\Delta m)^2\rangle = 4Npq$$
We can express this in terms of x = mℓ. When N >> 1, x can be treated as continuous. In this case, |P(m+2) – P(m)| << P(m) & the discrete values of P(m) get closer & closer together.
• Now, ask: What is the probability that, after N steps, the particle is in the range x to x + dx?
• Let the probability distribution for this ≡ P(x).
• Then, we have: P(x)dx = (½)P(m)(dx/ℓ).
• The range dx contains (½)(dx/ℓ) possible values of m, since the smallest possible dx is dx = 2ℓ.
• After some math, we obtain the standard form of the Gaussian (Normal) Distribution:
$$P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx$$
μ ≡ N(p – q)ℓ ≡ mean value of x
σ ≡ 2ℓ(Npq)^½ ≡ width of the distribution
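A minimal sketch evaluating this standard form for one particular walk; the parameter values below are my own illustration, not from the text.

```python
# Gaussian P(x) for the random walk: mu = N(p - q)*l, sigma = 2*l*sqrt(N*p*q).
import numpy as np

N, p, ell = 10_000, 0.6, 1.0
q = 1 - p
mu = N * (p - q) * ell                 # mean displacement
sigma = 2 * ell * np.sqrt(N * p * q)   # width of the distribution

def P(x):
    return (2 * np.pi) ** -0.5 / sigma * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

print(mu, sigma, P(mu))                # peak value = 1/(sigma*sqrt(2*pi))
```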
NOTE: The generality of the arguments we’ve used is such that a Gaussian Distribution occurs in the limit of large numbers for all discrete distributions!
$$P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx, \qquad \mu \equiv N(p-q)\ell, \quad \sigma \equiv 2\ell(Npq)^{1/2}$$
• Note: To deal with Gaussian distributions, you need to get used to doing integrals with them! Many are tabulated!!
• Is P(x) properly normalized? That is, does $\int_{-\infty}^{\infty} P(x)\,dx = 1$?
$$\int_{-\infty}^{\infty} P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\int_{-\infty}^{\infty}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx$$
$$= (2\pi)^{-1/2}\,\sigma^{-1}\int_{-\infty}^{\infty}\exp\!\left[-\frac{y^2}{2\sigma^2}\right]dy \qquad (y = x - \mu)$$
$$= (2\pi)^{-1/2}\,\sigma^{-1}\left[(2\pi)^{1/2}\sigma\right] \qquad \text{(from a table)}$$
$$\Rightarrow\ \int_{-\infty}^{\infty} P(x)\,dx = 1$$
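The same normalization can be confirmed numerically; the illustrative μ, σ values and the scipy.integrate.quad call below are my own check, not part of the text.

```python
# Numerically verify that the Gaussian P(x) integrates to 1 over (-inf, inf).
import numpy as np
from scipy.integrate import quad

mu, sigma = 5.0, 2.0                   # illustrative values
P = lambda x: (2 * np.pi) ** -0.5 / sigma * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

norm, err = quad(P, -np.inf, np.inf)
print(norm)                            # -> 1.0 to numerical precision
```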
$$P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx, \qquad \mu \equiv N(p-q)\ell, \quad \sigma \equiv 2\ell(Npq)^{1/2}$$
• Compute the mean value of x, <x> (limits –∞ < x < ∞):
$$\langle x\rangle = \int_{-\infty}^{\infty} x\,P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\int_{-\infty}^{\infty} x\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx$$
$$= (2\pi)^{-1/2}\,\sigma^{-1}\int_{-\infty}^{\infty}(y+\mu)\exp\!\left[-\frac{y^2}{2\sigma^2}\right]dy \qquad (y = x - \mu)$$
$$= (2\pi)^{-1/2}\,\sigma^{-1}\left[\int_{-\infty}^{\infty} y\exp\!\left(-\frac{y^2}{2\sigma^2}\right)dy + \mu\int_{-\infty}^{\infty}\exp\!\left(-\frac{y^2}{2\sigma^2}\right)dy\right]$$
$$\int_{-\infty}^{\infty} y\exp\!\left(-\frac{y^2}{2\sigma^2}\right)dy = 0 \qquad \text{(odd function times even function)}$$
$$\int_{-\infty}^{\infty}\exp\!\left(-\frac{y^2}{2\sigma^2}\right)dy = (2\pi)^{1/2}\sigma \qquad \text{(from a table)}$$
$$\Rightarrow\ \langle x\rangle = \mu \equiv N(p-q)\ell$$
$$P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx, \qquad \mu \equiv N(p-q)\ell, \quad \sigma \equiv 2\ell(Npq)^{1/2}$$
• Compute the dispersion in x, <(Δx)²> (limits –∞ < x < ∞):
$$\langle(\Delta x)^2\rangle = \langle(x-\mu)^2\rangle = \int_{-\infty}^{\infty}(x-\mu)^2 P(x)\,dx = (2\pi)^{-1/2}\,\sigma^{-1}\int_{-\infty}^{\infty}(x-\mu)^2\exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]dx$$
$$= (2\pi)^{-1/2}\,\sigma^{-1}\int_{-\infty}^{\infty} y^2\exp\!\left(-\frac{y^2}{2\sigma^2}\right)dy \qquad (y = x - \mu)$$
$$= (2\pi)^{-1/2}\,\sigma^{-1}\,\tfrac{1}{2}\,\pi^{1/2}\,(2\sigma^2)^{3/2} \qquad \text{(from a table)}$$
$$\Rightarrow\ \langle(\Delta x)^2\rangle = \sigma^2 = 4Npq\ell^2$$
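Both results (<x> = μ and <(Δx)²> = σ²) can be checked numerically in the same way; again, the values and quad calls are my own illustration, not from the text.

```python
# Numerically verify <x> = mu and <(Δx)^2> = sigma^2 for the Gaussian P(x).
import numpy as np
from scipy.integrate import quad

mu, sigma = 5.0, 2.0                   # illustrative values
P = lambda x: (2 * np.pi) ** -0.5 / sigma * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

mean, _ = quad(lambda x: x * P(x), -np.inf, np.inf)
disp, _ = quad(lambda x: (x - mu) ** 2 * P(x), -np.inf, np.inf)
print(mean, disp)                      # -> 5.0 and 4.0
```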
Comparison of Binomial & Gaussian Distributions
(Figure: dots = Binomial, curve = Gaussian, with the same mean μ & the same width σ.)
Comparison of Binomial & Gaussian Distributions
(Figure: similar information as in the previous figure. Blue histogram = Binomial, curve = Gaussian, with the same mean μ & the same width σ.)
Some Well-known & Potentially Useful Properties of Gaussians
(Figure: a Gaussian P(x), with its width 2σ marked.)
Areas Under Portions of a Gaussian Distribution
(Two figures presenting the same information in different forms.)
Sect. 1.7: Probability Distributions Involving
Several Variables: Discrete or Continuous
• Consider a statistical description of a
situation with more than one discrete
random variable:
Example, 2 variables, u, v
• The possible values of u are: u1, u2, u3, … uM
• The possible values of v are: v1, v2, v3, … vN
P(ui,vj) ≡ Probability that u = ui & v = vj SIMULTANEOUSLY
• We must have:
$$\sum_{i=1}^{M}\sum_{j=1}^{N} P(u_i,v_j) = 1$$
• Let Pu(ui) ≡ Probability that u = ui, independent of the value of v. So,
$$P_u(u_i) \equiv \sum_{j=1}^{N} P(u_i,v_j)$$
• Similarly, let Pv(vj) ≡ Probability that v = vj, independent of the value of u. So,
$$P_v(v_j) \equiv \sum_{i=1}^{M} P(u_i,v_j)$$
• Of course, it must also be true that
$$\sum_{i=1}^{M} P_u(u_i) = 1 \qquad \& \qquad \sum_{j=1}^{N} P_v(v_j) = 1$$
In the special case that u & v are
Statistically Independent
or Uncorrelated:
Then & only then can we write:
$$P(u_i,v_j) \equiv P_u(u_i)\,P_v(v_j)$$
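A small numerical sketch of these definitions; the particular 2×3 table of probabilities is my own made-up example, not from the text.

```python
# Joint distribution P(u_i, v_j) as a matrix: rows = u values (M = 2),
# columns = v values (N = 3). Marginals and the independence (factorization) test.
import numpy as np

P = np.array([[0.10, 0.20, 0.10],
              [0.20, 0.10, 0.30]])
assert np.isclose(P.sum(), 1.0)              # sum_i sum_j P(u_i, v_j) = 1

P_u = P.sum(axis=1)                          # P_u(u_i) = sum_j P(u_i, v_j)
P_v = P.sum(axis=0)                          # P_v(v_j) = sum_i P(u_i, v_j)
print(P_u, P_u.sum())                        # marginal in u, sums to 1
print(P_v, P_v.sum())                        # marginal in v, sums to 1

# u & v are statistically independent only if P factorizes into P_u * P_v:
print(np.allclose(P, np.outer(P_u, P_v)))    # False for this (correlated) example
```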
A General Discussion of Mean Values
• If F(u,v) is any function of u & v, its mean value is:
$$\langle F(u,v)\rangle \equiv \sum_{i=1}^{M}\sum_{j=1}^{N} P(u_i,v_j)\,F(u_i,v_j)$$
• If F(u,v) & G(u,v) are any 2 functions of u, v, we can easily show:
$$\langle F(u,v) + G(u,v)\rangle = \langle F(u,v)\rangle + \langle G(u,v)\rangle$$
• If f(u) is any function of u & g(v) is any function of v, then in general:
$$\langle f(u)\,g(v)\rangle \neq \langle f(u)\rangle\,\langle g(v)\rangle$$
• Equality holds (for arbitrary f & g) only when u & v are statistically independent.
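A continuation of the sketch above (same made-up joint table, my own example), showing that <F + G> = <F> + <G> always holds, while <f(u)g(v)> = <f(u)><g(v)> fails for correlated u, v.

```python
# Mean values over a joint discrete distribution, and the product-of-means test.
import numpy as np

u = np.array([1.0, 2.0])                     # possible u values
v = np.array([0.0, 1.0, 2.0])                # possible v values
P = np.array([[0.10, 0.20, 0.10],            # same correlated joint table as above
              [0.20, 0.10, 0.30]])

U, V = np.meshgrid(u, v, indexing="ij")
mean = lambda F: np.sum(P * F)               # <F(u,v)> = sum_ij P(u_i,v_j) F(u_i,v_j)

print(mean(U + V), mean(U) + mean(V))        # equal: 2.7 and 2.7
print(mean(U * V), mean(U) * mean(V))        # unequal: 1.8 vs 1.76 (u, v correlated)
```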
Sect. 1.8: Comments on Continuous
Probability Distributions
• Everything we’ve discussed for discrete distributions
generalizes to continuous distributions in obvious ways.
• Let u ≡ a continuous random variable in the range:
a1 ≤ u ≤ a2
• The probability of finding u in the range u to u + du ≡ P(u)du, where
P(u) ≡ the Probability Density of the distribution
• Normalization: $\int_{a_1}^{a_2} P(u)\,du = 1$
• Mean values: $\langle F(u)\rangle \equiv \int_{a_1}^{a_2} F(u)\,P(u)\,du$
• Consider two continuous random variables:
u ≡ continuous random variable in the range: a1 ≤ u ≤ a2
v ≡ continuous random variable in the range: b1 ≤ v ≤ b2
• The probability of finding u in the range u to u + du AND v in the range v to v + dv ≡ P(u,v)dudv, where
P(u,v) ≡ the Probability Density function
• Normalization: $\int_{a_1}^{a_2}\!\int_{b_1}^{b_2} P(u,v)\,du\,dv = 1$
• Mean values: $\langle G(u,v)\rangle \equiv \int_{a_1}^{a_2}\!\int_{b_1}^{b_2} G(u,v)\,P(u,v)\,du\,dv$
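A short sketch of the continuous-variable version using SciPy's dblquad; the density P(u, v) = u + v on the unit square is my own illustrative choice, not from the text.

```python
# Normalization and a mean value for a 2-variable continuous probability density.
import numpy as np
from scipy.integrate import dblquad

a1, a2, b1, b2 = 0.0, 1.0, 0.0, 1.0
P = lambda u, v: u + v                       # already normalized on this square

# dblquad integrates f(y, x) with y as the inner variable; here y = v, x = u.
norm, _ = dblquad(lambda v, u: P(u, v), a1, a2, b1, b2)
mean_uv, _ = dblquad(lambda v, u: u * v * P(u, v), a1, a2, b1, b2)
print(norm)                                  # -> 1.0
print(mean_uv)                               # <uv> = 1/3 for this density
```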
Functions of Random Variables
An important, often occurring problem is:
• Consider a random variable u.
• Suppose φ(u) ≡ any continuous function of u.
Question
• If P(u)du ≡ Probability of finding u in the range
u to u + du, what is the probability W(φ)dφ of
finding φ in the range φ to φ + dφ?
• Answer using essentially the “Chain Rule” of
differentiation, but take the absolute value to make sure
that probability W ≥ 0:
W(φ)dφ ≡ P(u)|du/dφ|dφ
Caution!!
φ(u) may not be a single valued function of u!
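A Monte Carlo sanity check of this rule for a simple, single-valued case (so the caution above doesn't bite); the choice φ(u) = exp(u) with u uniform on (0, 1) is my own example, not from the text.

```python
# u uniform on (0,1) has P(u) = 1; with phi = exp(u), |du/dphi| = 1/phi,
# so W(phi) = 1/phi on 1 <= phi <= e. Compare this to a histogram of samples.
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(1_000_000)
phi = np.exp(u)

counts, edges = np.histogram(phi, bins=40, range=(1.0, np.e), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
W_theory = 1.0 / centers
print(np.max(np.abs(counts / W_theory - 1.0)))   # percent-level statistical scatter
```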
Example
Reif’s book, page 31.
Vector B of constant length & random direction θ (in two dimensions). All values of θ are equally likely (equally probable).
• Equally likely ⇒ the probability of finding θ between θ & θ + dθ is:
$$P(\theta)\,d\theta \equiv \frac{d\theta}{2\pi}$$
Question
• What is the probability W(Bx)dBx that the x component
of B lies between Bx & Bx + dBx?
• Clearly, we must have –B ≤ Bx ≤ B. Also, each value of Bx corresponds to 2 possible values of θ, and
$$|dB_x| = B\,|\sin\theta|\,d\theta$$
• So, we have:
$$W(B_x)\,dB_x = 2P(\theta)\left|\frac{d\theta}{dB_x}\right|dB_x = \frac{dB_x}{\pi B\,|\sin\theta|}$$
Note also that $|\sin\theta| = [1 - \cos^2\theta]^{1/2} = [1 - B_x^2/B^2]^{1/2}$, so finally,
$$W(B_x)\,dB_x = \frac{dB_x}{\pi B\,[1 - B_x^2/B^2]^{1/2}}, \quad -B \le B_x \le B; \qquad = 0 \ \text{otherwise}$$
• W not only has a maximum at Bx = ±B, it diverges there! It has a minimum at Bx = 0.
(Figure: sketch of W(Bx) vs. Bx, showing the minimum at 0 & the divergences at ±B.)
• W diverges at Bx = ±B, but its integral is finite, so W(Bx) is a properly normalized probability:
$$\int_{-B}^{B} W(B_x)\,dB_x = 1$$