Mean or Average Values


Quantum One: Lecture 24
Consequences of the Measurement Postulate
In the last lecture, we stated the 3rd postulate in a form that takes into account
the possible degeneracy of the eigenvalues of an observable being measured.
In any measurement of an observable 𝐴, the only values that can be obtained are
the eigenvalues of that observable. The 3rd postulate gives an expression for the
probability, or probability density, of obtaining an eigenvalue in the discrete or
continuous part of the spectrum. Depending on whether the index labeling the
degeneracy is discrete or continuous, these can take the following forms:
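The accompanying slide equations were lost in transcription; in standard notation, with discrete eigenvalues π‘Ž, continuous eigenvalues 𝛼, and a degeneracy index 𝑛 (discrete) or 𝜏 (continuous), the forms referred to would presumably read:

```latex
% Discrete part of the spectrum (degeneracy index n discrete, or tau continuous):
P(a) = \sum_{n} |\langle a, n | \psi \rangle|^2
\qquad\text{or}\qquad
P(a) = \int d\tau\, |\langle a, \tau | \psi \rangle|^2
% Continuous part of the spectrum (probability density):
\rho(\alpha) = \sum_{n} |\langle \alpha, n | \psi \rangle|^2
\qquad\text{or}\qquad
\rho(\alpha) = \int d\tau\, |\langle \alpha, \tau | \psi \rangle|^2
```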
The second part of the third postulate addresses what happens to the state of the
system during a measurement of 𝐴 that yields one of the eigenvalues of that
observable.
In an ideal measurement, the state non-deterministically collapses, i.e., is
projected, onto the eigenspace 𝑆_π‘Ž of the observable associated with that
eigenvalue.
Let’s look at an example of an application of the first part of the measurement
postulate.
Consider a measurement of just the π‘₯ coordinate of a particle moving in three
dimensions.
The position eigenstates |π‘₯, 𝑦, 𝑧βŒͺ form a basis of eigenstates of the
operator 𝑋, but are not completely labeled by the associated (continuous)
eigenvalue π‘₯.
Indeed, there is a continuous infinity of states having the same value of π‘₯, but
different values of 𝑦 and 𝑧.
Thus, in this example the continuous index 𝜏 is actually a vector in 𝑅² denoting
the coordinates (𝑦, 𝑧) in the 𝑦𝑧 plane.
So in general the degeneracy index 𝜏 can actually have multiple components,
perhaps arising from eigenvalues of other observables that commute with the
one being measured.
At any rate, in this example, the projector density associated with measurements
of the coordinate π‘₯ is
The associated probability density that a measurement of 𝑋 will yield the value π‘₯
is the expectation value of this, which we can write
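The slide equations at this point did not survive transcription; a standard reconstruction, consistent with the degeneracy index 𝜏 = (𝑦, 𝑧) identified above, would be:

```latex
P_x = \int dy\, dz\; |x, y, z\rangle\langle x, y, z| ,
\qquad
\rho(x) = \langle \psi | P_x | \psi \rangle
        = \int dy\, dz\; |\psi(x, y, z)|^2 .
```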
We now consider some consequences of the measurement postulate.
First, we note that the probability that one actually obtains some value in a
measurement is guaranteed to be unity by the mathematical structure of the theory.
In other words, 𝐴 is an observable, so it has an ONB of eigenstates.
This means that
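The equation itself is missing from the transcript; presumably the statement is that the probabilities sum to unity, which for a normalized state |πœ“βŒͺ follows from completeness of the ONB of eigenstates:

```latex
\sum_a P(a) = \sum_a \sum_n |\langle a, n | \psi \rangle|^2
            = \langle \psi | \psi \rangle = 1 .
```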
Mean or Average Values
Given that the predictions of the postulates are statistical in nature, there are
a number of statistical properties of the measurement process that can be useful
to evaluate, and which can be obtained without actually solving the eigenvalue
problem.
Consider, e.g., an ensemble of 𝑁 identically prepared systems (with 𝑁 ≫ 1), all in
the same quantum mechanical state |πœ“βŒͺ.
If the same observable 𝐴 is measured on each member of this ensemble, the
results will be a collection of 𝑁 values (all being eigenvalues of 𝐴). Suppose
that in this set, the eigenvalue
π‘Ž occurs 𝑁_π‘Ž times,
π‘Žβ€² occurs 𝑁_{π‘Žβ€²} times,
π‘Žβ€³ occurs 𝑁_{π‘Žβ€³} times, and so on.
Then the mean or arithmetic average of the values obtained will be
Thus, it is just the first moment of this discrete probability distribution.
This mean value, it should be emphasized, may not actually coincide with any of
the measurements actually performed, but it does give some statistical
information about the underlying probability distribution.
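The slide's equation is missing; from the counts 𝑁_π‘Ž defined above, the arithmetic average presumably takes the standard form:

```latex
\langle A \rangle
  = \frac{1}{N}\left( N_a\, a + N_{a'}\, a' + N_{a''}\, a'' + \cdots \right)
  = \sum_a \frac{N_a}{N}\, a
  \;\xrightarrow[\;N \to \infty\;]{}\; \sum_a P(a)\, a .
```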
To express the mean value in terms of the state |πœ“βŒͺ, we proceed as follows:
We thus have the simple relation that 〈𝐴βŒͺ = βŒ©πœ“|𝐴|πœ“βŒͺ.
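The intermediate steps are missing from the transcript; they can be reconstructed under the standard assumption that 𝐴 has the spectral decomposition over its eigenprojectors:

```latex
\langle A \rangle = \sum_a a\, P(a)
  = \sum_a a\, \langle \psi | P_a | \psi \rangle
  = \Big\langle \psi \,\Big|\, \sum_a a\, P_a \,\Big|\, \psi \Big\rangle
  = \langle \psi | A | \psi \rangle ,
\qquad\text{using}\quad A = \sum_a a\, P_a .
```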
mean value = expectation value
Note: To compute the mean value, we do not have to solve the eigenvalue
equation.
We can compute it in any representation that we happen to have available.
Thus, for any ONB of states {|nβŒͺ}, if we know the expansion coefficients
ψ_{n} = 〈n|ψβŒͺ for the state, and the matrix elements A_{nnβ€²} = 〈n|A|nβ€²βŒͺ for the
observable, we can compute the mean value as
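In this notation the missing expression on the slide would presumably be the double sum:

```latex
\langle A \rangle = \langle \psi | A | \psi \rangle
  = \sum_{n, n'} \psi_n^{*}\, A_{nn'}\, \psi_{n'} .
```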
Thus, if we have the column vector, row vector, and matrix associated with the
ket, the bra, and the operator, we can perform the matrix operation
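As an illustration of this matrix operation, here is a minimal sketch with a made-up two-level observable and state (not taken from the lecture):

```python
import numpy as np

# Hypothetical observable A: a Hermitian 2x2 matrix in some ONB {|1>, |2>}.
A = np.array([[1.0, 0.5j],
              [-0.5j, 2.0]])               # Hermitian: A == A.conj().T

# Normalized expansion coefficients of the state |psi> in the same ONB.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Mean value <A> = (row vector) x (matrix) x (column vector) = psi^dagger A psi
mean_A = psi.conj() @ A @ psi

print(mean_A.real)  # the imaginary part vanishes for a Hermitian observable
```

Because 𝐴 is Hermitian, the result is always real, whatever basis the column vector, row vector, and matrix happen to be expressed in.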
So let's work through how to obtain some of the more common expectation
values associated with a single particle. In the position representation, we have
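The slide equations are missing; for a single particle with wave function ψ(𝐫) = ⟨𝐫|ψ⟩, the standard position-representation expressions presumably intended are:

```latex
\langle X \rangle = \int d^3 r\; x\, |\psi(\vec r\,)|^2 ,
\qquad
\langle P_x \rangle = \int d^3 r\; \psi^{*}(\vec r\,)
    \left( -i\hbar \frac{\partial}{\partial x} \right) \psi(\vec r\,) ,
```

with analogous expressions for the other components.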
And in the wavevector or momentum representation, we have
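Again the equations are missing; with ψ̃(𝐀) the wavevector-space wave function, the standard counterparts would presumably be:

```latex
\langle P_x \rangle = \int d^3 k\; \hbar k_x\, |\tilde\psi(\vec k\,)|^2 ,
\qquad
\langle X \rangle = \int d^3 k\; \tilde\psi^{*}(\vec k\,)
    \left( i \frac{\partial}{\partial k_x} \right) \tilde\psi(\vec k\,) .
```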
Statistical Uncertainty
The mean value of an observable tells us roughly where we can expect the
majority of the values obtained in an ensemble of measurements to be clustered.
It tells us nothing, however, about how large a region around the mean value we
can expect these values to span.
It is useful to have a measure of this statistical spread, which reflects the intrinsic
quantum mechanical uncertainty associated with the measurement process.
One useful measure of this dispersion is the root-mean-square deviation
of a series of measurements on the same state from their average value
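In symbols, the root-mean-square deviation referred to is:

```latex
\Delta A = \sqrt{\big\langle \left( A - \langle A \rangle \right)^2 \big\rangle } .
```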
We can write this in an equivalent and sometimes more useful form by expanding
the quadratic
〈(𝐴 βˆ’ 〈𝐴βŒͺ)²βŒͺ = 〈𝐴² βˆ’ 2𝐴〈𝐴βŒͺ + 〈𝐴βŒͺ²βŒͺ
Since 〈𝐴βŒͺ is a constant, this reduces to
〈(𝐴 βˆ’ 〈𝐴βŒͺ)²βŒͺ = 〈𝐴²βŒͺ βˆ’ 2〈𝐴βŒͺ² + 〈𝐴βŒͺ² = 〈𝐴²βŒͺ βˆ’ 〈𝐴βŒͺ²
so that Δ𝐴 = (〈𝐴²βŒͺ βˆ’ 〈𝐴βŒͺ²)^{1/2}.
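As a numerical sanity check of this identity (an illustrative Python sketch with made-up eigenvalues and probabilities, not from the lecture):

```python
# Numerical check of the identity <(A - <A>)^2> = <A^2> - <A>^2
# for a discrete probability distribution. The eigenvalues and
# probabilities below are made up purely for illustration.

values = [1.0, 2.0, 5.0]   # hypothetical eigenvalues a of A
probs = [0.5, 0.3, 0.2]    # hypothetical probabilities P(a), summing to 1

mean = sum(p * a for p, a in zip(probs, values))        # <A>
mean_sq = sum(p * a**2 for p, a in zip(probs, values))  # <A^2>

# Variance computed directly from the squared deviations...
var_direct = sum(p * (a - mean)**2 for p, a in zip(probs, values))
# ...and from the equivalent form <A^2> - <A>^2.
var_short = mean_sq - mean**2

print(abs(var_direct - var_short) < 1e-12)  # True
```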
Note that if the system is in a normalized eigenstate |π‘ŽβŒͺ of 𝐴 then
〈𝐴ⁿβŒͺ = βŒ©π‘Ž|𝐴ⁿ|π‘ŽβŒͺ = π‘ŽβΏβŒ©π‘Ž|π‘ŽβŒͺ = π‘ŽβΏ,
in which case Δ𝐴 = (〈𝐴²βŒͺ βˆ’ 〈𝐴βŒͺ²)^{1/2} = (π‘Ž² βˆ’ π‘Ž²)^{1/2} = 0,
so there is no uncertainty when the system is in an eigenstate of the operator, as
we have repeatedly asserted.
Thus, in a statistical sense, the uncertainty in an observable associated with a
given quantum state |πœ“βŒͺ is a measure of the extent to which the system in this
state can be said to actually possess a value of the associated observable.
Joint or Simultaneous Uncertainty
It is interesting to ask about the possibility of simultaneously reducing the
uncertainty associated with two different observables 𝐴 and 𝐡, say.
We know, for example, that if 𝐡 is an observable which commutes with 𝐴, then it
is possible to find simultaneous eigenstates |π‘Ž, 𝑏βŒͺ of both observables.
From above, for such a state the uncertainty in both observables will vanish.
If, however, 𝐡 is an operator which does not commute with 𝐴, then there need
be no simultaneous eigenstates (although a few may exist, there will not generally
exist a common basis of eigenstates).
Thus, under these circumstances, it is not generally possible to simultaneously
reduce to zero the statistical uncertainties associated with measurements of both
observables on a given quantum state.
95
The Uncertainty Principle
There turns out to be a precise statement which can be made about the so-called
uncertainty product
Δ𝐴 Δ𝐡,
computed for any normalized state of the system.
This product is clearly a measure of the joint uncertainty associated with these
two observables.
The Uncertainty Theorem: In any quantum state |πœ“βŒͺ, the joint uncertainty in the
values of two observables 𝐴 and 𝐡 as measured through the uncertainty product
Δ𝐴 Δ𝐡 is bounded from below by the relation
Δ𝐴 Δ𝐡 β‰₯ Β½ |〈[𝐴, 𝐡]βŒͺ|,
with all expectation values taken with respect to the same quantum state |πœ“βŒͺ.
100
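The theorem can be checked numerically before we prove it. The sketch below (an illustration using NumPy; the random matrices and state are arbitrary stand-ins, not from the lecture) draws two random Hermitian observables and a random normalized state, and confirms that the uncertainty product never drops below Β½|〈[𝐴, 𝐡]βŒͺ|.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n, rng):
    """Random n x n Hermitian matrix, standing in for an observable."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def uncertainty(op, psi):
    """RMS uncertainty sqrt(<op^2> - <op>^2) in the normalized state psi."""
    mean = np.vdot(psi, op @ psi).real
    mean2 = np.vdot(psi, op @ (op @ psi)).real
    return np.sqrt(max(mean2 - mean**2, 0.0))

n = 4
A = random_hermitian(n, rng)
B = random_hermitian(n, rng)

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = psi / np.linalg.norm(psi)

lhs = uncertainty(A, psi) * uncertainty(B, psi)        # uncertainty product
rhs = 0.5 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))   # (1/2)|<[A, B]>|
```

Re-running with different seeds always gives `lhs >= rhs`, as the theorem guarantees for any state and any pair of observables.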
The Uncertainty Principle
To prove the uncertainty theorem we introduce shifted operators
𝐴′ = 𝐴 βˆ’ 〈𝐴βŒͺ,  𝐡′ = 𝐡 βˆ’ 〈𝐡βŒͺ,
which are just like the originals, except that they have zero mean value with
respect to the state |πœ“βŒͺ, i.e.,
βŒ©π΄β€²βŒͺ = 〈𝐴βŒͺ βˆ’ 〈𝐴βŒͺ = 0,  βŒ©π΅β€²βŒͺ = 〈𝐡βŒͺ βˆ’ 〈𝐡βŒͺ = 0.
These operators obey the following relationships, as is readily verified:
βŒ©π΄β€²Β²βŒͺ = 〈(𝐴 βˆ’ 〈𝐴βŒͺ)²βŒͺ = 〈𝐴²βŒͺ βˆ’ 〈𝐴βŒͺ²,  βŒ©π΅β€²Β²βŒͺ = 〈𝐡²βŒͺ βˆ’ 〈𝐡βŒͺ²,
i.e.,
Δ𝐴′ = Δ𝐴,  Δ𝐡′ = Δ𝐡.
103
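The properties of the shifted operators are straightforward to verify numerically. In the sketch below (illustrative NumPy code with arbitrary matrix and state values), subtracting the mean leaves βŒ©π΄β€²βŒͺ = 0 while the RMS uncertainty is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary real symmetric (hence Hermitian) matrix standing in for A,
# and an arbitrary normalized state |psi> (illustrative values only).
A = rng.normal(size=(3, 3))
A = (A + A.T) / 2
psi = rng.normal(size=3)
psi = psi / np.linalg.norm(psi)

def uncertainty(op, v):
    m = np.vdot(v, op @ v).real
    m2 = np.vdot(v, op @ (op @ v)).real
    return np.sqrt(max(m2 - m**2, 0.0))

mean_A = np.vdot(psi, A @ psi).real
A_shift = A - mean_A * np.eye(3)                # A' = A - <A>

mean_shifted = np.vdot(psi, A_shift @ psi).real  # <A'> = 0
# Delta A' = Delta A: shifting by a constant does not change the spread.
```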
The Uncertainty Principle
In addition, we readily verify that
[𝐴′, 𝐡′] = [𝐴, 𝐡],
so if we prove the uncertainty relation for the shifted operators 𝐴′ and 𝐡′ we also
prove it for the unshifted operators 𝐴 and 𝐡.
We now set
|π‘₯βŒͺ = 𝐴′|πœ“βŒͺ,  |𝑦βŒͺ = 𝐡′|πœ“βŒͺ,
and apply Schwarz's inequality
〈π‘₯|π‘₯βŒͺβŒ©π‘¦|𝑦βŒͺ β‰₯ 〈π‘₯|𝑦βŒͺβŒ©π‘¦|π‘₯βŒͺ = |〈π‘₯|𝑦βŒͺ|²,
which implies that
(Δ𝐴)²(Δ𝐡)² β‰₯ |βŒ©π΄β€²π΅β€²βŒͺ|².
110
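The Schwarz inequality itself is easily demonstrated for finite-dimensional vectors. The short NumPy sketch below (with arbitrary illustrative vectors) confirms 〈π‘₯|π‘₯βŒͺβŒ©π‘¦|𝑦βŒͺ β‰₯ |〈π‘₯|𝑦βŒͺ|², with equality only when |𝑦βŒͺ is proportional to |π‘₯βŒͺ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two arbitrary complex vectors |x> and |y> (illustrative values only).
x = rng.normal(size=5) + 1j * rng.normal(size=5)
y = rng.normal(size=5) + 1j * rng.normal(size=5)

# np.vdot conjugates its first argument, matching the bra-ket inner product.
lhs = np.vdot(x, x).real * np.vdot(y, y).real   # <x|x><y|y>
rhs = abs(np.vdot(x, y)) ** 2                   # |<x|y>|^2

# Equality check: y proportional to x saturates the bound.
y_parallel = 3.7j * x
lhs_eq = np.vdot(x, x).real * np.vdot(y_parallel, y_parallel).real
rhs_eq = abs(np.vdot(x, y_parallel)) ** 2
```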
The Uncertainty Principle
This is already a useful inequality, but to put it in the standard form, we can
observe that the quantity on the right is the squared modulus of the complex
number
βŒ©π΄β€²π΅β€²βŒͺ = βŒ©πœ“|𝐴′𝐡′|πœ“βŒͺ,
and so is at least as large as the square of just its imaginary part.
The latter we can obtain by taking one-half the difference of this number with its
complex conjugate, so we can write
Im βŒ©π΄β€²π΅β€²βŒͺ = (βŒ©π΄β€²π΅β€²βŒͺ βˆ’ βŒ©π΅β€²π΄β€²βŒͺ)/2𝑖 = 〈[𝐴′, 𝐡′]βŒͺ/2𝑖 = 〈[𝐴, 𝐡]βŒͺ/2𝑖.
Combining this with the result above and taking the square root we obtain
Δ𝐴 Δ𝐡 β‰₯ |Im βŒ©π΄β€²π΅β€²βŒͺ|,
which implies
Δ𝐴 Δ𝐡 β‰₯ Β½ |〈[𝐴, 𝐡]βŒͺ|.
116
Perhaps the most common application of the uncertainty principle is to the
Cartesian components of the position and momentum operator along the same
direction, for which the canonical commutation relations
[π‘₯α΅’, 𝑝ⱼ] = 𝑖ℏ𝛿ᡒⱼ
and the result above imply Heisenberg's uncertainty relation
Δπ‘₯α΅’ Δ𝑝ᡒ β‰₯ Β½ |βŒ©π‘–β„βŒͺ|,
which is equivalent to
Δπ‘₯α΅’ Δ𝑝ᡒ β‰₯ ℏ/2.
122
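A Gaussian wavepacket saturates this bound, which can be verified numerically. The sketch below (an illustration in units where ℏ = 1; the width Οƒ and grid are arbitrary choices) computes Δπ‘₯ from the position-space density and Δ𝑝 from the Fourier transform, and finds Δπ‘₯ Δ𝑝 β‰ˆ ℏ/2.

```python
import numpy as np

hbar = 1.0                                   # work in units where hbar = 1
sigma = 1.3                                  # arbitrary packet width (illustration)

x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

# Gaussian wavepacket psi(x) ~ exp(-x^2 / (4 sigma^2)), normalized on the grid.
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty from <x> and <x^2>.
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
mean_x2 = np.sum(x**2 * prob_x) * dx
dx_unc = np.sqrt(mean_x2 - mean_x**2)        # = sigma for a Gaussian

# Momentum uncertainty from the momentum-space amplitude phi(k) = FFT of psi.
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # angular wavenumbers
dk = 2 * np.pi / (x.size * dx)
phi = np.fft.fft(psi)
phi = phi / np.sqrt(np.sum(np.abs(phi)**2) * dk)  # normalize so sum |phi|^2 dk = 1

prob_k = np.abs(phi)**2
mean_p = hbar * np.sum(k * prob_k) * dk
mean_p2 = hbar**2 * np.sum(k**2 * prob_k) * dk
dp_unc = np.sqrt(mean_p2 - mean_p**2)        # = hbar / (2 sigma) for a Gaussian

product = dx_unc * dp_unc                    # approaches hbar/2
```

Any non-Gaussian state gives a strictly larger product; the Gaussian is the minimum-uncertainty wavepacket.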
In this form, the uncertainty relation shows that, past a certain point, we can
increase our knowledge of a particle's position along a certain direction only if we
are willing to put up with a concomitant loss of information about its momentum
along the same direction, and vice versa.
More generally, we can increase our knowledge of an observable 𝐴 at the expense
of decreasing our knowledge of observables 𝐡 with which 𝐴 does not commute.
On the other hand, the uncertainty principle is also consistent with our
observation that there is no limit to the precision with which we may
simultaneously specify the value of commuting observables.
Commuting observables are, therefore, often referred to as being compatible
observables.
125
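Compatibility is easy to illustrate numerically. In the sketch below (arbitrary illustrative matrices, not from the lecture), two commuting Hermitian observables share a simultaneous eigenstate, in which both uncertainties vanish.

```python
import numpy as np

# Two commuting Hermitian observables: both diagonal in the same basis.
# The eigenvalues are arbitrary illustrative choices.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([5.0, 5.0, 7.0])
# A @ B == B @ A, i.e. [A, B] = 0, so A and B are compatible.

psi = np.array([0.0, 1.0, 0.0])    # simultaneous eigenstate |a=2, b=5>

def uncertainty(op, v):
    m = np.vdot(v, op @ v).real
    m2 = np.vdot(v, op @ (op @ v)).real
    return np.sqrt(max(m2 - m**2, 0.0))

dA = uncertainty(A, psi)           # vanishes: psi is an eigenstate of A
dB = uncertainty(B, psi)           # vanishes: psi is an eigenstate of B
```

For non-commuting observables no such common basis generally exists, and at least one of the two uncertainties must stay nonzero in a generic state.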
In this lecture, we have explored some consequences of the 3rd postulate.
We saw that the average value obtained in a large number of measurements of
an observable 𝐴 on a set of identically prepared states can be simply expressed in
terms of the expectation value of 𝐴 taken with respect to the state on which the
measurements are made.
We also introduced a notion of the statistical width or uncertainty associated
with a quantum mechanical measurement, introducing the RMS uncertainty that
can be computed from the mean value of 𝐴 and the mean value of 𝐴².
We also proved the uncertainty relation for two arbitrary observables, which
indicates the degree to which they can be known simultaneously, and which
reduces to Heisenberg’s uncertainty relation for position and momentum.
In the next lecture, we discuss how an ensemble of identically prepared systems
can actually be generated, and begin a discussion of the final postulate of our
general formulation of quantum mechanics.
129