
Basics of Quantum Theory
Systems and Subsystems
• Intuitively speaking, a physical system consists
of a region of spacetime & all the entities (e.g.
particles & fields) contained within it.
– The universe (over all time) is a physical system
– Transistors, computers, people: also phys. systs.
• One physical system A is a subsystem of
another system B (write A⊆B) iff A is
completely contained within B.
(Diagram: system A nested inside system B.)
• Later, we may try to make these definitions
more formal & precise.
Closed vs. Open Systems
• A subsystem is closed to the extent that no
particles, information, energy, or entropy (terms
to be defined) enter or leave the system.
– The universe is (presumably) a closed system.
– Subsystems of the universe may be almost closed
• Often in physics we consider statements about
closed systems.
– These statements may often be perfectly true only in
a perfectly closed system.
– However, they will often also be approximately true
in any nearly closed system (in a well-defined way)
Concrete vs. Abstract Systems
• Usually, when reasoning about or interacting
with a system, an entity (e.g. a physicist) has in
mind a description of the system.
• A description that contains every property of the
system is an exact or concrete description.
– That system (to the entity) is a concrete system.
• Other descriptions are abstract descriptions.
– The system (as considered by that entity) is an
abstract system, to some degree.
• We nearly always deal with abstract systems!
– Based on the descriptions that are available to us.
System Descriptions
• Classical physics:
– A system could be completely described by giving a
single state S out of the set Σ of all possible states.
• Statistical mechanics:
– Instead, give a probability distribution function
p: Σ → [0,1], stating that the system is in state S with
probability p(S), where ∑_{S∈Σ} p(S) = 1.
• Quantum mechanics:
– Give a complex-valued wavefunction Ψ: Σ → ℂ,
where ∑_{S∈Σ} |Ψ(S)|² = 1, implying the system is in
state S with probability |Ψ(S)|².
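• As a quick numerical aside (not from the original slides; plain NumPy with made-up state names and values), the three kinds of description and their normalization conditions can be contrasted in a few lines:

```python
import numpy as np

# Classical description: one definite state out of the state set.
states = ["s0", "s1", "s2", "s3"]
classical_state = "s2"

# Statistical description: a probability distribution p(S), summing to 1.
p = np.array([0.1, 0.2, 0.3, 0.4])
assert np.isclose(p.sum(), 1.0)

# Quantum description: complex amplitudes Psi(S), with |Psi(S)|^2 summing to 1.
psi = np.array([0.5, 0.5j, -0.5, 0.5j])
probs = np.abs(psi)**2            # Born-rule probabilities
assert np.isclose(probs.sum(), 1.0)
print(dict(zip(states, probs)))   # each state's probability
```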
States & State Spaces
• A possible state S of an abstract system A
(described by a description D) is any concrete
system C that is consistent with D.
– I.e., it is possible that the system in question could
be completely described by the description of C.
• The state space of A is the set of all possible
states of A.
• So far, the concepts we’ve discussed can be
applied to either classical or quantum physics
– Now, let’s get to the uniquely quantum stuff…
Distinguishability of States
• Classical & quantum mechanics differ crucially
regarding the distinguishability of states.
• In classical mechanics, there is no issue:
– Any two states s,t are either the same (s=t), or
different (s≠t), and that’s all there is to it.
• In quantum mechanics (i.e. in reality):
– There are pairs of states s≠t that are mathematically
distinct, but not 100% physically distinguishable.
– Such states cannot be reliably distinguished by any
number of measurements, no matter how precise.
• But you can know the real state (with high probability), if
you prepared the system to be in a certain state.
State Vectors & Hilbert Space
• Let S be any maximal set of distinguishable
possible states s, t, … of an abstract system A.
– I.e., no possible state that is not in S is perfectly
distinguishable from all members of S.
• Identify the elements of S with unit-length,
mutually-orthogonal (basis) vectors in an
abstract complex vector space ℋ.
– The system’s “Hilbert space”
• Postulate 1: Each possible state ψ of
system A can be identified with a unit-length
vector in the Hilbert space ℋ.
(Diagram: mutually orthogonal basis vectors s and t in ℋ.)
(Abstract) Vector Spaces
• A concept from abstract linear algebra.
• A vector space, in the abstract, is any set of
objects that can be combined like vectors, i.e.:
– You can add them
• Addition is associative & commutative
• Identity law holds for addition to zero vector 0
– You can multiply them by scalars (incl. 1)
• Associative, commutative, and distributive laws hold
• Note: There is no inherent basis (set of axes)
– The vectors themselves are the fundamental objects,
rather than being just lists of coordinates
Hilbert spaces
• A Hilbert space ℋ is a vector space in which
the scalars are complex numbers, with an inner
product (dot product) operation ⟨·|·⟩ : ℋ×ℋ → ℂ
– See Hirvensalo p. 107 for defn. of inner product:
• ⟨x|y⟩ = (⟨y|x⟩)*   (* = complex conjugate)
• ⟨x|x⟩ ≥ 0
• ⟨x|x⟩ = 0 if and only if x = 0
• ⟨x|y⟩ is linear in y (and conjugate-linear in x),
under scalar multiplication and vector addition
– Another notation often used: x·y ≡ ⟨x|y⟩   (“bracket”)
(“Component” picture: the projection of y onto x has length ⟨x|y⟩/|x|.)
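• These inner-product axioms are easy to check numerically; a minimal NumPy sketch with arbitrary example vectors (np.vdot conjugates its first argument, matching the physics convention):

```python
import numpy as np

# Two vectors in a 3-dimensional complex Hilbert space (arbitrary example values).
x = np.array([1+1j, 0.5, -2j])
y = np.array([2, 1j, 1-1j])

ip = lambda a, b: np.vdot(a, b)   # <a|b> = sum_i a_i* b_i

assert np.isclose(ip(x, y), np.conj(ip(y, x)))          # <x|y> = <y|x>*
assert ip(x, x).real >= 0 and np.isclose(ip(x, x).imag, 0)  # <x|x> real, non-negative
c = 3 - 2j
assert np.isclose(ip(x, c*y), c * ip(x, y))             # linear in the second argument
assert np.isclose(ip(c*x, y), np.conj(c) * ip(x, y))    # conjugate-linear in the first
```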
Review: The Complex Number System
• It is the extension of the real number system via
closure under exponentiation.
– The “imaginary” unit: i ≡ √(−1)
– c ≡ a + bi   (c ∈ ℂ; a,b ∈ ℝ),  Re[c] ≡ a,  Im[c] ≡ b
• (Complex) conjugate:
c* = (a + bi)* ≡ (a − bi)
• Magnitude or absolute value:
|c| = √(c*c) = √((a − bi)(a + bi)) = √(a² + b²),  so |c|² = c*c = a² + b²
(Diagram: c plotted at (a, b) in the complex plane, “Real” axis through +1/−1, “Imaginary” axis through +i/−i.)
Review: Complex Exponentiation
• Powers of i are complex units:
e^(iθ) = cos θ + i sin θ
• Note:
e^(iπ/2) = i
e^(iπ) = −1
e^(3iπ/2) = −i
e^(2iπ) = e^0 = 1
(Diagram: e^(iθ) on the unit circle through +1, +i, −1, −i.)
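• These identities are easy to verify numerically; a quick NumPy check (the angle is arbitrary, chosen only for illustration):

```python
import numpy as np

# Numerical check of Euler's formula and the special values listed above.
theta = 0.73  # arbitrary angle
assert np.isclose(np.exp(1j*theta), np.cos(theta) + 1j*np.sin(theta))

for k, expected in [(0.5, 1j), (1, -1), (1.5, -1j), (2, 1)]:
    assert np.isclose(np.exp(1j*np.pi*k), expected)
```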
Vector Representation of States
• Let S={s0, s1, …} be any maximal set of
mutually distinguishable states, indexed by i.
• A basis vector vi identified with the ith such
state can be represented as a list of numbers:
(positions s0, s1, s2, …, si−1, si, si+1, …)
vi = (0, 0, 0, …, 0, 1, 0, …)
• Arbitrary vectors v in the Hilbert space ℋ can
then be defined by linear combinations of the vi:
v = ∑_i ci vi = (c0, c1, …)
• And the inner product is given by:
⟨x|y⟩ = ∑_i xi* yi
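• A minimal NumPy illustration of this representation (4 basis states, arbitrary example amplitudes):

```python
import numpy as np

# Basis vectors v_i for a 4-state system: rows of the identity matrix.
v = np.eye(4)

# An arbitrary state as a linear combination of basis vectors.
c = np.array([0.5, 0.5j, 0.5, -0.5j])        # example amplitudes, already normalized
state = sum(ci * vi for ci, vi in zip(c, v))  # v = sum_i c_i v_i
assert np.allclose(state, c)                  # ...which is just the coefficient list

# Inner product <x|y> = sum_i x_i* y_i
x, y = state, v[2]
assert np.isclose(np.sum(np.conj(x) * y), np.vdot(x, y))
```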
Dirac’s Ket Notation
• Note: The inner product definition ⟨x|y⟩ = ∑_i xi* yi
is the same as the matrix product of x, as a
conjugated row vector, times y, as a normal
column vector:
⟨x|y⟩ = [x0* x1* …] (y0, y1, …)ᵀ   (“bracket”)
• This leads to the definition, for a state s = ∑_i ci si, of:
– The “bra” ⟨s| means the row matrix [c0* c1* …]
– The “ket” |s⟩ means the column matrix (c0, c1, …)ᵀ
• The adjoint operator † takes any matrix M
to its conjugate transpose M† ≡ (Mᵀ)*, so
⟨s| can be defined as |s⟩†, and ⟨x|y⟩ = x†y.
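• The bra-as-conjugate-transpose rule translates directly into matrix code; a small sketch with made-up 2-component vectors:

```python
import numpy as np

# Kets as column vectors; bras as their conjugate transposes.
ket_s = np.array([[0.6], [0.8j]])          # example 2-component ket (unit length)
bra_s = ket_s.conj().T                     # <s| = |s>^dagger

ket_t = np.array([[1.0], [0.0]])
amplitude = (bra_s @ ket_t).item()         # <s|t> as a 1x1 matrix product
assert np.isclose(amplitude, np.vdot(ket_s, ket_t))  # same as the component-wise sum
print(amplitude)                           # 0.6
```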
Distinguishability of States, again
• State vectors s and t are (perfectly)
distinguishable or orthogonal (write s⊥t)
iff s†t = 0. (Their inner product is zero.)
• State vectors s and t are perfectly
indistinguishable or identical (write s=t)
iff s†t = 1. (Their inner product is one.)
• Otherwise, s and t are both non-orthogonal, and
non-identical. Not perfectly distinguishable.
• We say, “the amplitude of state s, given state t,
is s†t”. Note: amplitudes are complex numbers.
Probability and Measurement
• A yes/no measurement is an interaction
designed to determine whether a given
system is in a certain state s.
• The amplitude of state s, given the actual state t
of the system determines the probability of
getting a “yes” from the measurement.
• Postulate 2: For a system prepared in state t,
any measurement that asks “is it in state s?”
will say “yes” with probability P(s|t) = |s†t|2
– After the measurement, the state is changed, in a
way we will define later.
A Simple Example
• Suppose abstract system S has a set of only 4
distinguishable possible states, which we’ll
call s0, s1, s2, and s3, with corresponding ket
vectors |s0⟩, |s1⟩, |s2⟩, and |s3⟩.
• Another possible state is then the unit vector
(1/√2)|s0⟩ + (i/√2)|s3⟩
• Which is equal to the column matrix:
(1/√2, 0, 0, i/√2)ᵀ
• If measured to see if it is in state s0,
we have a 50% chance of getting a “yes”.
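• Checking the example numerically, using Postulate 2 (a minimal NumPy sketch):

```python
import numpy as np

# The example state (1/sqrt(2))|s0> + (i/sqrt(2))|s3> as a vector of amplitudes.
t = np.array([1/np.sqrt(2), 0, 0, 1j/np.sqrt(2)])
s0 = np.array([1, 0, 0, 0])                # the ket |s0>

# Postulate 2: P(s0|t) = |<s0|t>|^2
p_yes = abs(np.vdot(s0, t))**2
print(p_yes)   # 0.5, i.e. a 50% chance of "yes"
```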
Linear Operators
• V,W: Vector spaces.
• A linear operator A from V to W is a linear
function A: V→W. An operator on V is an
operator from V to itself.
• Given bases for V and W, we can represent
linear operators as matrices.
• An Hermitian operator H on V is a linear
operator that is self-adjoint (H=H†).
– Its diagonal elements are real.
Eigenvalues & Eigenvectors
• v is called an eigenvector of linear operator A
iff A just multiplies v by a scalar a, i.e. Av=av
– “eigen” (German) means “characteristic”
• a, the eigenvalue corresponding to eigenvector
v, is just the scalar that A multiplies v by
• a is degenerate if it is shared by 2 eigenvectors
that are not scalar multiples of each other
• Any Hermitian operator has all real-valued
eigenvalues, and its eigenvectors can be chosen
to form an orthogonal set
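• These facts are easy to check numerically; a small NumPy sketch with an arbitrary Hermitian matrix:

```python
import numpy as np

# An arbitrary 2x2 Hermitian matrix (equal to its own conjugate transpose).
H = np.array([[2.0, 1-1j],
              [1+1j, 3.0]])
assert np.allclose(H, H.conj().T)

vals, vecs = np.linalg.eigh(H)     # eigh is specialized for Hermitian matrices
print(vals)                        # eigenvalues are real
# Columns of `vecs` are the eigenvectors; they are orthonormal:
assert np.allclose(vecs.conj().T @ vecs, np.eye(2))
```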
Observables
• A Hermitian operator H on the space V is called an
observable if there is an orthonormal (all unit-length,
and mutually orthogonal) subset of its
eigenvectors that forms a basis of V.
• Postulate 3: Every measurable physical
property of a system can be described by a
corresponding observable H. Measurement
outcomes correspond to eigenvalues of H.
• The measurement can also be thought of as a
yes-no test that compares the state with each of
the observable’s normalized eigenvectors.
Wavefunctions
• Given any set S⊆ℋ of system states,
– Whether all mutually distinguishable, or not,
• a quantum state vector v can be translated to a
wavefunction ψ: S→ℂ, giving, for each state
s∈S, the amplitude ψ(s) of that state.
– When s is some other state vector, and the “actual”
state is v, then ψ(s) is just s†v.
– Whenever S includes a basis set, ψ determines v.
• ψ is called a “wavefunction” because its
dynamics takes the form of a wave equation
when S ranges over a space of positional states.
Time Evolution
• Postulate 4: (Closed) systems evolve (change
state) over time via unitary transformations.
ψ(t2) = U(t1→t2) ψ(t1)
• Note that since U is linear, a small-factor
change in the amplitude of a particular state at
t1 leads to a correspondingly small change in
the amplitude of the corresponding state at t2!
– Chaotic sensitivity to initial conditions requires an
ensemble of initial states that are different enough
to be distinguishable (in the sense we defined)
• Indistinguishable initial states never beget distinguishable
outcomes ⇒ true chaotic/analog computing doesn’t exist
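• As a concrete illustration of Postulate 4 (a hedged sketch, not from the slides): a closed two-state system with an arbitrary Hamiltonian H evolves by the unitary U = e^(−iHt), taking ℏ = 1:

```python
import numpy as np
from scipy.linalg import expm

# Time evolution U = exp(-i H t) for an arbitrary Hermitian H (hbar = 1).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
t = 0.8                              # arbitrary time interval
U = expm(-1j * H * t)

assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary

psi1 = np.array([0.6, 0.8])          # state at t1 (unit length)
psi2 = U @ psi1                      # state at t2
assert np.isclose(np.linalg.norm(psi2), 1.0)    # length (total probability) preserved
```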
Schrödinger's Wave Equation
• Start w. classical Hamiltonian energy equation:
H = K + P   (K = kinetic, P = potential)
• Express K in terms of momentum:
K = ½mv² = p²/2m
• Substitute H → iℏ ∂/∂t and p → −iℏ ∂/∂x:
iℏ ∂/∂t = −(ℏ²/2m) ∂²/∂x² + P(x,t)
• Apply to wavefunction Ψ over position states x:
iℏ ∂Ψ(x,t)/∂t = −(ℏ²/2m) ∂²Ψ(x,t)/∂x² + P(x,t)Ψ(x,t)
Multidimensional Form
For a system with states given by (x,t) where t is a
global time coordinate, and x describes N/3
particles (p0,…,pN/3−1) with masses (m0,…,mN/3−1)
in a 3-D Euclidean space, where each pi is located
at coordinates (x3i, x3i+1, x3i+2), and where particles
interact with potential energy function P(x,t), the
wavefunction Ψ(x,t) obeys the following (2nd-order, linear, partial) differential equation:
−(ℏ²/2) ∑_{j=0}^{N−1} (1/m_{⌊j/3⌋}) ∂²Ψ(x,t)/∂xj² + P(x,t)Ψ(x,t) = iℏ ∂Ψ(x,t)/∂t
Features of the wave equation
• Particles’ momentum state p is encoded
by their wavelength λ, as per p = h/λ
• The energy of a state is given by the frequency f
of rotation of the wavefunction in the
complex plane: E = hf.
• By simulating this simple equation, one can
observe basic quantum phenomena, such as:
– Interference fringes
– Tunneling of wave packets through
potential energy barriers
• Demo of SCH simulator (screenshot captions):
– Gaussian wave packet moving to the right;
array of small sharp potential-energy barriers
– Initial reflection/refraction of wave packet
– A little later
– Aimed a little higher
– A faster-moving particle
Compound Systems
• Let C=AB be a system composed of two
separate subsystems A,B, each with vector
spaces ℋA, ℋB with bases |ai⟩, |bj⟩.
• The state space of C is a vector space ℋC = ℋA⊗ℋB
given by the tensor product of spaces ℋA and ℋB,
with basis states labeled as |aibj⟩.
• E.g., if A has state a = ca0|a0⟩ + ca1|a1⟩,
while B has state b = cb0|b0⟩ + cb1|b1⟩, then
C has state c = a⊗b =
ca0cb0|a0b0⟩ + ca0cb1|a0b1⟩ +
ca1cb0|a1b0⟩ + ca1cb1|a1b1⟩
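• A minimal NumPy sketch of the tensor-product construction (arbitrary example amplitudes; np.kron produces exactly the basis ordering |a0b0⟩, |a0b1⟩, |a1b0⟩, |a1b1⟩ shown above):

```python
import numpy as np

# Subsystem states a and b (arbitrary example amplitudes, each normalized).
a = np.array([0.6, 0.8])                   # c_a0|a0> + c_a1|a1>
b = np.array([1/np.sqrt(2), 1j/np.sqrt(2)])

# The compound state is the tensor (Kronecker) product.
c = np.kron(a, b)
print(c)                                   # [ca0*cb0, ca0*cb1, ca1*cb0, ca1*cb1]
assert np.isclose(np.linalg.norm(c), 1.0)
```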
Entanglement
• If the state of compound system C can be
expressed as a tensor product of states of two
independent subsystems A and B,
c = ab,
• then, we say that A and B are not entangled,
and they have individual states.
– E.g. |00+|01+|10+|11=(|0+|1)(|0+|1)
• Otherwise, A and B are entangled (basically
correlated); their states are not independent.
– E.g. |00+|11
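• One hedged way to test this numerically (my own check, not from the slides): reshape the two-qubit amplitude vector into a 2×2 matrix and count its nonzero singular values (the Schmidt rank); rank 1 means a product state, rank > 1 means entangled.

```python
import numpy as np

def schmidt_rank(c, tol=1e-12):
    """Number of nonzero singular values of the 2x2 amplitude matrix."""
    M = c.reshape(2, 2)                     # rows: state of A, columns: state of B
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol))

product   = np.array([1, 1, 1, 1]) / 2            # (|0>+|1>)(|0>+|1>), normalized
entangled = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |00> + |11>, normalized

print(schmidt_rank(product))    # 1  -> not entangled
print(schmidt_rank(entangled))  # 2  -> entangled
```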
Size of Compound State Spaces
• Note that a system composed of many separate
subsystems has a very large state space.
• Say it is composed of N subsystems, each with
k basis states:
– The compound system has k^N basis states!
– There are states of the compound system having
nonzero amplitude in all these k^N basis states!
– In such states, all the distinguishable basis states are
(simultaneously) possible outcomes (each with
some corresponding probability)
– Illustrates the “many worlds” nature of quantum
mechanics.
Unitary Transformations
• A matrix (or linear operator) U is unitary iff its
inverse equals its adjoint: U⁻¹ = U†
• Some properties of unitary transformations:
– Invertible, bijective, one-to-one.
– The set of row vectors is orthonormal.
– Ditto for the set of column vectors.
– Preserves vector length: |Uψ| = |ψ|
• Therefore also preserves total probability over all
states: ∑_i |ψ(si)|² is unchanged.
– Corresponds to a change of basis, from one
orthonormal basis to another.
– Or, a generalized rotation of ψ in Hilbert space.
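• These properties can be verified for any concrete example; a quick NumPy check using the familiar Hadamard matrix (chosen here only as an illustration):

```python
import numpy as np

# The Hadamard matrix as a familiar example of a unitary transformation.
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

assert np.allclose(U.conj().T @ U, np.eye(2))            # U^dagger U = I
assert np.allclose(np.linalg.inv(U), U.conj().T)         # U^-1 = U^dagger

psi = np.array([0.8, 0.6j])                              # arbitrary unit-length state
assert np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi))  # length preserved

# Rows (and columns) are orthonormal:
assert np.allclose(U @ U.conj().T, np.eye(2))
```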
After a Measurement?
• After a system or subsystem is measured from
outside, its state appears to collapse to exactly
match the measured outcome
– the amplitudes of all states perfectly distinguishable
from states consistent w. that outcome drop to zero
– states consistent with measured outcome can be
considered “renormalized” so their probs. sum to 1
• This “collapse” seems nonunitary (& nonlocal)
– However, this behavior is now explicable as the
expected consensus phenomenon that would be
experienced even by entities within a closed,
perfectly unitarily-evolving world (Everett, Zurek).
Pointer States
• For a given system interacting with a given
environment,
– The system-environment interactions can be
considered measurements of a certain observable of
the system by the environment, and vice-versa.
• For each observable there are certain basis
states that are characteristic of that observable.
– The eigenstates of the observable
• A pointer state of a system is an eigenstate of
the system-environment interaction observable.
– The pointer states are the inherently stable states.
Key Points to Remember:
• An abstractly-specified system may have many
possible states; only some are distinguishable.
• A quantum state/vector/wavefunction ψ assigns
a complex-valued amplitude ψ(si) to each
distinguishable state si (out of some basis set)
• The probability of state si is |ψ(si)|², the square
of ψ(si)’s length in the complex plane.
• States evolve over time via unitary (invertible,
length-preserving) transformations.
Simulating the Schrödinger
Wave Equation
A Perfectly Reversible Discrete
Numerical Simulation Technique
Simulating Wave Mechanics
• The basic problem situation:
– Given:
• A (possibly complex) initial wavefunction
Ψ0 = Ψ(x, t0) in an N-dimensional position basis, and
• a (possibly complex and time-varying) potential energy
function V(x, t),
• a time t after (or before) t0,
– Compute:
• Ψ(x, t)
• Many practical physics applications...
The Problem with the Problem
• An efficient technique (when possible; sketched in code below):
– Convert V to the corresponding Hamiltonian H.
– Find the energy eigenstates of H.
– Project Ψ onto the eigenstate basis.
– Multiply each component by e^(−iE(t−t0)/ℏ), where E
is the corresponding energy eigenvalue.
– Project back onto the position basis.
• Problem:
– It may be intractable to find the eigenstates!
• We resort to numerical methods...
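• For reference, the eigenstate technique listed above takes only a few lines when diagonalizing H is feasible; a hedged NumPy sketch (ℏ = 1, with an arbitrary small example Hamiltonian):

```python
import numpy as np

def propagate(H, psi0, dt):
    """Evolve psi0 by time dt using the eigenstates of H (hbar = 1)."""
    E, V = np.linalg.eigh(H)                 # energy eigenvalues/eigenstates of H
    coeffs = V.conj().T @ psi0               # project psi0 onto the eigenstate basis
    coeffs = coeffs * np.exp(-1j * E * dt)   # phase-rotate each component
    return V @ coeffs                        # project back onto the original basis

H = np.diag([0.0, 1.0, 2.0]) + 0.1 * (np.eye(3, k=1) + np.eye(3, k=-1))
psi0 = np.array([1.0, 0.0, 0.0])
psi_t = propagate(H, psi0, dt=5.0)
assert np.isclose(np.linalg.norm(psi_t), 1.0)   # unitary: norm is preserved
```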
History of Reversible Schrödinger Sim.
See http://www.cise.ufl.edu/~mpf/sch
• Technique discovered by Ed Fredkin and
student William Barton at MIT in 1975.
• Subsequently proved by Feynman to exactly
conserve a certain probability measure:
Pt = Rt² + It−1·It+1
(R = real part, I = imaginary part, t = time step index)
• 1-D simulations in C/Xlib written by Frank at
MIT in 1996. Good behavior observed.
• 1 & 2-D simulations in Java, and proof of
stability by Motter at UF in 2000.
• User-friendly Java GUI by Holz at UF, 2002.
Difference Equations
• Consider any system with state x that evolves
according to a diff. eq. that is 1st-order in time:
dx/dt = f(x)
• Discretize time to a finite scale ∆t, and use a
difference equation instead:
x(t + ∆t) = x(t) + ∆t·f(x(t))
• Problem: Behavior not always numerically
stable.
– Errors can accumulate and grow exponentially.
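• As a small illustration (an assumed test problem, not from the slides): forward Euler applied to the oscillator dx/dt = iωx, whose exact solution keeps |x| = 1, steadily inflates the magnitude.

```python
import numpy as np

# Forward Euler on dx/dt = i*omega*x; the exact solution has constant |x| = 1.
omega, dt = 1.0, 0.1
f = lambda x: 1j * omega * x

x = 1.0 + 0j
for _ in range(1000):
    x = x + dt * f(x)          # x(t + dt) = x(t) + dt*f(x(t))
print(abs(x))                  # grows far above 1: the scheme is unstable here
```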
Centered Difference Equations
• Discretize derivatives in a symmetric fashion:
dx/dt ≈ [x(t + ∆t) − x(t − ∆t)] / (2∆t)
• Leads to update rules like:
x(t + ∆t) = x(t − ∆t) + 2∆t·f(x(t))
• Problem: States at odd- vs. even-numbered
time steps not constrained to stay close to
each other!
(Diagram: two interleaved chains x1 → x3 → … and x2 → x4 → …,
each advanced by 2∆t·f applied to the other chain’s current value.)
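• The same assumed test problem with the centered (leapfrog) rule stays bounded; a minimal sketch:

```python
import numpy as np

# Leapfrog / centered-difference update on dx/dt = i*omega*x.
omega, dt = 1.0, 0.1
f = lambda x: 1j * omega * x

x_prev = np.exp(-1j * omega * dt)   # exact value at t = -dt, to start the scheme
x_curr = 1.0 + 0j                   # value at t = 0
for _ in range(1000):
    # x(t + dt) = x(t - dt) + 2*dt*f(x(t))
    x_prev, x_curr = x_curr, x_prev + 2 * dt * f(x_curr)
print(abs(x_curr))                  # stays close to 1: no exponential blow-up
```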
Centered Schrödinger Equation
• Schrödinger’s equation for 1 particle in 1-D:
iℏ dψ(x,t)/dt = [−(ℏ²/2m) d²/dx² + V(x,t)] ψ(x,t)
• Replace time (& also space) derivatives with
centered differences, giving an update of the form
ψ(t + ∆t) = ψ(t − ∆t) − i·g(ψ(t)),
where g(ψ) = (2∆t/ℏ)Hψ applies the discretized
Hamiltonian.
• Centered difference equation has real
part at odd times that depends only on
imaginary part at even times, &
vice-versa.
– Drift not an issue - real & imaginary
parts represent different state components!
(Diagram: interleaved update chain R1 → I2 → R3 → I4 → …,
each step applying g to the other component.)
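• A hedged, self-contained 1-D sketch of this reversible scheme (my own reconstruction, not the original SCH code; assumes ℏ = m = 1, V = 0, and periodic boundaries):

```python
import numpy as np

# Centered-difference (leapfrog) Schrodinger update on a periodic 1-D grid.
N, dx, dt = 256, 1.0, 0.1          # satisfies the stability condition dx^2/dt > hbar/m
x = np.arange(N) * dx
V = np.zeros(N)                     # free particle for this illustration

def H(psi):
    """Discretized Hamiltonian: -(1/2) d^2/dx^2 + V, periodic boundaries."""
    lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
    return -0.5 * lap + V * psi

# Gaussian wave packet moving to the right (arbitrary illustrative parameters).
psi = np.exp(-((x - N * dx / 4) ** 2) / 50.0) * np.exp(1j * 0.5 * x)
psi = psi / np.linalg.norm(psi)

psi_prev = psi + 1j * dt * H(psi)   # approximate psi at t = -dt to start the scheme
for _ in range(2000):
    # Centered update: psi(t+dt) = psi(t-dt) - 2i*dt*H psi(t)
    psi_prev, psi = psi, psi_prev - 2j * dt * H(psi)

print(np.linalg.norm(psi))          # stays close to 1 over thousands of steps
```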
Proof of Stability
• Technique is proved perfectly numerically
stable & convergent assuming V is 0 and
∆x²/∆t > ℏ/m   (an angular velocity)
• Elements of proof:
– Lax-Richtmyer equivalence: convergence ⇔ stability.
– Analyze amplitudes of Fourier-transformed basis
• Sufficient due to Parseval’s relation
– Use theorem (cf. Strikwerda) equating stability to
certain conditions on the roots of an amplification
polynomial Φ(g,θ), which are satisfied by our rule.
• Empirically, technique looks perfectly stable
even for more complex potential energy funcs.
Phenomena Observed in Model
• Perfect reversibility
• Wave packet momentum
• Conservation of probability mass
• Harmonic oscillator
• Tunneling/reflection at potential energy barriers
• Interference fringes
• Diffraction
Interesting Features of this Model
• Can be implemented perfectly reversibly, with
zero asymptotic spacetime overhead
– Every last bit is accounted for!
• As a result, algorithm can run adiabatically,
with power dissipation approaching zero
– Modulo leakage & frictional losses
• Can map it to a unitary quantum algorithm
– Direct mapping:
• Classical reversible ops only, no quantum speedup
– Indirect (implicit) mapping:
• Simulate p particles on k^d lattice sites using pd lg k qubits
• Time per update step is order pd lg k instead of k^(pd)