Efficient Simulation of Quantum
Mechanics Collapses the
Polynomial Hierarchy
(yes, really)
Scott Aaronson
Alex Arkhipov
MIT
In 1994, something big happened in our
field, whose meaning is still debated today…
Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: For reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for
one of the central computational problems
of modern science: QUANTUM SIMULATION
Use of DoE supercomputers by area
(from a talk by Alán Aspuru-Guzik)
Shor’s Theorem:
QUANTUM SIMULATION is
not in BPP, unless
FACTORING is also
Today: A completely different kind of hardness
result for simulating quantum mechanics
Advantages of our result:
• Based on P^#P = BPP^NP rather than FACTORING ∈ BPP
• Applies to an extremely weak subset of QC
  (“Non-interacting bosons,” or linear optics with a single nonadaptive measurement at the end)
• Even gives evidence that QCs have capabilities outside PH

Disadvantages:
• Applies to distributional and relation problems, not to decision problems
• Harder to convince a skeptic that your QC is really solving the relevant hard problem
Let C be a quantum circuit, which acts on n qubits initialized to the all-0 state |0…0⟩.

[Circuit diagram: n qubits, each prepared in |0⟩, fed through C and then measured]

C defines a distribution D_C over n-bit output strings.

QSAMPLING_ε: Given C as input, sample a string x from any probability distribution D such that ‖D − D_C‖ ≤ ε.
Certainly this problem is BQP-hard
Our Result: Suppose QSAMPLING_0.01 is in probabilistic polytime. Then P^#P = BPP^NP
(so in particular, PH collapses to the third level)
More generally:
Suppose QSAMPLING_0.01 is in probabilistic polytime relative to an oracle A. Then P^#P ⊆ BPP^(NP^A).
So QSAMPLING can’t even be in BPP^PH without collapsing PH!
Extension to relational problems:
Suppose FBQP = FBPP. Then P^#P = BPP^NP
“QSAMPLING is #P-hard under BPP^NP-reductions”
(Provided the BPP^NP machine gets to pick the random bits used by the QSAMPLING oracle)
Warmup: Why Exact QSAMPLING Is Hard
Let f:{0,1}^n → {−1,1} be any efficiently computable function.
Suppose we apply the following quantum circuit:
[Circuit diagram: apply a Hadamard to each of the n qubits, apply f as a phase, then apply a Hadamard to each qubit again]
Then the probability of observing the all-0 string is

p := [ (1/2^n) Σ_{x∈{0,1}^n} f(x) ]²
Claim 1: p is #P-hard to estimate (up to a constant factor).
(Related to my result that PostBQP = PP)

Proof: If we can estimate p, then we can also compute Σ_x f(x), using binary search and padding.

Claim 2: Suppose QSAMPLING was classically easy. Then we could estimate p in BPP^NP.

Proof: Let M be a classical algorithm for QSAMPLING, and let r be its randomness. Use approximate counting to estimate

Pr_r[ M(r) outputs 0^n ] = [ (1/2^n) Σ_{x∈{0,1}^n} f(x) ]² = p.

Conclusion: Suppose QSAMPLING is easy. Then P^#P = BPP^NP.
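As an illustration (my own toy example, not from the talk), here is a brute-force computation of the quantity p for a small, arbitrarily chosen f; estimating p amounts to estimating the ±1 balance of f, a #P-style counting quantity.

from itertools import product

def f(x):
    # Toy efficiently computable f : {0,1}^n -> {-1,+1} (arbitrary choice):
    # -1 if x contains an odd number of adjacent 1-1 pairs, else +1.
    return -1 if sum(x[i] & x[i + 1] for i in range(len(x) - 1)) % 2 else 1

n = 10
total = sum(f(x) for x in product([0, 1], repeat=n))   # sum_x f(x)
p = (total / 2 ** n) ** 2                              # probability of the all-0 outcome
print(total, p)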
So Why Aren’t We Done?
Ultimately, our goal is to show that Nature can actually
perform computations that are hard to simulate classically,
thereby overthrowing the Extended Church-Turing Thesis
But any real quantum system is subject to noise, meaning we can’t actually sample from D_C, but only from some distribution D such that ‖D − D_C‖ ≤ ε.

Could that be easy, even if sampling from D_C itself was hard?

To rule that out, we need to show that even a fast classical algorithm for QSAMPLING would imply P^#P = BPP^NP.
The Problem
Suppose M “knew” that all we cared about was the final amplitude of |0…0⟩ (i.e., that’s where we shoehorned a hard #P-complete instance). Then it could adversarially choose to be wrong about that one, exponentially-small amplitude, and still be a good sampler.

So we need a problem that more “robustly” encodes a #P-complete problem into quantum computation. We’ll have to work harder … but as a bonus, we’ll not only rule out approximate samplers, but approximate samplers for an extremely weak kind of QC.

Hmm … robust #P-complete problem … you mean like the PERMANENT?

Indeed. But to bring the permanent into quantum computing, we need a brief detour into particle physics (!)
Particle Physics In One Slide
There are two types of particles in Nature…

BOSONS (force-carriers: photons, gluons…)
• Swap two identical bosons → the quantum state |ψ⟩ is unchanged
• Bosons can “pile on top of each other” (and do: lasers, Bose-Einstein condensates…)

FERMIONS (matter: quarks, electrons…)
• Swap two identical fermions → the quantum state picks up a −1 phase
• Pauli exclusion principle: no two fermions can occupy the same state
Consider a system of n identical, non-interacting particles…

[Figure: n particles, each moving from an initial state at t_initial to a final state at t_final]

Let a_ij ∈ C be the amplitude for transitioning from initial state i to final state j, and let

        ( a_11 … a_1n )
A  :=   (  ⋮        ⋮  )
        ( a_n1 … a_nn )

Then what’s the total amplitude for the above process?
Per(A) if the particles are bosons; Det(A) if they’re fermions.

“All I can say is, the bosons got the harder job…”
The BOSONSAMPLING Problem
Input: An m×n complex matrix A, whose n columns are orthonormal vectors in C^m (here m ≥ n²)

Let a configuration be a list S = (s_1,…,s_m) of nonnegative integers with s_1+…+s_m = n

Task: Sample each configuration S with probability

p_S := |Per(A_S)|² / (s_1! ⋯ s_m!)

where A_S is an n×n matrix containing s_i copies of the i-th row of A

Neat Fact: The p_S’s sum to 1
Physical Interpretation: We’re simulating a unitary evolution
of n identical bosons, each of which can be in m=poly(n)
“modes.” Initially, modes 1 to n have one boson each and
modes n+1 to m are unoccupied. After applying the unitary,
we measure the number of bosons in each mode.
Example: a small matrix A with ±1/2-type entries; the probabilities that the bosons stay where they are, shift by one mode, or both land in the same mode are each of the form |Per(A_S)|²/(s_1!⋯s_m!) for the corresponding submatrix A_S, and one of those permanents is 0, so that outcome never occurs.
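A minimal worked sketch (using a 2-mode 50/50 beamsplitter as an assumed illustrative matrix, not necessarily the one on the slide): it enumerates all configurations, computes p_S = |Per(A_S)|²/(s_1!⋯s_m!), and checks the “Neat Fact” that the probabilities sum to 1. For this A the two bosons never exit in different modes (the Hong-Ou-Mandel effect).

from itertools import permutations, combinations_with_replacement
from math import factorial, sqrt

def permanent(M):
    n = len(M)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

def boson_distribution(A, n):
    # p_S = |Per(A_S)|^2 / (s_1! ... s_m!) for every configuration S
    m = len(A)
    dist = {}
    for modes in combinations_with_replacement(range(m), n):   # multisets of output modes
        S = tuple(modes.count(i) for i in range(m))
        A_S = [A[i] for i in modes]                            # s_i copies of row i of A
        norm = 1
        for s in S:
            norm *= factorial(s)
        dist[S] = abs(permanent(A_S)) ** 2 / norm
    return dist

A = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]
dist = boson_distribution(A, n=2)
print(dist)                        # ~ {(2, 0): 0.5, (1, 1): 0.0, (0, 2): 0.5}
print(sum(dist.values()))          # ~ 1.0, the "Neat Fact"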
Theorem (implicit in Lloyd 1996): BOSONSAMPLING reduces to QSAMPLING
Proof Sketch: We need to simulate a system of n bosons on
a conventional quantum computer
The basis states |s_1,…,s_m⟩ (with s_1+…+s_m = n) just record the occupation number of each mode.

Given any “scattering matrix” U ∈ C^(m×m) on the m modes, we can decompose U as a product U_1 ⋯ U_T, where T = O(m²) and each U_t acts only on 2-dimensional subspaces of the form

{ |s_1, …, s_i, …, s_j, …, s_m⟩ ,  |s_1, …, s_i − 1, …, s_j + 1, …, s_m⟩ }

for some (i,j)
Theorem (Valiant 2001, Terhal-DiVincenzo 2002): FERMIONSAMPLING ∈ BPP
In stark contrast, we prove the following:
Suppose BOSONSAMPLING ∈ BPP. Then given an arbitrary matrix X ∈ C^(n×n), one can approximate |Per(X)|² in BPP^NP
But I thought we could
approximate the permanent in BPP
anyway, by Jerrum-Sinclair-Vigoda!
Yes, for nonnegative matrices. For general matrices, approximating |Per(X)|² is #P-complete.
Outline of Proof
Given a matrix X ∈ C^(n×n), with every entry satisfying |x_ij| ≤ 1, we want to approximate |Per(X)|² to within ±n!
This is already #P-complete (proof: standard padding tricks)
Notice that |Per(X)|² is a degree-2n polynomial in the entries of X (as well as their complex conjugates)
As in Lipton/LFKN, we can let V be some random curve in C^(n×n) that passes through X, and let Y_1,…,Y_k ∈ C^(n×n) be other matrices on V (where k ≈ n²)
If we can estimate |Per(Y_i)|² for most i, then we can estimate |Per(X)|² using noisy polynomial interpolation
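A minimal sketch of the interpolation idea (my own illustration, with made-up random matrices): along the line Y(t) = (1−t)X + tZ, the quantity |Per(Y(t))|² is a polynomial in t of degree at most 2n, so exact values at 2n+1 points t ≠ 0 determine |Per(X)|² by interpolation. (As the next slide explains, a straight line is not actually good enough, because it reveals X; the real proof needs the trigonometric curve constructed below.)

import numpy as np
from itertools import permutations

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

rng = np.random.default_rng(0)
n = 3
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

ts = np.array([-1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])        # 2n+1 sample points, none at t=0
vals = [abs(permanent((1 - t) * X + t * Z)) ** 2 for t in ts]
coeffs = np.polyfit(ts, vals, deg=2 * n)                      # exact interpolation here
print(np.polyval(coeffs, 0.0))                                # recovered |Per(X)|^2
print(abs(permanent(X)) ** 2)                                 # direct |Per(X)|^2 for comparison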
But Linear Interpolation Doesn’t Work!
A random line through X ∈ C^(n×n) “retains too much information” about X.

We need to redo Lipton/LFKN to work over the complex numbers rather than finite fields.
Solution: Choose a matrix Y(t) of random trigonometric polynomials, such that Y(0) = X:

y_ij(t) := Σ_{ℓ=1}^{L} α_{ij,ℓ} e^(2πiℓt),   with   y_ij(0) = x_ij

For sufficiently large L and t >> 0, each y_ij(t) will look like an independent Gaussian, uncorrelated with x_ij.

Furthermore, Per(Y(t)) is a univariate polynomial in e^(2πit) of degree at most Ln.
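A minimal numerical sketch (my own reconstruction, assuming the curve has the form just displayed, with arbitrarily chosen n and L and a simplistic way of enforcing Y(0) = X): writing z = e^(2πit), it builds the coefficient vector of Per(Y) as a polynomial in z, checks that the curve passes through X at t = 0 (i.e. z = 1), and confirms the degree is L·n.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n, L = 3, 4
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# coeffs[i][j] are the coefficients of y_ij in z = e^(2*pi*i*t), lowest power first:
# y_ij = sum_{l=1..L} alpha_l * z^l, rescaled so that sum_l alpha_l = x_ij (i.e. Y(0) = X).
coeffs = np.zeros((n, n, L + 1), dtype=complex)
for i in range(n):
    for j in range(n):
        alpha = rng.normal(size=L) + 1j * rng.normal(size=L)
        coeffs[i, j, 1:] = alpha * (X[i, j] / alpha.sum())

def permanent(M):
    return sum(np.prod([M[i, s[i]] for i in range(M.shape[0])])
               for s in permutations(range(M.shape[0])))

# Per(Y) as a polynomial in z: expand the sum over permutations symbolically.
per_poly = np.zeros(L * n + 1, dtype=complex)            # lowest power first
for sigma in permutations(range(n)):
    term = np.array([1.0 + 0j])                          # highest power first
    for i in range(n):
        term = np.polymul(term, coeffs[i, sigma[i]][::-1])
    per_poly[:len(term)] += term[::-1]

print(np.polyval(per_poly[::-1], 1.0), permanent(X))     # agree: the curve passes through X
print(len(per_poly) - 1)                                 # degree L*n = 12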
Questions: How do we sample Y(t) and Y1,…,Yk efficiently?
How do we do the noisy polynomial interpolation?
Lazy answer: Since we’re a BPP^NP machine, just use rejection sampling!
The problem reduces to estimating |Per(Y)|², for a matrix Y ∈ C^(n×n) of (essentially) independent N(0,1) Gaussians.
To do this, generate a random m×n column-orthonormal matrix A that contains Y/√m as an n×n submatrix
(i.e., such that A_S = Y/√m for some random configuration S)
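A minimal sketch of the distributional fact behind this step (my own illustration, with real instead of complex entries and arbitrary sizes): a random column-orthonormal matrix A, obtained by QR-orthonormalizing an i.i.d. Gaussian matrix, has the property that when m ≫ n², an n×n submatrix rescaled by √m looks like an i.i.d. N(0,1) matrix. The actual reduction runs this in reverse, starting from a given Gaussian Y and hiding Y/√m inside such an A.

import numpy as np

rng = np.random.default_rng(2)
n = 4
m = 25 * n * n                               # m >> n^2
G = rng.normal(size=(m, n))
A, _ = np.linalg.qr(G)                       # random m x n matrix with orthonormal columns

print(np.allclose(A.T @ A, np.eye(n)))       # True: the columns are orthonormal
print((np.sqrt(m) * A).std())                # ~1: rescaled entries look like N(0,1)
Y_like = np.sqrt(m) * A[:n, :]               # an n x n block behaving like a Gaussian Y
print(Y_like.round(2))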
Let M be our BPP algorithm for approximate BOSONSAMPLING,
and let r be M’s randomness
Use approximate counting (in BPP^NP) to estimate

Pr_r[ M(r) outputs S ]
Intuition: M has no way to determine which configuration S we care about. So if it’s right about most configurations, then w.h.p. we must have

Pr_r[ M(r) outputs S ]  ≈  |Per(Y/√m)|²  =  |Per(Y)|² / m^n
Problem: Bosons like to pile on top of each other!
Call a configuration S=(s1,…,sm) good if every si is 0 or 1 (i.e.,
there are no collisions between bosons), and bad otherwise
We assumed for simplicity that all configurations were good
But suppose bad configurations dominated. Then M could
be wrong on all good configurations, yet still “work”
Furthermore, the “bosonic birthday paradox” is even worse
than the classical one!
Pr[both particles land in the same box] = 2/3,
rather than ½ as with classical particles

Fortunately, we show that with n bosons and m ≥ kn² boxes,
the probability of a collision is still at most (say) ½
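A quick Monte Carlo check of the 2/3 figure (my own illustration, under the assumption that the two bosons enter separate modes of a Haar-random 2-mode scattering unitary): the collision probability, computed from permanents exactly as in the BOSONSAMPLING definition, averages to 2/3, versus 1/2 for two classical balls thrown into two boxes.

import numpy as np

rng = np.random.default_rng(3)

def haar_unitary_2x2():
    Z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))      # Haar-distributed on U(2)

def collision_probability(U):
    # Configurations (2,0) and (0,2): A_S repeats row 1 or row 2 of U twice,
    # Per([[a,b],[a,b]]) = 2ab, and p_S = |Per(A_S)|^2 / 2!.
    p20 = abs(2 * U[0, 0] * U[0, 1]) ** 2 / 2
    p02 = abs(2 * U[1, 0] * U[1, 1]) ** 2 / 2
    return p20 + p02

samples = [collision_probability(haar_unitary_2x2()) for _ in range(20000)]
print(np.mean(samples))    # ~ 2/3 (classical particles would give 1/2)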
Experimental Prospects
What would it take to implement
BOSONSAMPLING with photonics?
• Reliable phase-shifters
• Reliable beamsplitters
• Reliable single-photon sources
• Reliable photodetectors
But crucially, no nonlinear optics
or postselected measurements!
Problem: The output will be a collection of n×n matrices B_1,…,B_k with “unusually large permanents.” But how would a classical skeptic verify that |Per(B_i)|² was large?
Our Proposal: Concentrate on (say) n=30 photons, so that
classical simulation is difficult but not impossible
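A rough back-of-envelope sketch of why n ≈ 30 sits in the “difficult but not impossible” regime (my own illustration): Ryser’s formula evaluates an n×n permanent with roughly n·2^n arithmetic operations, which is about 3×10^10 for n = 30.

def ryser_permanent(A):
    # Ryser's formula: Per(A) = (-1)^n * sum over nonempty column subsets S of
    # (-1)^|S| * prod_i (sum of row i restricted to S); about n * 2^n operations.
    n = len(A)
    total = 0.0
    for mask in range(1, 1 << n):
        prod = 1.0
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total

print(ryser_permanent([[1, 2], [3, 4]]))      # 10 = 1*4 + 2*3
n = 30
print(f"~{n * 2 ** n:.1e} operations for one {n} x {n} permanent")   # ~3.2e+10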
Open Problems
Does our result relativize? (Conjecture: No)
Can we use BOSONSAMPLING to do universal QC? Can we use
it to solve any decision problem outside BPP?
Can you convince a skeptic (who isn’t a BPP^NP machine) that your QC is indeed doing BOSONSAMPLING?
Can we get unlikely complexity collapses from P=BQP or
PromiseP=PromiseBQP?
Would a nonuniform sampling algorithm (one that was
different for each scattering matrix A) have unlikely
complexity consequences?
Is PERMANENT #P-complete for +1/-1 matrices (with no 0’s)?
Conclusion
I like to say that we have three choices: either
(1) The Extended Church-Turing Thesis is false,
(2) Textbook quantum mechanics is false, or
(3) QCs can be efficiently simulated classically.
For all intents and purposes