The Learnability of Quantum States


Solving Hard Problems With Light
Scott Aaronson (Assoc. Prof., EECS)
Joint work with Alex Arkhipov
In 1994, something big happened in the
foundations of computer science, whose meaning
is still debated today…
Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: For reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for
one of the central computational problems
of modern science: QUANTUM SIMULATION
[Chart: Use of DoE supercomputers by area (from a talk by Alán Aspuru-Guzik)]
Shor’s Theorem:
QUANTUM SIMULATION is
not solvable efficiently
(in polynomial time),
unless FACTORING is also
Today, a different kind of hardness result for
simulating quantum mechanics
Advantages:
• Based on more “generic” complexity assumptions than the hardness of FACTORING
• Gives evidence that QCs have capabilities outside the entire “polynomial hierarchy”
• Requires only a very simple kind of quantum computation: nonadaptive linear optics (testable before I’m dead?)

Disadvantages:
• Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
• Harder to convince a skeptic that your computer is indeed solving the relevant hard problem
• Less relevant for the NSA
Bestiary of Complexity Classes
[Diagram: a “bestiary” of complexity classes, showing P, BPP, NP, BQP, PH, and P^#P (counting), with the example problems 3SAT, FACTORING, and PERMANENT marked at their levels]

How complexity theorists say “such-and-such is damn unlikely”:
“If such-and-such is true, then PH collapses to a finite level.”
Our Results
Suppose the output distribution of any linear-optics circuit
can be efficiently sampled by a classical algorithm. Then the
polynomial hierarchy collapses.
Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, still the polynomial hierarchy collapses.

“If our conjectures hold, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from.”

Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is
(1) #P-hard to approximate, and
(2) not too concentrated around 0.
Then the output distribution of a linear-optics circuit can’t
even be approximately sampled efficiently classically, unless
the polynomial hierarchy collapses.
Particle Physics In One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS.

“All I can say is, the bosons got the harder job.”
Their transition amplitudes are given respectively by…
Per A 
n
a  


S n i 1
i,
i
Det A 
 1


sgn  
S n
n
a  
i,
i 1
i
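To make the two formulas concrete, here is a minimal brute-force sketch in Python (mine, not from the talk): both quantities are the same sum over all n! permutations, differing only in the sign attached to each term, yet Det(A) is computable in polynomial time while Per(A) is #P-hard.

```python
# Minimal sketch (not from the talk): evaluate Per(A) and Det(A) directly
# from their permutation-sum definitions. Exponential time -- small n only.
import itertools
import numpy as np

def permanent(A):
    """Per(A) = sum over permutations sigma of prod_i A[i, sigma(i)]."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def determinant(A):
    """Det(A) = the same sum, with each term weighted by the sign of sigma."""
    n = A.shape[0]
    total = 0.0
    for s in itertools.permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        total += (-1) ** inversions * np.prod([A[i, s[i]] for i in range(n)])
    return total

A = np.random.randn(4, 4)
print(permanent(A), determinant(A), np.linalg.det(A))  # the last two agree up to rounding
```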
High-Level Idea
Estimating a sum of exponentially many positive or
negative numbers: #P-hard
Estimating a sum of exponentially many nonnegative
numbers: Still hard, but known to be in PH
If quantum mechanics could be efficiently simulated
classically, then these two problems would become
equivalent—thereby placing #P in PH, and collapsing PH
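A toy numerical illustration of the gap between the two kinds of sums (my example, not from the talk): with random signs, cancellations shrink a sum of N terms from roughly N down to roughly √N, so a small relative error on the signed sum is a much stronger demand than on the unsigned one.

```python
# Toy illustration (mine): signed vs. unsigned sums of the same terms.
# Random signs cause cancellations that shrink the total from ~N to ~sqrt(N),
# which is the intuitive reason multiplicative estimates of signed sums are
# so much harder than estimates of nonnegative sums.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
terms = rng.random(N)                  # stand-in for "exponentially many" nonnegative terms
signs = rng.choice([-1.0, 1.0], size=N)

unsigned_sum = terms.sum()             # sum of nonnegative numbers
signed_sum = (signs * terms).sum()     # sum of positive and negative numbers
print(f"unsigned ~ {unsigned_sum:.3e}, |signed| ~ {abs(signed_sum):.3e}")
```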
So why aren’t we done?
Because real quantum experiments are subject to noise
Would an efficient classical algorithm that simulated a noisy
optics experiment still collapse the polynomial hierarchy?
Main Result: Yes, assuming two plausible conjectures about
permanents of random matrices (the “PCC” and the “PGC”)
Particular experiment we have in mind: Take a system of n identical photons with m = O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U. Then measure which modes have one or more photons in them.
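As an illustration of the Haar-random m×m unitary in this setup (my own sketch; the helper name haar_random_unitary is not from the talk), a standard recipe is to QR-decompose a matrix of i.i.d. complex Gaussians and then fix the phases so the result is exactly Haar-distributed:

```python
# Sketch (mine): draw a Haar-random m x m unitary via QR of a complex Gaussian
# matrix, with a phase correction on the diagonal of R so the distribution is
# exactly Haar rather than merely unitary.
import numpy as np

def haar_random_unitary(m, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))   # rescale column j by the phase of R[j, j]

n = 4          # photons
m = n ** 2     # modes, m = O(n^2) as in the proposed experiment
U = haar_random_unitary(m)
print(np.allclose(U.conj().T @ U, np.eye(m)))   # sanity check: U is unitary
```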
The Permanent Concentration
Conjecture (PCC)
There exists a polynomial p such that for all n,

$$\Pr_{X \sim \mathcal{N}(0,1)_{\mathbb{C}}^{\,n \times n}}\left[\,|\mathrm{Per}(X)| \ge \frac{\sqrt{n!}}{p(n)}\,\right] \ge \frac{1}{2}$$

Empirically true!
Also, we can prove it with determinant in place of permanent.
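As a toy version of that empirical check (my sketch, not the authors’ code; the choice p(n) = n is purely illustrative), one can sample complex Gaussian matrices, compute |Per(X)| with Ryser’s formula, and record how often it clears √(n!)/p(n):

```python
# Toy empirical look at the PCC (mine, not from the talk): for small n, sample
# X ~ N(0,1)_C^{n x n}, compute |Per(X)| via Ryser's formula, and count how
# often it exceeds sqrt(n!)/p(n) with the illustrative choice p(n) = n.
import math
import numpy as np

def permanent_ryser(A):
    """Ryser's O(2^n * n) formula for the permanent."""
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if (mask >> j) & 1]
        total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

rng = np.random.default_rng(0)
n, trials = 6, 200
threshold = math.sqrt(math.factorial(n)) / n   # sqrt(n!)/p(n) with p(n) = n
hits = 0
for _ in range(trials):
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    if abs(permanent_ryser(X)) >= threshold:
        hits += 1
print(f"fraction of samples with |Per(X)| >= sqrt(n!)/n: {hits / trials:.2f}")
```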
The Permanent-of-Gaussians
Conjecture (PGC)
Let X be an n×n matrix of independent, N(0,1) complex Gaussian entries. Then approximating Per(X) to within a 1/poly(n) multiplicative error, for a 1-1/poly(n) fraction of X, is a #P-hard problem.
Experimental Prospects
What would it take to implement
the requisite experiment?
• Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes (see the sketch after this list)
• Reliable single-photon sources
• Photodetector arrays that can
reliably distinguish 0 vs. 1 photon
But crucially, no nonlinear optics
or postselected measurements!
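For concreteness, here is a minimal sketch (names and parameters mine, not from the talk) of how the phase-shifters and beamsplitters in the first bullet act as m×m transformations on the photon modes; composing enough of them yields an arbitrary unitary on the m modes, in the spirit of Reck-style decompositions.

```python
# Sketch (mine): the two linear-optical building blocks as m x m mode unitaries.
import numpy as np

def phase_shifter(m, i, phi):
    """Identity on all modes except a phase e^{i*phi} on mode i."""
    P = np.eye(m, dtype=complex)
    P[i, i] = np.exp(1j * phi)
    return P

def beamsplitter(m, i, j, theta):
    """Identity on all modes except a 2x2 rotation mixing modes i and j."""
    B = np.eye(m, dtype=complex)
    B[i, i] = B[j, j] = np.cos(theta)
    B[i, j] = -np.sin(theta)
    B[j, i] = np.sin(theta)
    return B

# Example: a small interferometer on m = 4 modes built only from these elements.
U = beamsplitter(4, 0, 1, np.pi / 4) @ phase_shifter(4, 1, np.pi / 3) @ beamsplitter(4, 1, 2, np.pi / 8)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # still unitary
```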
Our Proposal:
Concentrate on (say)
n=20 photons and
m=400 modes, so that
classical simulation is
nontrivial but not
impossible
Summary
I often say that Shor’s algorithm presented us with three
choices. Either
(1) The laws of physics are exponentially hard to simulate
on any computer today,
(2) Textbook quantum mechanics is false, or
(3) Quantum computers are easy to simulate classically.
For all intents and purposes?