The Computational Complexity of
Linear Optics
Scott Aaronson (MIT)
Joint work with Alex Arkhipov
In 1994, something big happened in the
foundations of computer science, whose meaning
is still debated today…
Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: Even for reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for
one of the central computational problems
of modern science: QUANTUM SIMULATION
Use of DoE supercomputers by area
(from a talk by Alán Aspuru-Guzik)
Shor’s Theorem:
QUANTUM SIMULATION is
not in probabilistic
polynomial time, unless
FACTORING is also
Today, a different kind of hardness result for
simulating quantum mechanics
Advantages:
• Based on more “generic” complexity assumptions than the hardness of FACTORING
• Gives evidence that QCs have capabilities outside the polynomial hierarchy
• Only involves linear optics (with single-photon Fock state inputs and nonadaptive multimode photon-detection measurements)
Disadvantages:
• Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
• Harder to convince a skeptic that your QC is indeed solving the relevant hard problem
• Less relevant for the NSA
Bestiary of Complexity Classes
[Diagram: a map of complexity classes P, BPP, NP, PH, BQP, and P^#P, with 3SAT, FACTORING, and the PERMANENT (counting) as representative problems]
How complexity theorists say “such-and-such is damn unlikely”:
“If such-and-such is true, then PH collapses to a finite level”
Our Results
Suppose the output distribution of any linear-optics circuit can be efficiently
sampled classically (e.g., by Monte Carlo). Then the polynomial hierarchy
collapses (indeed P^#P=BPP^NP). Indeed, even if such a distribution can be
sampled by a classical computer with an oracle for the polynomial hierarchy,
the polynomial hierarchy still collapses.
Suppose two plausible conjectures are true: the permanent of a Gaussian
random matrix is
(1) #P-hard to approximate, and
(2) not too concentrated around 0.
Then the output distribution of a linear-optics circuit can’t even be
approximately sampled efficiently classically, unless the polynomial
hierarchy collapses.
If our conjectures hold, then even a noisy linear-optics experiment can
sample from a probability distribution that no classical computer can
feasibly sample from.
Related Work
Knill, Laflamme, Milburn 2001: Linear optics with adaptive
measurements yields universal QC
Valiant 2002, Terhal-DiVincenzo 2002: Noninteracting fermions
can be simulated in P
A. 2004: Quantum computers with postselection on unlikely
measurement outcomes can solve hard counting problems
(PostBQP=PP)
Shepherd, Bremner 2009: “Instantaneous quantum
computing” can solve sampling problems that seem hard
classically
Bremner, Jozsa, Shepherd 2010: Efficient simulation of
instantaneous quantum computing would collapse PH
Particle Physics In One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS.
All I can say is, the bosons got the harder job
Their transition amplitudes are given respectively by…
$$\operatorname{Per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}a_{i,\sigma(i)}\quad\text{(bosons)}
\qquad\qquad
\operatorname{Det}(A)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma(i)}\quad\text{(fermions)}$$
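To make the contrast concrete, here is a minimal sketch (Python/NumPy, my choice of language rather than anything from the talk; the helper names are illustrative) that evaluates both formulas directly as sums over all n! permutations:

```python
import itertools
import numpy as np

def sign(perm):
    """sgn(sigma) via the parity of the number of inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def permanent(A):
    """Per(A) = sum over permutations sigma of prod_i a_{i, sigma(i)}."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def determinant_from_definition(A):
    """Det(A): the same sum, but weighted by sgn(sigma)."""
    n = A.shape[0]
    return sum(sign(s) * np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

A = np.random.randn(5, 5)
assert np.isclose(determinant_from_definition(A), np.linalg.det(A))
print(permanent(A))
```

Both run in roughly n·n! time as written; the asymmetry the slide is pointing at is that Det also admits an O(n³) algorithm via Gaussian elimination, while the best known exact algorithms for Per (e.g., Ryser’s) remain exponential.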
Linear Optics for Dummies (or computer scientists)
Computational basis states have the form |S⟩=|s1,…,sm⟩,
where s1,…,sm are nonnegative integers such that s1+…+sm=n
n = # of identical photons
m = # of modes
For us, m>n
Starting from a fixed initial state, say |I⟩=|1,…,1,0,…,0⟩,
you get to choose any m×m mode-mixing unitary U
U induces a $\binom{m+n-1}{n}\times\binom{m+n-1}{n}$ unitary $\varphi(U)$ on n-photon states, defined by
$$\langle S|\varphi(U)|T\rangle=\frac{\operatorname{Per}(U_{S,T})}{\sqrt{s_1!\cdots s_m!\,t_1!\cdots t_m!}}$$
Here U_{S,T} is an n×n matrix obtained by taking s_i copies of the
i-th row of U and t_j copies of the j-th column, for all i,j
Then you get to measure φ(U)|I⟩ in the computational basis
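As a sanity check of the definition, the amplitude ⟨S|φ(U)|T⟩ can be computed directly for small systems. A minimal sketch (reusing the `permanent` helper from the earlier sketch; the function names here are mine, not the paper’s):

```python
import math
import numpy as np
# permanent() as defined in the earlier sketch

def build_U_ST(U, S, T):
    """n x n matrix made of s_i copies of row i and t_j copies of column j of U."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]
    cols = [j for j, t in enumerate(T) for _ in range(t)]
    return U[np.ix_(rows, cols)]

def amplitude(U, S, T):
    """<S| phi(U) |T> = Per(U_{S,T}) / sqrt(s_1!...s_m! t_1!...t_m!)."""
    norm = math.sqrt(math.prod(math.factorial(s) for s in S) *
                     math.prod(math.factorial(t) for t in T))
    return permanent(build_U_ST(U, S, T)) / norm
```

For small n and m one can enumerate all $\binom{m+n-1}{n}$ output states S and check that the probabilities |amplitude(U, S, T)|² sum to 1, which is exactly the statement that φ(U) is unitary.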
Upper Bounds on the Power of Linear Optics
Theorem (Feynman 1982, Abrams-Lloyd 1996): Linear-optics
computation can be simulated in BQP
Proof Idea: Decompose the m×m unitary U into a product of
O(m²) elementary “linear-optics gates” (beamsplitters and
phaseshifters), then simulate each gate using polylog(n)
standard qubit gates
Theorem (Bartlett-Sanders et al.): If the inputs are Gaussian
states and the measurements are homodyne, then linear-optics
computation can be simulated in P
Theorem (Gurvits): There exist classical algorithms to
approximate ⟨S|φ(U)|T⟩ to additive error ±ε in randomized
poly(n,1/ε) time, and to compute the marginal distribution on
photon numbers in k modes in n^O(k) time
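One way to see where the additive-error algorithm comes from is the Glynn-type identity Per(A) = E over uniform x in {−1,1}^n of (x_1···x_n)·∏_i(Ax)_i; when ‖A‖ ≤ 1 (e.g., A is a submatrix of a unitary), every sample has magnitude at most 1, so averaging poly(n,1/ε) samples estimates Per(A) to within ±ε. The sketch below is my illustration of that idea, not necessarily Gurvits’ exact algorithm:

```python
import numpy as np

def glynn_gurvits_estimate(A, num_samples=200_000, seed=None):
    """Unbiased randomized estimate of Per(A), based on
       Per(A) = E_{x uniform in {-1,1}^n} [ (prod_j x_j) * prod_i (A x)_i ].
    When ||A|| <= 1, each sample is bounded by 1 in magnitude, so the
    additive error decays like 1/sqrt(num_samples)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.choice([-1.0, 1.0], size=(num_samples, n))
    samples = x.prod(axis=1) * (x @ A.T).prod(axis=1)   # (A x)_i for each sample
    return samples.mean()
```

For a small unitary U one can compare `glynn_gurvits_estimate(U)` against the exact `permanent(U)` from the earlier sketch to see the ±ε behavior directly.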
By contrast, exactly sampling the distribution over
all n photons is extremely hard! Here’s why …
Given any matrix A ∈ C^{n×n}, we can construct an m×m mode-mixing
unitary U (where m ≥ 2n) as follows:
$$U=\begin{pmatrix}\varepsilon A & B\\ C & D\end{pmatrix}$$
Suppose we start with |I⟩=|1,…,1,0,…,0⟩ (one photon in
each of the first n modes), apply φ(U), and measure.
Then the probability of observing |I⟩ again is
$$p:=\left|\langle I|\varphi(U)|I\rangle\right|^{2}=\varepsilon^{2n}\left|\operatorname{Per}(A)\right|^{2}$$
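One standard way to realize such a U, sketched here as an illustration (the paper’s construction may differ in details), is a unitary dilation: scale A by ε so that ‖εA‖ ≤ 1, then complete it with blocks built from matrix square roots. The top-left n×n block of the result is exactly εA, so ⟨I|φ(U)|I⟩ = Per(εA) = ε^n·Per(A).

```python
import numpy as np
from scipy.linalg import sqrtm

def embed_in_unitary(A, eps=None):
    """Return a 2n x 2n unitary whose top-left n x n block is eps*A.
    Uses the dilation U = [[X, sqrt(I - X X*)], [sqrt(I - X* X), -X*]],
    which is unitary whenever ||X|| <= 1; here X = eps*A."""
    n = A.shape[0]
    if eps is None:
        # Any eps with ||eps*A|| <= 1 works; leave some slack for numerics.
        eps = 0.5 / np.linalg.norm(A, 2)
    X = eps * A
    I = np.eye(n)
    U = np.block([[X,                          sqrtm(I - X @ X.conj().T)],
                  [sqrtm(I - X.conj().T @ X),  -X.conj().T]])
    return U, eps

A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
U, eps = embed_in_unitary(A)
assert np.allclose(U @ U.conj().T, np.eye(8), atol=1e-8)
```

Since the input and output photons all sit in the first n modes, U_{I,I} is just this top-left block, and the amplitude helper from earlier should return Per(εA) = ε^n·Per(A) for it, matching the probability formula above.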
Claim 1: p is #P-complete to
estimate (up to a constant factor)
Idea: Valiant proved that the
PERMANENT is #P-complete.
Can use a classical reduction
to go from a multiplicative
approximation of |Per(A)|²
to Per(A) itself.
Claim 2: Suppose we had a
fast classical algorithm for
linear-optics sampling. Then
we could estimate p in BPP^NP
Idea: Let M be our classical
sampling algorithm, and let r
be its randomness. Use
approximate counting to estimate
$$\Pr_{r}\left[M(r)\text{ outputs }|I\rangle\right]=p=\left|\langle I|\varphi(U)|I\rangle\right|^{2}=\varepsilon^{2n}\left|\operatorname{Per}(A)\right|^{2}$$
Conclusion: Suppose we had a fast classical algorithm
for linear-optics sampling. Then P^#P=BPP^NP.
High-Level Idea
Estimating a sum of exponentially many positive or
negative numbers: #P-complete
Estimating a sum of exponentially many nonnegative
numbers: Still hard, but known to be in BPP^NP ⊆ PH
If quantum mechanics could be efficiently simulated
classically, then these two problems would become
equivalent—thereby placing #P in PH, and collapsing PH
Extensions:
- Even simulation of QM in PH would imply P^#P = PH
- Can design a single hard linear-optics circuit for each n
- Can let the inputs be coherent rather than Fock states
So why aren’t we done?
Because real quantum experiments are subject to noise
Would an efficient classical algorithm that sampled from a
noisy distribution—one that was only 1/poly(n)-close to the
true distribution in variation distance—still collapse the
polynomial hierarchy?
Main Result: Yes, assuming two plausible conjectures about
random permanents (the “PGC” and the “PCC”)
Difficulty in showing this: The sampler might adversarially
neglect to output the one submatrix whose permanent we
care about! So we’ll need to “smuggle” the PERMANENT
instance we care about into a random submatrix
The Permanent-of-Gaussians
Conjecture (PGC)
There exist ε,δ ≥ 1/poly(n) for which the following problem is
#P-hard. Given a Gaussian random matrix X drawn from
N(0,1)_C^{n×n}, output a complex number z such that
$$\left|\frac{z}{\operatorname{Per}(X)}-1\right|\le\varepsilon,$$
with probability at least 1−δ over X.
We can prove the conjecture if ε=0 or δ=0! What makes it
hard is the combination of average-case and approximation
The Permanent Concentration
Conjecture (PCC)
For all polynomials q, there exists a polynomial p such that
for all n,
$$\Pr_{X\sim N(0,1)_{\mathbb{C}}^{\,n\times n}}\left[\,|\operatorname{Per}(X)|^{2}<\frac{n!}{p(n)}\,\right]<\frac{1}{q(n)}$$
Empirically true!
Also, we can prove it with
determinant in place of
permanent
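The “empirically true” claim is easy to poke at numerically for small n. The sketch below is my own check, reusing the `permanent` helper from the earlier sketch; the convention that N(0,1)_C means independent real and imaginary parts of variance ½ is an assumption. Under that convention E[|Per(X)|²] = n!, which is why n! is the natural scale for the cutoff.

```python
import math
import numpy as np
# permanent() as defined in the earlier sketch

def gaussian_matrix(n, rng):
    """A draw from N(0,1)_C^{n x n}: complex entries with E|z|^2 = 1."""
    return (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / math.sqrt(2)

rng = np.random.default_rng(0)
n, trials, p_of_n = 5, 1000, 10.0          # p_of_n stands in for p(n)
threshold = math.factorial(n) / p_of_n
hits = sum(abs(permanent(gaussian_matrix(n, rng))) ** 2 < threshold
           for _ in range(trials))
print(f"estimated Pr[|Per(X)|^2 < n!/{p_of_n:g}] = {hits / trials:.3f}")
```

This only probes tiny n, of course; the conjecture is about the asymptotic trade-off between p(n) and q(n).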
Main Result
Take a system of n identical photons with m=O(n²) modes.
Put each photon in a known mode, then apply a Haar-random
m×m unitary transformation U.
Let D be the distribution that results from measuring the
photons. Suppose there’s a fast classical algorithm that takes
U as input, and samples any distribution even 1/poly(n)-close
to D in variation distance. Then assuming the PGC and PCC,
BPP^NP=P^#P and hence PH collapses
Idea: Given a Gaussian random matrix A, we’ll “smuggle” A
into the m×m unitary transition matrix U (with m=O(n²) modes), in
such a way that ⟨S|φ(U)|I⟩=Per(A), for some basis state |S⟩
Useful lemma we rely on: given a Haar-random m×m unitary
matrix, an n×n submatrix looks approximately Gaussian
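This lemma is also easy to test numerically: for m much larger than n², the entries of √m times an n×n corner of a Haar-random m×m unitary should look like independent complex Gaussians with mean 0 and E|z|² = 1. A small check, assuming SciPy’s `unitary_group` sampler and the same N(0,1)_C convention as before:

```python
import numpy as np
from scipy.stats import unitary_group

m, n, reps = 400, 4, 100                      # m >> n^2, as in the construction
corners = [np.sqrt(m) * unitary_group.rvs(m)[:n, :n] for _ in range(reps)]
entries = np.concatenate([c.ravel() for c in corners])
print("mean      :", entries.mean())              # should be close to 0
print("E|z|^2    :", np.mean(abs(entries) ** 2))  # should be close to 1
print("Var(Re z) :", entries.real.var())          # ~1/2 under the N(0,1)_C convention
```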
Then the classical sampler has “no way of knowing” which
submatrix of U we care about, so even if it has 1/poly(n)
error, with high probability it will return |S⟩ with probability
close to |Per(A)|²
Then, just like before, we can use approximate counting to
estimate Pr[|S⟩] ≈ |Per(A)|² in BPP^NP
Assuming the PCC, the above lets us estimate Per(A) itself in
BPP^NP
And assuming the PGC, estimating Per(A) is #P-hard
Problem: Bosons like to pile on top of each other!
Call a basis state S=(s1,…,sm) good if every si is 0 or 1 (i.e.,
there are no collisions between photons), and bad otherwise
If bad basis states dominated, then our sampling algorithm
might “work,” without ever having to solve a hard
PERMANENT instance
Furthermore, the “bosonic birthday paradox” is even worse
than the classical one!
$$\Pr[\text{both particles land in the same box}]=\frac{2}{3},$$
rather than ½ as with classical particles
Fortunately, we show that with n bosons and m ≥ kn² modes,
the probability of a collision is still at most (say) ½
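For tiny systems one can compute these collision probabilities exactly from the permanent formula and average over Haar-random interferometers. The sketch below reuses `permanent` from the earlier sketch and SciPy’s `unitary_group`; the loop over collision-free outputs is exponential, so this is only for very small n. Numerically, the Haar-averaged value for two bosons in two modes comes out near 2/3, and it drops sharply once m is much larger than n².

```python
import itertools
import numpy as np
from scipy.stats import unitary_group
# permanent() as defined in the earlier sketch

def collision_probability(U, n):
    """Pr[some output mode holds >= 2 photons], for input |1,...,1,0,...,0>."""
    m = U.shape[0]
    p_collision_free = 0.0
    for out_modes in itertools.combinations(range(m), n):
        sub = U[np.ix_(out_modes, range(n))]   # rows: occupied outputs, cols: inputs
        p_collision_free += abs(permanent(sub)) ** 2
    return 1.0 - p_collision_free

print("n=2, m=2  :", np.mean([collision_probability(unitary_group.rvs(2), 2)
                              for _ in range(500)]))    # ~2/3 on average
print("n=2, m=20 :", np.mean([collision_probability(unitary_group.rvs(20), 2)
                              for _ in range(100)]))    # much smaller
```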
Experimental Prospects
What would it take to implement
the requisite experiment?
• Reliable phase-shifters and
beamsplitters, to implement an
arbitrary unitary on m photon
modes
• Reliable single-photon sources
• Photodetector arrays that can
reliably distinguish 0 vs. 1 photon
But crucially, no nonlinear optics
or postselected measurements!
Our Proposal:
Concentrate on (say)
n=20 photons and
m=400 modes, so that
classical simulation is
nontrivial but not
impossible
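For a rough sense of why this regime is “nontrivial but not impossible”, here is a back-of-the-envelope calculation (my own arithmetic, not from the talk): any single amplitude is cheap to compute with a Ryser-style permanent algorithm, but the number of n-photon basis states is astronomical, so brute-force classical simulation of the whole output distribution is out of reach.

```python
import math

n, m = 20, 400
num_outcomes = math.comb(m + n - 1, n)   # number of n-photon basis states
ryser_cost = 2 ** n * n                  # rough op count for one permanent via Ryser's formula
print(f"n-photon basis states : {num_outcomes:.3e}")
print(f"cost of one amplitude : {ryser_cost:.3e} operations")
```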
Main Open Problems
Prove the Permanent of Gaussians Conjecture! (That
approximating the permanent of an N(0,1) Gaussian random
matrix is #P-complete)
Prove the Permanent Concentration Conjecture! (That
Pr[|Per(X)|² < n!/p(n)] < 1/q(n))
Can our linear-optics model solve classically-intractable
decision problems?
Are there other (e.g., qubit-based) quantum systems for
which approximate classical simulation would collapse PH?
Do our linear-optics experiment!