
BosonSampling
Scott Aaronson (MIT)
ICMP 2015, Santiago, Chile
Based mostly on joint work with Alex Arkhipov
What This Talk Won’t Have / What It Will Have

[Diagram: nested complexity classes. P (Efficiently Solvable Problems) ⊆ NP (Efficiently Checkable Problems) ⊆ PH (Constant Number of NP Quantifiers) ⊆ P^#P (Oracle for Counting Problems)]
The Extended Church-Turing Thesis (ECT)
Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine
Shor’s Theorem: QUANTUM SIMULATION has no efficient classical algorithm, unless FACTORING does also
So the ECT is false … what more evidence could anyone want?
Building a QC able to factor large numbers is hard! After 20 years, no fundamental obstacle has been found, but who knows?
Can’t we “meet the physicists halfway,” and show computational hardness for quantum systems closer to what they actually work with now?
FACTORING might have a fast classical algorithm! At any rate, it’s an extremely “special” problem
Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
Our Starting Point
Det  A 
 1


sgn 
S n
n
a  
i,
i 1
i
Per  A 
n
a  


S n i 1
i,
All I can say is, the bosons
In
#P-complete [Valiant]
gotPthe harder job
FERMIONS
BOSONS
i
So if n-boson amplitudes correspond to permanents…
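To make the fermion/boson contrast concrete, here is a minimal Python sketch (mine, not from the talk): the determinant is computed in polynomial time by standard linear algebra, while the best known exact algorithm for the permanent, Ryser’s inclusion-exclusion formula, takes roughly 2^n steps.

```python
# Determinant vs. permanent: similar-looking formulas, very different complexity.
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = A.shape[0]
    return (-1) ** n * sum(
        (-1) ** len(cols) * np.prod(A[:, list(cols)].sum(axis=1))
        for k in range(1, n + 1)
        for cols in combinations(range(n), k))

A = np.random.randn(6, 6)
print("Det(A):", np.linalg.det(A))    # polynomial time (Gaussian elimination)
print("Per(A):", permanent_ryser(A))  # exponential time; #P-complete in general
```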
Can We Use Bosons to Calculate
the Permanent?
That sounds way too good to be true—it would let us
solve NP-complete problems and more using QC!
Explanation: Amplitudes aren’t directly observable.
To get a reasonable estimate of Per(A), you might
need to repeat the experiment exponentially many
times
Basic Result: Suppose there were a polynomial-time
classical randomized algorithm that took as input a
description of a noninteracting-boson experiment, and
that output a sample from the correct final distribution
over n-boson states.
Then P^#P = BPP^NP and the polynomial hierarchy collapses.
Motivation: Compared to (say) Shor’s algorithm, we
get “stronger” evidence that a “weaker” system can
do interesting quantum computations
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, “folklore”:
A QC built of noninteracting fermions can be efficiently
simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons
plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized
algorithm to approximate Per(A) for nonnegative A
Bremner-Jozsa-Shepherd 2011 (independent of us):
Analogous hardness results for simulating “commuting
Hamiltonian” quantum computers
The Quantum Optics Model
A rudimentary subset of quantum computing, involving
only non-interacting bosons, and not based on qubits
Classical counterpart: Galton’s Board, on display at
many science museums
Using only pegs and noninteracting balls, you probably can’t build a universal computer, but you can do some interesting computations, like generating the binomial distribution!
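As a quick illustration (a toy sketch, with made-up parameters), here is a Galton board in Python: each ball falls past `depth` pegs and bounces left or right with probability 1/2, so the bin counts trace out a binomial distribution.

```python
# Classical Galton board: final bin = number of rightward bounces ~ Binomial(depth, 1/2)
import random
from collections import Counter

def galton_board(depth, balls):
    bins = Counter()
    for _ in range(balls):
        bins[sum(random.random() < 0.5 for _ in range(depth))] += 1
    return bins

print(sorted(galton_board(depth=10, balls=10_000).items()))  # peaks near bin 5
```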
The Quantum Version
Let’s replace the balls by identical single photons,
and the pegs by beamsplitters
Then we see strange things like the Hong-Ou-Mandel dip

[Diagram: two identical photons entering the two ports of a 50/50 beamsplitter, each transmitted or reflected with amplitude 1/√2]

The two photons are now correlated, even though they never interacted! The explanation involves destructive interference of amplitudes: the final amplitude of non-collision is

$$\frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}} - \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}} = 0$$
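The dip can be checked numerically from the permanent rule that the next slide makes formal (a minimal sketch; the 2×2 permanent is written out by hand): for the 50/50 beamsplitter, the |1,1⟩ output amplitude vanishes, while |2,0⟩ and |0,2⟩ each occur with probability 1/2.

```python
# Hong-Ou-Mandel in amplitudes: each 2-photon output amplitude is a 2x2 permanent.
import numpy as np

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # 50/50 beamsplitter
per2 = lambda M: M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0]  # 2x2 permanent

print("amp(|1,1>):", per2(U))                           # 0: the dip
print("amp(|2,0>):", per2(U[:, [0, 0]]) / np.sqrt(2))   # +1/sqrt(2)
print("amp(|0,2>):", per2(U[:, [1, 1]]) / np.sqrt(2))   # -1/sqrt(2)
```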
Getting Formal
The basis states have the form |S⟩ = |s_1, …, s_m⟩, where s_i is the number of photons in the i-th “mode”
We’ll never create or destroy photons. So s_1 + … + s_m = n is constant. For us, m = n^{O(1)}
Initial state: |I⟩ = |1, …, 1, 0, …, 0⟩
You get to apply any m×m unitary matrix U (say, using a collection of 2-mode beamsplitters)
In general, there are
$$M := \binom{m + n - 1}{n}$$
ways to distribute n identical photons into m modes
U induces an M×M unitary φ(U) on the n-photon states as follows:
$$\varphi(U)_{S,T} = \frac{\mathrm{Per}(U_{S,T})}{\sqrt{s_1! \cdots s_m! \, t_1! \cdots t_m!}}$$
Here U_{S,T} is an n×n submatrix of U (possibly with repeated rows and columns), obtained by taking s_i copies of the i-th row of U and t_j copies of the j-th column, for all i, j
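The map φ(U) can be coded directly from this formula. A sketch (function names are mine): build U_{S,T} by repeating rows and columns according to the occupation numbers, take its permanent, and divide by the square-rooted factorials.

```python
from itertools import combinations
from math import factorial, prod
import numpy as np

def permanent(A):
    """Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = A.shape[0]
    return (-1) ** n * sum(
        (-1) ** len(cols) * np.prod(A[:, list(cols)].sum(axis=1))
        for k in range(1, n + 1)
        for cols in combinations(range(n), k))

def amplitude(U, S, T):
    """<S|phi(U)|T> for occupation tuples S, T with sum(S) == sum(T)."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]   # s_i copies of row i
    cols = [j for j, t in enumerate(T) for _ in range(t)]   # t_j copies of column j
    norm = np.sqrt(prod(map(factorial, S)) * prod(map(factorial, T)))
    return permanent(U[np.ix_(rows, cols)]) / norm

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude(U, (1, 1), (1, 1)))   # 0.0: the Hong-Ou-Mandel dip again
```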
Beautiful Alternate Perspective
The “state” of our computer, at any time, is a degree-n polynomial over the variables x = (x_1, …, x_m) (n ≪ m)
Initial state: p(x) := x_1 ⋯ x_n
We can apply any m×m unitary transformation U to x, to obtain a new degree-n polynomial
$$p'(x) := p(Ux) = \sum_{S = (s_1, \ldots, s_m)} \alpha_S \, x_1^{s_1} \cdots x_m^{s_m}$$
Then on “measuring,” we see the monomial $x_1^{s_1} \cdots x_m^{s_m}$ with probability $|\alpha_S|^2 \, s_1! \cdots s_m!$
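A tiny symbolic check of this picture (a sketch using sympy; the 2-mode example is mine): substituting x → Ux into p(x) = x₁x₂ and reading off the coefficient of x₁x₂ recovers the 2×2 permanent.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a, b, c, d = sp.symbols('a b c d')
y = sp.Matrix([[a, b], [c, d]]) * sp.Matrix([x1, x2])  # the substituted variables Ux

p_new = sp.expand(y[0] * y[1])    # p'(x) = p(Ux) for p(x) = x1*x2
print(p_new.coeff(x1).coeff(x2))  # a*d + b*c = Per(U)
```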
OK, so why is it hard to sample the
distribution over photon numbers classically?
Given any matrix A ∈ ℂ^{n×n}, we can construct an m×m unitary U (where m ≥ 2n) as follows:
$$U = \begin{pmatrix} \varepsilon A & B \\ C & D \end{pmatrix}$$
Suppose we start with |I⟩ = |1, …, 1, 0, …, 0⟩ (one photon in each of the first n modes), apply U, and measure. Then the probability of observing |I⟩ again is
$$p := \big| \langle I | \varphi(U) | I \rangle \big|^2 = \varepsilon^{2n} \, \big| \mathrm{Per}(A) \big|^2$$
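One standard way to realize this construction (a sketch; the talk doesn’t specify the blocks B, C, D, so I use a textbook unitary dilation): scale A by ε so its operator norm is below 1, complete εA to a 2n×2n unitary, and check the probability identity numerically.

```python
from itertools import combinations
import numpy as np
from scipy.linalg import sqrtm

def permanent(M):
    """Ryser's formula, as in the earlier sketch."""
    n = M.shape[0]
    return (-1) ** n * sum(
        (-1) ** len(c) * np.prod(M[:, list(c)].sum(axis=1))
        for k in range(1, n + 1)
        for c in combinations(range(n), k))

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eps = 0.5 / np.linalg.norm(A, 2)          # ensure ||eps * A|| < 1
E, I = eps * A, np.eye(n)
U = np.block([[E, sqrtm(I - E @ E.conj().T)],      # unitary dilation of eps*A
              [sqrtm(I - E.conj().T @ E), -E.conj().T]])
assert np.allclose(U @ U.conj().T, np.eye(2 * n))  # U really is unitary

p = abs(permanent(U[:n, :n])) ** 2                 # |<I|phi(U)|I>|^2
print(np.isclose(p, eps ** (2 * n) * abs(permanent(A)) ** 2))  # True
```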
Claim 1: p is #P-complete to estimate (up to a constant factor)
Idea: Valiant proved that the PERMANENT is #P-complete. Can use a classical reduction to go from a multiplicative approximation of |Per(A)|² to Per(A) itself.
Claim 2: Suppose we had a fast classical algorithm for boson sampling. Then we could estimate p in BPP^NP
Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate
$$\Pr_r \big[ M(r) \text{ outputs } |I\rangle \big]$$
Conclusion: Suppose we had a fast classical algorithm for boson sampling. Then P^#P = BPP^NP.
The Elephant in the Room
The previous result hinged on the difficulty of
estimating a single, exponentially-small probability
p—but what about noise and error?
The “right” question: can a classical computer efficiently sample a distribution with 1/n^{O(1)} variation distance from the boson distribution?
Our Main Result: Suppose it can. Then there’s a BPP^NP algorithm to estimate |Per(A)|², with high probability over a Gaussian matrix
$$A \sim \mathcal{N}(0, 1)_{\mathbb{C}}^{\,n \times n}$$
Our Main Conjecture
Estimating |Per(A)|², with high probability over i.i.d. Gaussian A, is a #P-hard problem
If this conjecture holds, then even a noisy n-photon experiment could falsify the Extended Church-Turing Thesis, assuming P^#P ≠ BPP^NP!
Much of our work is devoted to giving evidence for this conjecture
What makes the Gaussian ensemble special?
Theorem: It arises by considering sufficiently small
submatrices of Haar-random unitary matrices.
“Easier” problem: Just show that, if A is an i.i.d.
Gaussian matrix, then |Per(A)|2 is approximately a
lognormal random variable (as numerics suggest), and
not so concentrated around 0 as to preclude its
being hard to estimate
Can prove for determinant in place of permanent.
For permanent, best known anti-concentration
results [Tao-Vu] are not yet strong enough for us
Can calculate E[|Per(A)|²] = n! and E[|Per(A)|⁴] = (n+1)(n!)², but that’s not strong enough to imply the anti-concentration result
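The second moment is easy to check by Monte Carlo (a sketch, reusing the Ryser permanent; note how noisy the estimate is, which is the heavy-tail/anti-concentration issue in miniature):

```python
from itertools import combinations
from math import factorial
import numpy as np

def permanent(M):
    n = M.shape[0]
    return (-1) ** n * sum(
        (-1) ** len(c) * np.prod(M[:, list(c)].sum(axis=1))
        for k in range(1, n + 1)
        for c in combinations(range(n), k))

n, trials = 4, 20_000
rng = np.random.default_rng(0)
mean = np.mean([abs(permanent((rng.standard_normal((n, n)) +
                               1j * rng.standard_normal((n, n))) / np.sqrt(2))) ** 2
                for _ in range(trials)])
print(mean, "vs n! =", factorial(n))   # hovers around 24, with visible noise
```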
BosonSampling Experiments
In 2012, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents
# of experiments > # of photons!
Challenges for Scaling Up:
- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence
Goal (in our view): Scale to 10-30 photons
Don’t want to scale much beyond that—both because
(1) you probably can’t without fault-tolerance, and
(2) a classical computer probably couldn’t even verify
the results!
Scattershot BosonSampling
Exciting recent idea, proposed by Steve Kolthammer and
others, for sampling a hard distribution even with highly
unreliable (but heralded) photon sources, like SPDCs
The idea: Say you have 100 sources, of which only 10
(on average) generate a photon. Then just detect which
sources succeed, and use those to define your
BosonSampling instance!
Complexity analysis turns out to go through essentially
without change
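In code, the scattershot bookkeeping is almost trivial (a toy sketch; the 100-source, 10%-success numbers echo the example above):

```python
import random

num_sources, p_fire = 100, 0.10
fired = [i for i in range(num_sources) if random.random() < p_fire]
# The heralded set of successful sources defines this run's occupied input modes
print(f"{len(fired)} sources fired; modes {fired} define the BosonSampling instance")
```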
Using Quantum Optics to Prove
that the Permanent is #P-Complete
[A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete—but
his proof required strange, custom-made gadgets
We gave a new, arguably more transparent proof by
combining three facts:
(1) n-photon amplitudes correspond to nn permanents
(2) Postselected quantum optics can simulate universal
quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete
quantities in their amplitudes
Can BosonSampling Solve Non-Sampling Problems?
(Could it even have cryptographic applications?)
Idea: What if we could “smuggle” a matrix A with huge
permanent, as a submatrix of a larger unitary matrix U?
Finding A could be hard classically, but shooting photons
into an interferometer network would easily reveal it
Pessimistic Conjecture: If U is unitary and |Per(U)| ≥ 1/n^{O(1)}, then U is “close” to a permuted diagonal matrix, so it “sticks out like a sore thumb”
A.-Nguyen, Israel J. Math 2014: Proof of a weaker
version of the pessimistic conjecture, using inverse
Littlewood-Offord theory
BosonSampling with Lost Photons
Suppose we have n+k photons in the initial state, but k
are randomly lost. Then the probability of each output
has the form
Per  A 



S  1,, n  k
S n
S
2
A ~ N 0,1
n n  k 
What can we say about these quantities? Are they also
(plausibly) #P-hard to approximate?
Work in progress with Daniel Brod
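For tiny n and k, the lost-photon quantity can be evaluated directly from its definition (a sketch reusing the Ryser permanent; by linearity of expectation, its mean is C(n+k, n)·n!):

```python
from itertools import combinations
import numpy as np

def permanent(M):
    n = M.shape[0]
    return (-1) ** n * sum(
        (-1) ** len(c) * np.prod(M[:, list(c)].sum(axis=1))
        for k in range(1, n + 1)
        for c in combinations(range(n), k))

n, k = 3, 2
rng = np.random.default_rng(0)
A = (rng.standard_normal((n, n + k)) + 1j * rng.standard_normal((n, n + k))) / np.sqrt(2)
total = sum(abs(permanent(A[:, list(S)])) ** 2
            for S in combinations(range(n + k), n))   # all n-column submatrices A_S
print(total)   # expectation over A is C(5,3) * 3! = 60 for these sizes
```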
Summary
Intuition suggests that not merely quantum computers,
but many natural quantum systems, should be
intractable to simulate on classical computers, because
of the exponentiality of the wavefunction
BosonSampling provides a clear example of how we can
formalize this intuition—or at least, base it on “standard”
conjectures in theoretical computer science.
It’s also brought QC theory into closer contact with
experiment. And it’s highlighted the remarkable
connection between bosons and the matrix permanent.
Future progress may depend on solving hard open
problems about the permanent
Bonus: Rise and Fall of “Complexity”
Sean Carroll’s example: [image: a cup of coffee and cream mixing over time]
But how to quantify? One simpleminded measure:
apparent complexity.
The Kolmogorov complexity (estimated, say, by GZIP file
size) of a coarse-grained (de-noised) description of our
thermodynamic mixing process. Does it rise and then fall?
The Coffee Automaton
A., Carroll, Mohan, Ouellette, Werness 2015: A probabilistic n×n reversible system that starts half “coffee” and half “cream.” At each time step, we randomly “shear” half the coffee cup horizontally or vertically (assuming a toroidal cup)
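A rough re-implementation sketch (my toy update rule uses random toroidal row/column shifts, not the paper’s exact shear dynamics): track the gzip size of a coarse-grained snapshot and watch it rise and then fall.

```python
import gzip
import numpy as np

n, steps, block = 64, 2000, 4
rng = np.random.default_rng(1)
grid = np.zeros((n, n), dtype=np.uint8)
grid[n // 2:, :] = 1                      # bottom half "coffee", top half "cream"

def apparent_complexity(g):
    """gzip size of a coarse-grained (de-noised) snapshot of the grid."""
    coarse = g.reshape(n // block, block, n // block, block).mean(axis=(1, 3))
    return len(gzip.compress(np.round(coarse * 8).astype(np.uint8).tobytes()))

for t in range(steps):
    i, shift = rng.integers(n), rng.integers(1, n)
    if rng.random() < 0.5:
        grid[i, :] = np.roll(grid[i, :], shift)   # horizontal shift (toroidal)
    else:
        grid[:, i] = np.roll(grid[:, i], shift)   # vertical shift (toroidal)
    if t % 400 == 0:
        print(t, apparent_complexity(grid))       # tends to rise, then fall
```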
We prove that the apparent complexity of this image has a rising-falling pattern, with a maximum of at least ~n^{1/6}

[Plot: compressed file size vs. time steps, rising to a peak and then falling]