On the energy landscape of 3D spin
Hamiltonians with topological order
Sergey Bravyi
(IBM Research)
Jeongwan Haah (Caltech)
Phys. Rev. Lett. 107, 150504 (2011)
and arXiv:1112.????
QEC 2011
December 6, 2011
Main goal:
Store a quantum state reliably for a macroscopic time
in the presence of hardware imperfections and thermal noise
without active error correction.
Towards topological self-correcting memories
• Robust against small imperfections: 2D toric code [Kitaev 97]
• Constant threshold with active EC [Dennis et al 2001]
• No-go result for thermal noise [Alicki, Fannes, Horodecki 2008]
• No-go result for some 3D stabilizer codes [Yoshida 2011]
• No-go result for all 2D stabilizer codes [S.B. and Terhal 2008]
Most promising ideas:
• Add one extra dimension to our space-time [Alicki, Horodecki^3 2008]
• 2D + long-range anyon-anyon interactions [Chesi et al 2009, Hamma et al 2009]
• 3D topological quantum spin glasses [Chamon 2005, Haah 2011, this work]
Outline
• Encoding, storage, and decoding for memory
Hamiltonians based on stabilizer codes
• Memory time of the 3D Cubic Code:
rigorous lower bound and numerical simulation
• Topological quantum order, string-like logical
operators, and the no-strings rule
• Logarithmic energy barrier for uncorrectable errors
Memory Hamiltonians based on stabilizer codes
Qubits live at sites of a 2D or 3D lattice. O(1) qubits per site.
Hamiltonian = sum of local commuting Pauli stabilizers:

H = -Σ_a G_a

[Figure: energy ladder. The ground state (zero defects) is at the bottom; excited states with m = 1, 2, 3, … defects lie at energy 2m above it, since each violated stabilizer raises the energy by 2.]

The ground space is an [N, k, d] error correcting code with distance d ≈ L.
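As a toy illustration (mine, not from the talk) of the energy-vs-defects relation, take the simplest commuting stabilizer Hamiltonian, the classical repetition code on a ring: every violated bond is a defect and costs energy 2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
spins = rng.choice([-1, 1], size=n)      # a random computational-basis state

# Toy stabilizer Hamiltonian H = -sum_i Z_i Z_{i+1} on a ring: each violated
# bond ("defect") raises the energy by 2 above the ground state.
bonds = spins * np.roll(spins, -1)       # +1 = satisfied bond, -1 = defect
energy = -int(bonds.sum())
defects = int(np.count_nonzero(bonds == -1))
assert energy == -n + 2 * defects        # E = E_ground + 2 * (# defects)
print(defects, "defects, energy", energy)
```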
Example: 3D Cubic Code [Haah 2011 ]
2 qubits per site, 2 stabilizers per cube
[Figure: the two cube stabilizers, with a two-qubit Pauli label (II, IZ, ZI, ZZ, IX, XI, XX) at each corner of a cube. In one standard presentation, the Z-type generator has ZZ at one corner, ZI at its three neighboring corners, IZ at the three corners neighboring the opposite corner, and II at the opposite corner; the X-type generator touches the same single-qubit positions on the six side corners (with X in place of Z) and has XX at the corner where the Z-type generator has II, and II where it has ZZ.]

Each stabilizer acts on 8 qubits.
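As a concreteness check (my sketch, not from the talk), the following builds both cube generators on a periodic lattice and verifies that all of them commute. The corner assignment follows the polynomial presentation f = 1 + x + y + z, g = 1 + xy + yz + zx often used for Haah's code; the exact corner convention is an assumption.

```python
import numpy as np
from itertools import product

L = 4               # periodic lattice size
N = 2 * L**3        # two qubits per site

def qubit_index(x, y, z, a):
    """Flatten (site, qubit) to one index; a in {0, 1} picks the qubit."""
    return 2 * ((x % L) * L * L + (y % L) * L + (z % L)) + a

# Corner labels (relative corner -> two-qubit Pauli), assuming the polynomial
# presentation f = 1+x+y+z, g = 1+xy+yz+zx of the code (a convention choice).
Z_CUBE = {(0,0,0): 'ZZ', (1,0,0): 'ZI', (0,1,0): 'ZI', (0,0,1): 'ZI',
          (1,1,0): 'IZ', (1,0,1): 'IZ', (0,1,1): 'IZ', (1,1,1): 'II'}
X_CUBE = {(1,1,1): 'XX', (1,1,0): 'IX', (1,0,1): 'IX', (0,1,1): 'IX',
          (1,0,0): 'XI', (0,1,0): 'XI', (0,0,1): 'XI', (0,0,0): 'II'}

def stabilizer(cube, corner_labels):
    """Binary symplectic vector (x-part | z-part) of one cube stabilizer."""
    v = np.zeros(2 * N, dtype=np.int64)
    cx, cy, cz = cube
    for (dx, dy, dz), label in corner_labels.items():
        for a, p in enumerate(label):
            q = qubit_index(cx + dx, cy + dy, cz + dz, a)
            if p == 'X': v[q] ^= 1
            if p == 'Z': v[N + q] ^= 1
    return v

def commute(u, v):
    """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
    return (u[:N] @ v[N:] + u[N:] @ v[:N]) % 2 == 0

# By translation invariance it is enough to test one generator of each type
# against all 2 L^3 generators.
gens = [stabilizer(c, t) for c in product(range(L), repeat=3)
        for t in (Z_CUBE, X_CUBE)]
assert all(commute(ref, g) for ref in gens[:2] for g in gens)
print("all", len(gens), "cube stabilizers commute")
```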
Stabilizer code Hamiltonians with TQO: previous work
• 2D toric code and surface codes [Kitaev 97]
• 2D surface codes with twists [Bombin 2010]
• 2D topological color codes [Bombin and Martin-Delgado 2006]
• 3D toric code [Castelnovo, Chamon 2007]
• 3D topological spin glass model [Chamon 2005]
• 3D models with membrane condensation [Hamma, Zanardi, Wen 2004; Bombin, Martin-Delgado 2007]
• 4D toric code [Alicki, Horodecki^3 2008] (the only example of quantum self-correction)
Storage: Markovian master equation
The master equation must be local, trace preserving, and completely positive. Evolution starts from a ground state of H. Lindblad operators L_k act on O(1) qubits and have norm O(1). Each qubit is acted on by O(1) Lindblad operators.

Davies weak coupling limit:

[Figure: a heat bath coupled to the memory system.]

A Lindblad operator L_{k,ω} transfers energy ω from the system to the bath (quantum jump). The spectral density h(k,ω) obeys detailed balance:

h(k,-ω) = e^{-βω} h(k,ω)
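For intuition about dynamics obeying detailed balance, here is a minimal sketch (not the Davies generator of the talk) using Metropolis rates, which also satisfy detailed balance, applied to the toy repetition-code memory from above; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_change(spins, i):
    """Energy change of H = -sum_j Z_j Z_{j+1} (ring) when spin i is flipped."""
    n = len(spins)
    return 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])

def metropolis_step(spins, beta):
    """Single-spin-flip attempt; acceptance min(1, exp(-beta*dE)) obeys
    detailed balance with respect to the Gibbs state at inverse temperature beta."""
    i = rng.integers(len(spins))
    dE = energy_change(spins, i)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i] *= -1

spins = np.ones(64, dtype=int)   # encoded "logical 0" of the repetition code
for _ in range(10_000):
    metropolis_step(spins, 2.0)  # beta = 2.0, an illustrative value
print("fraction of flipped spins:", (spins == -1).mean())
```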
Decoding
Syndrome measurement: perform a non-destructive eigenvalue measurement for each stabilizer G_a. A list of all measured eigenvalues is called a syndrome.

measured syndrome s → error correction algorithm → correcting Pauli operator P(s)

The net action of the decoder:

Φ_dec(ρ) = Σ_s P(s) Π_s ρ Π_s P(s)†

where Π_s is the projector onto the subspace with syndrome s.

Defect = spatial location of a violated stabilizer.
Defect diagrams will be used to represent syndromes.
Example:
2D surface code: a qubit sits on an edge shared by four squares, labeled 1, 2 (top) and 4, 3 (bottom).

[Figure: an X error creates defects at squares 1 and 3; a Z error creates defects at squares 2 and 4.]

The decoder's task is to annihilate the defects in a way that is most likely to return the memory to its original state.
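A minimal sketch (mine, not from the talk) of the syndrome map for X errors on a toric code with Z-type plaquette checks; the lattice indexing conventions are assumptions chosen for brevity.

```python
import numpy as np

L = 4
# X errors on the edges of an L x L torus: h[i, j] horizontal, v[i, j] vertical.
h = np.zeros((L, L), dtype=np.uint8)
v = np.zeros((L, L), dtype=np.uint8)

def plaquette_syndrome(h, v):
    """Violated Z-plaquettes. Plaquette (i, j) is bounded by horizontal edges
    h[i, j], h[i+1, j] and vertical edges v[i, j], v[i, j+1] (periodic)."""
    return (h + np.roll(h, -1, axis=0) + v + np.roll(v, -1, axis=1)) % 2

# A short string of X errors creates a defect pair at its endpoints:
h[0, 0] = h[1, 0] = h[2, 0] = 1
print(np.argwhere(plaquette_syndrome(h, v) == 1))   # -> plaquettes (2,0), (3,0)
```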
Renormalization Group (RG) decoder*
Measured syndrome: [figure: five connected defect clusters, labeled 1 through 5]

1. Find connected defect clusters.
2. For each connected cluster C: find the minimum enclosing box b(C); try to annihilate C by a Pauli operator acting inside b(C); record the annihilation operator.
3. Stop if no defects are left.
4. Increase the unit of length by a factor of 2.
5. Go to the first step.

*J. Harrington, PhD thesis (2004); Duclos-Cianci and Poulin (2009)
RG decoder
[Figure: syndrome after the 1st iteration; two connected defect clusters remain, and steps 1 and 2 are repeated at the doubled length scale.]
RG decoder
The decoder stops when all defects have been annihilated, or when the unit of length has reached the lattice size. The correcting operator is chosen as the product of all recorded annihilation operators.
Failure 1: decoder has reached the maximum unit of length,
but some defects are left.
Failure 2: all defects have been annihilated but the
correcting operator does not return the system
to the original state.
The RG decoder can be implemented in time poly(L).
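The control flow of the decoder can be sketched as follows (an illustrative skeleton, not the authors' implementation); the code-specific annihilation subroutine is stubbed out, and Chebyshev distance is an assumed choice for cluster linking.

```python
from itertools import combinations

def connected_clusters(defects, scale):
    """Union-find clustering: defects within Chebyshev distance `scale` are linked."""
    parent = list(range(len(defects)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(defects)), 2):
        if max(abs(a - b) for a, b in zip(defects[i], defects[j])) <= scale:
            parent[find(i)] = find(j)
    groups = {}
    for i, d in enumerate(defects):
        groups.setdefault(find(i), []).append(d)
    return list(groups.values())

def try_annihilate(cluster, box):
    """Stub: search for a Pauli supported inside the enclosing box that
    annihilates the cluster; returns the operator or None. (Code-specific.)"""
    return None

def rg_decode(defects, L):
    """RG decoder skeleton: cluster, try local annihilation, double the scale."""
    correcting_ops, scale = [], 1
    while defects and scale <= L:
        remaining = []
        for cluster in connected_clusters(defects, scale):
            lo = tuple(min(c) for c in zip(*cluster))   # minimum enclosing box
            hi = tuple(max(c) for c in zip(*cluster))
            op = try_annihilate(cluster, (lo, hi))
            if op is None:
                remaining.extend(cluster)               # keep for the next scale
            else:
                correcting_ops.append(op)
        defects, scale = remaining, 2 * scale
    return correcting_ops, defects   # nonempty defects on return = failure 1
```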
Main goal for this talk:
Derive an upper bound on the worst-case storage error ε(t):

initial ground state → Lindblad evolution for time t → RG decoder

ε(t) = deviation of the decoded state from the initial ground state, maximized over initial ground states.
Theorem 1
The storage error of the 3D Cubic Code decays polynomially with the lattice size L, with the degree of the polynomial proportional to β (c a constant):

ε(t) ≤ O(t) · L^{-cβ}

However, the lattice size cannot be too large: L < L* ≈ exp(β).

If we are willing to tolerate error ε, then the memory time is at least

T_mem ≥ ε · L^{cβ}

The optimal memory time at a fixed temperature (take L ≈ L*) is exponential in β²: T_mem ≈ exp(cβ²).

The theorem only provides a lower bound on the memory time. Is this bound tight?
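A quick numeric illustration of how these scalings fit together (the constant c and the relation L* = exp(β) are schematic placeholders, not values from the paper):

```python
import numpy as np

# Schematic check: with T_mem(L) ~ L**(c*beta) valid up to L* ~ exp(beta),
# the best achievable memory time is T_mem(L*) ~ exp(c*beta**2).
c = 0.2   # placeholder constant
for beta in (2.0, 4.0, 8.0):
    L_star = np.exp(beta)
    best_T = L_star ** (c * beta)
    print(f"beta={beta}: log T_mem / beta^2 = {np.log(best_T) / beta**2:.3f}")
# Prints the constant c every time: log T_mem grows quadratically in beta.
```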
Monte-Carlo simulation
We estimate P_success(t), the probability of successful decoding of the time-evolved state at time t, and observe exponential decay P_success(t) ≈ exp(-t / T_mem), which yields a numerical estimate of the memory time T_mem.

[Figure: log(memory time) vs linear lattice size for the 3D Cubic Code at β = 4.3, 4.5, 4.7, 4.9, 5.1, 5.25. Insets: the exponent of the power law as a function of β, and the optimal lattice size log(L*) as a function of β.]

Each data point = 400 Monte Carlo samples with fixed L and β; roughly 1,000 CPU-days on Blue Gene P.

[Figure: numerical test of the scaling.]
Main theorem: sketch of the proof
Some terminology
An error path implementing a Pauli operator P is a finite
sequence of single-qubit Pauli errors whose combined action
coincides with P.
Energy cost of an error path = the maximum number of defects present at any step along the path.

[Figure: errors P_1, P_2, …, P_t applied to the vacuum one by one, with defects created and annihilated along the way.]

The energy barrier of a Pauli operator P is the smallest integer m such that P can be implemented by an error path with energy cost m.
Basic intuition behind self-correction: the thermal noise is likely to generate only errors with a small energy barrier, so the decoder must be able to correct them. Errors with a high energy barrier can potentially confuse the decoder; however, such errors are unlikely to appear.
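To illustrate the definition with a toy computation (mine, not from the talk): on the 2D toric code, a logical X can be implemented by flipping the edges of a noncontractible loop one at a time, and the defect count never exceeds 2. The energy barrier is therefore constant in L, which is the mechanism behind the 2D no-go results cited earlier. Conventions match the earlier syndrome sketch.

```python
import numpy as np

L = 8
h = np.zeros((L, L), dtype=np.uint8)    # X errors on horizontal edges only

def n_defects(h):
    """Number of violated Z-plaquettes; plaquette (i, j) sees the horizontal
    edges in rows i and i+1 of column j (vertical edges carry no error here)."""
    return int(((h + np.roll(h, -1, axis=0)) % 2).sum())

# Error path implementing a logical X: flip one column of horizontal edges,
# one edge at a time, winding all the way around the torus.
costs = []
for i in range(L):
    h[i, 0] ^= 1
    costs.append(n_defects(h))
print(costs)   # [2, 2, ..., 2, 0]: at most 2 defects at any time, so the
               # energy barrier of this logical operator does not grow with L.
```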
Lemma (storage error)
Suppose the decoder corrects all errors whose energy barrier is
smaller than m. Then for any constant 0<a<1 one has
ε(t) ≤ O(t) · A(N, m) · e^{-aβm},   N = # physical qubits, k = # logical qubits.

Here e^{-aβm} is the Boltzmann factor and A(N, m) is an entropy factor counting the errors that can climb the barrier (schematically, growing with N and m).

Suppose we choose m ∝ log(L) and β large enough that the Boltzmann factor dominates the entropy factor. Then the entropy factor can be neglected:

ε(t) ≲ t · e^{-aβm}

In order to have a non-trivial bound, we need at least a logarithmic energy barrier for all uncorrectable errors: m ≥ c log(L).
More terminology [Haah 2011]
A logical string segment is a Pauli operator whose action
on the vacuum creates two well-separated clusters of defects.
[Figure: a logical string segment and its two defect clusters.]
The smallest cubic boxes enclosing the two clusters of defects are called anchors.
More terminology
A logical string segment is trivial iff its action on the vacuum can be reproduced by operators localized near the anchors.

[Figure: a trivial string segment.]

No-strings rule: there exists a constant α such that any logical string segment with aspect ratio > α is trivial, where

aspect ratio = (distance between the anchors) / (size of the anchors)
3D Cubic Code obeys the no-strings rule with α=15 [Haah 2011]
No 2D stabilizer code obeys the no-strings rule [S.B., Terhal 09]
Theorem 2
Consider any topological stabilizer code Hamiltonian on a D-dimensional lattice of linear size L. Suppose the code has TQO and obeys the no-strings rule with some constant α. Then the RG decoder corrects any error with energy barrier at most c log(L). The constant c depends only on α and D.

Haah's 3D Cubic Code: α = 15.

Recall that errors with energy barrier > c log(L) are exponentially suppressed by the Boltzmann factor. Together with the storage-error lemma, this proves Theorem 1.
Sketch of the proof: logarithmic lower bound on
the energy barrier of logical operators
Idea 1: the no-strings rule implies `localization' of errors.

A stream of single-qubit errors E_1, E_2, E_3, …, E_100 takes the system through intermediate syndromes S, S_1, S_2, …, S'. The accumulated error E = E_1 E_2 · · · E_100 could be very non-local.

Suppose however that all intermediate syndromes are sparse: the distance between any pair of defects is >> α. A stream of local errors then cannot move isolated topologically charged defects more than distance α away (the no-strings rule).

Localization: E = E_loc · G, where G is a stabilizer and E_loc is supported on the α-neighborhood of the initial and final syndromes S and S'.

Conversely, in order for the accumulated error to have a large weight, at least one of the intermediate syndromes must be non-sparse (dense).
Idea 2: scale invariance and RG methods
A stream of local errors cannot move an isolated charged cluster of defects of size R by a distance greater than αR. In order for the accumulated error to have a large weight, at least one of the intermediate syndromes must be non-sparse (dense).

1. Define sparseness and denseness at different spatial scales.
2. Show that in order for the accumulated error to have a REALLY large weight (of order L), at least one intermediate syndrome must be dense at roughly log(L) spatial scales.
3. Show that a syndrome which is dense at all spatial scales must contain at least c log(L) defects.
Definition: a syndrome S is called sparse at level p if it can be partitioned into disjoint clusters of defects such that
1. each cluster has diameter at most r(p) = (10α)^p, and
2. any pair of clusters merged together has diameter greater than r(p+1).
Otherwise, the syndrome is called dense at level p.
Lemma (Dense syndromes are expensive)
Suppose a syndrome S is dense at all levels 0,…,p.
Then S contains at least p+2 defects.
[Figure: an example syndrome with defects clustered at levels 0, 1, 2, 3, 4, …, p, sparse at each level.]
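A small worked example (mine, not from the slides): consider a syndrome consisting of exactly two defects at distance d. The only candidate partitions are a single cluster of two, valid at level p iff d ≤ r(p), or two singleton clusters, valid iff d > r(p+1). Hence

  sparse at level p  ⇔  d ≤ r(p) or d > r(p+1)
  dense at level p   ⇔  r(p) < d ≤ r(p+1)

Since the intervals (r(p), r(p+1)] are disjoint, a two-defect syndrome is dense at one level at most. This is consistent with the lemma: being dense at all levels 0, …, p requires at least p + 2 defects, so two defects force p = 0.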
Renormalization group method
We are given an error path implementing a logical operator P
which maps a ground state to an orthogonal ground state.
Record intermediate syndrome after each step in the path.
[Figure: level-0 syndrome history, plotted as RG level vs time; 0 = vacuum, S = sparse syndrome, D = dense syndrome.]

Recording the intermediate syndrome after each step defines the level-0 syndrome history: consecutive syndromes are related by single-qubit errors, and each syndrome is labeled sparse (S) or dense (D).
Renormalization group method

[Figure: level-1 syndrome history (RG level vs time); 0 = vacuum, S = sparse, D = dense.]

The level-1 syndrome history keeps only the syndromes that are dense at level 0. Level-1 errors (products of consecutive level-0 errors) connect consecutive syndromes in the level-1 history. Use level-1 sparsity to label the level-1 syndromes as sparse and dense.

[Figure: level-2 syndrome history.]

Likewise, the level-2 syndrome history keeps only the syndromes that are dense at level 1; level-2 errors connect consecutive syndromes in the level-2 history. Use level-2 sparsity to label the level-2 syndromes as sparse and dense.
Renormalization group method
[Figure: syndrome history at the highest RG level p_max; 0 = vacuum, S = sparse, D = dense.]

At the highest RG level the syndrome history has no intermediate syndromes: a single level-p_max error implements the logical operator.
Key technical result: Localization of level-p errors
[Figure: localizing level-p errors along the syndrome history.]

The no-strings rule can be used to `localize' level-p errors by multiplying them by stabilizers: a localized level-p error connecting syndromes S and S' acts on the r(p)-neighborhood of S and S'.
Localization of level-p errors
[Figure: the top of the RG hierarchy.]

TQO implies that r(p_max) > L, since any logical operator must be very non-local. Hence (10α)^{p_max} > L, so p_max ≥ log(L) / log(10α) = Ω(log L). At least one syndrome must be dense at all levels 0, …, p_max; by the lemma above, such a syndrome contains at least p_max + 2 ≥ c log(L) defects, which is the desired logarithmic energy barrier.
Conclusions
The 3D Cubic Code Hamiltonian provides the first example of a partially self-correcting quantum memory.
Memory time of the encoded qubit(s) grows polynomially with
the lattice size. The degree of the polynomial is proportional
to the inverse temperature β.
The lattice size cannot be too big: L< L* ≈ exp(β).
For a fixed temperature the optimal memory time is roughly exp(β²).