CO 781: Topics in Quantum Information

Debbie Leung

These notes synthesize material from two offerings of CO 781 (Topics in Quantum Information) by Debbie Leung at the University of Waterloo, both centred on quantum error correction and fault-tolerant quantum computation. The exposition draws on the standard public literature: Nielsen and Chuang’s Quantum Computation and Quantum Information (Cambridge UP, Chapter 10); Daniel Gottesman’s pedagogical survey An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation (arXiv:0904.2557) and his PhD thesis Stabilizer Codes and Quantum Error Correction (arXiv:quant-ph/9705052); Kitaev’s foundational paper Fault-tolerant quantum computation by anyons (arXiv:quant-ph/9707021); Fowler, Mariantoni, Martinis and Cleland, Surface codes: Towards practical large-scale quantum computation (Phys. Rev. A 86; arXiv:1208.0928); Terhal’s review Quantum error correction for quantum memories (Rev. Mod. Phys. 87; arXiv:1302.3428); and Watrous, The Theory of Quantum Information (Cambridge UP; public draft). Supplementary material is drawn from public lecture notes available online.


Chapter 1: Quantum Error Correction

Taught at UW as CO 781 in Winter 2022 and Winter 2024 by Debbie Leung.

The story of quantum error correction begins with a paradox. A qubit lives in a continuous state space, and unlike a classical bit it cannot be copied; the no-cloning theorem forbids the redundancy on which classical error correction is built. Errors on quantum systems form a continuum and include not only bit flips but also relative-phase errors and arbitrary unitary perturbations. In the period surrounding the discovery of Shor’s algorithm it was widely believed that this combination of fragility and constraint placed quantum information beyond the reach of any practical error correction. The discovery in 1995 of the nine-qubit code by Peter Shor, followed within a year by the seven-qubit code of Steane and soon after by the systematic stabilizer formalism of Gottesman, dispelled that pessimism. The aim of this chapter is to develop the core ideas — the noise model, the Knill–Laflamme conditions, the first explicit codes, the stabilizer formalism with its CSS subfamily, syndrome extraction and decoding, and finally the topological codes which dominate contemporary engineering.

1.1 Noise Models for Quantum Information

Quantum noise is described in the language of completely positive trace-preserving (CPTP) maps. Any physical evolution of a system \(A\) interacting with an environment \(E\), followed by tracing out \(E\), has a Kraus representation

\[ \mathcal{N}(\rho) = \sum_k E_k \rho E_k^{\dagger}, \qquad \sum_k E_k^{\dagger} E_k = I. \]

For qubit systems the Pauli operators \(\left\{I, X, Y, Z\right\}\) span the operator algebra and provide a discrete basis in which to expand the Kraus operators of an arbitrary \(\mathcal{N}\). It is therefore no loss of generality to design codes that correct only the Pauli errors \(\left\{X_j, Y_j, Z_j\right\}\) on each qubit \(j\); the linearity of quantum mechanics promotes correction of these basis errors to correction of any error in their span.

Definition 1.1 (Pauli channel). The single-qubit Pauli channel with parameters \(p_X, p_Y, p_Z \ge 0\) and \(p_I = 1 - p_X - p_Y - p_Z\) is \[ \mathcal{N}(\rho) = p_I\, \rho + p_X\, X\rho X + p_Y\, Y\rho Y + p_Z\, Z\rho Z. \]

The depolarizing channel with strength \(p\) corresponds to \(p_X = p_Y = p_Z = p/3\), so that \(\mathcal{N}(\rho) = (1-p)\rho + (p/3)(X\rho X + Y\rho Y + Z\rho Z)\).
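
As a concrete illustration, here is a minimal numpy sketch of Definition 1.1 in action; the helper names pauli_channel and depolarizing are illustrative, not a standard API.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_channel(rho, pX, pY, pZ):
    """Apply the Pauli channel of Definition 1.1 to a density matrix rho."""
    pI = 1 - pX - pY - pZ
    return (pI * rho + pX * X @ rho @ X
                     + pY * Y @ rho @ Y
                     + pZ * Z @ rho @ Z)

def depolarizing(rho, p):
    """Depolarizing channel of strength p: pX = pY = pZ = p / 3."""
    return pauli_channel(rho, p / 3, p / 3, p / 3)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
print(depolarizing(rho0, 0.3))                      # diag(0.8, 0.2)
```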

For \(n\) qubits the noise is most often modelled as independent and identically distributed: the global channel acts as \(\mathcal{N}^{\otimes n}\). This i.i.d. assumption is what gives meaning to the notion of a code’s distance: an error of weight \(w\) (a tensor product of non-identity Paulis on \(w\) qubits and identities elsewhere) occurs with probability \(O(p^w)\), so correcting all errors of weight \(\le t\) suppresses the residual logical error probability to \(O(p^{t+1})\).

Remark 1.2 (Discretization). Although physical errors form a continuum, only the discrete Pauli basis need be corrected. If \(E = \sum_a c_a P_a\) is an arbitrary error written in the Pauli basis and a code corrects each \(P_a\) in some set, then the error \(E\) is corrected by the same recovery, with the syndrome measurement collapsing the superposition onto one of the Paulis. This digitization of errors is the deepest reason that finite-dimensional quantum error correction is even possible.
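
To see this digitization numerically, one can expand a small coherent rotation in the Pauli basis via the Hilbert–Schmidt inner product \(c_P = \mathrm{Tr}(P^{\dagger} E)/2\). The sketch below (plain numpy, illustrative only) shows that \(e^{-i\theta X/2}\) has components along \(I\) and \(X\) alone, so any code correcting \(X\) also corrects this continuous rotation.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A small coherent over-rotation: E = exp(-i theta X / 2).
theta = 0.1
E = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

# Expand E in the Pauli basis: c_P = Tr(P† E) / 2.
for name, P in [("I", I), ("X", X), ("Y", Y), ("Z", Z)]:
    c = np.trace(P.conj().T @ E) / 2
    print(name, complex(np.round(c, 4)))
# Only c_I = cos(theta/2) and c_X = -i sin(theta/2) are nonzero.
```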

1.2 The Knill–Laflamme Conditions

A quantum code \(\mathcal{C} \subseteq (\mathbb{C}^2)^{\otimes n}\) is a subspace into which a smaller logical Hilbert space is embedded. We write \(P_{\mathcal{C}}\) for the projector onto \(\mathcal{C}\). Let \(\mathcal{E} = \left\{E_a\right\}\) be a set of error operators we wish to correct.

Theorem 1.3 (Knill–Laflamme conditions). The errors \(\mathcal{E}\) are correctable on \(\mathcal{C}\) if and only if there exists a Hermitian matrix \(\left[c_{ab}\right]\) with \[ P_{\mathcal{C}}\, E_a^{\dagger} E_b\, P_{\mathcal{C}} = c_{ab}\, P_{\mathcal{C}} \]

for all \(a, b\). The code is non-degenerate when \(\left[c_{ab}\right]\) has full rank and degenerate when it does not.

The condition has a transparent meaning. Distinct errors must take the codespace into mutually orthogonal subspaces — only then can a syndrome measurement distinguish them — but they may also act identically on the codespace, in which case they do not need to be told apart at all. The latter possibility, which has no classical analogue, is the phenomenon of degeneracy. Shor’s nine-qubit code is degenerate against single-qubit \(Z\) errors precisely because pairs of \(Z\) errors within a triple act trivially on the encoded state.
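
The conditions are straightforward to verify numerically for small codes. The following sketch checks them for the three-qubit bit-flip code (codespace spanned by \(|000\rangle\) and \(|111\rangle\); an assumption of this example rather than a code introduced above) against the error set \(\{I, X_1, X_2, X_3\}\).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Three-qubit bit-flip code: codespace spanned by |000> and |111>.
ket000 = np.zeros(8, dtype=complex); ket000[0] = 1
ket111 = np.zeros(8, dtype=complex); ket111[7] = 1
P = np.outer(ket000, ket000.conj()) + np.outer(ket111, ket111.conj())

errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# Check P E_a† E_b P = c_ab P for every pair, reading c_ab off a matrix element.
for a, Ea in enumerate(errors):
    for b, Eb in enumerate(errors):
        M = P @ Ea.conj().T @ Eb @ P
        c = ket000.conj() @ M @ ket000        # c_ab = <000| M |000>
        assert np.allclose(M, c * P), (a, b)
print("Knill-Laflamme holds; here c_ab = delta_ab, so the code is non-degenerate.")
```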

Definition 1.4 (Distance). The distance \(d\) of a code is the smallest weight of a Pauli operator that maps the codespace into itself non-trivially (i.e. acts as a non-scalar on \(\mathcal{C}\)). A code with parameters \(\left[\!\left[n,k,d\right]\!\right]\) encodes \(k\) logical qubits into \(n\) physical qubits with distance \(d\), and corrects all errors of weight at most \(t = \lfloor (d-1)/2 \rfloor\).

1.3 The Nine-Qubit Shor Code

Shor’s construction of 1995 was the first explicit demonstration that arbitrary single-qubit errors could be corrected. The idea is to concatenate two classical repetition codes — one for bit flips and one for phase flips — using nine physical qubits per logical qubit.

Example 1.5 (Shor's \(\left[\!\left[9,1,3\right]\!\right]\) code). The logical states are \[ |0_L\rangle = \frac{1}{2\sqrt{2}}\bigl(|000\rangle+|111\rangle\bigr)\bigl(|000\rangle+|111\rangle\bigr)\bigl(|000\rangle+|111\rangle\bigr), \]\[ |1_L\rangle = \frac{1}{2\sqrt{2}}\bigl(|000\rangle-|111\rangle\bigr)\bigl(|000\rangle-|111\rangle\bigr)\bigl(|000\rangle-|111\rangle\bigr). \]

Within each block of three the code is a bit-flip repetition code, and the three blocks together form an outer phase-flip repetition code (a bit-flip code conjugated by Hadamards). A single \(X\) error inside a block is detected by comparing parities of pairs within that block; a single \(Z\) error is detected by comparing the relative signs of the three blocks.

The Shor code already exhibits all the features that pervade the subject: redundant physical qubits encoding a smaller logical space, syndrome extraction by parity measurements that do not disturb the encoded information, and degeneracy in the form of phase errors within a block.

1.4 The Stabilizer Formalism

Shor’s code, the Steane code, and most other useful quantum codes are stabilizer codes, a unifying framework introduced by Gottesman in his 1997 thesis. The Pauli group on \(n\) qubits is

\[ \mathcal{P}_n = \left\{ \alpha\, P_1 \otimes \cdots \otimes P_n : \alpha \in \left\{\pm 1, \pm i\right\},\ P_j \in \left\{I,X,Y,Z\right\} \right\}. \]
Definition 1.6 (Stabilizer code). A stabilizer \(\mathcal{S}\) is an Abelian subgroup of \(\mathcal{P}_n\) not containing \(-I\). The associated stabilizer code is the simultaneous \(+1\) eigenspace \[ \mathcal{C}(\mathcal{S}) = \left\{ |\psi\rangle : g|\psi\rangle = |\psi\rangle\ \forall g \in \mathcal{S} \right\}. \]

If \(\mathcal{S}\) has \(n-k\) independent generators, then \(\dim \mathcal{C}(\mathcal{S}) = 2^k\).

The logical operators of the code are elements of the normalizer \(N(\mathcal{S})\) of \(\mathcal{S}\) in \(\mathcal{P}_n\), modulo \(\mathcal{S}\) itself. Distance is the minimum weight over \(N(\mathcal{S}) \setminus \mathcal{S}\). Error correction proceeds by measuring each generator \(g_i\); the measurement is non-destructive on the codespace, but reveals a syndrome bit recording the \(\pm 1\) eigenvalue of \(g_i\) on the corrupted state. The collection of syndrome bits localizes which Pauli error has occurred, up to multiplication by stabilizers.

Remark 1.7 (Symplectic representation). Modulo overall phases, an \(n\)-qubit Pauli is represented by a binary vector \((x|z) \in \mathbb{F}_2^{2n}\), where the \(j\)-th \(X\)-bit indicates an \(X\) factor and the \(j\)-th \(Z\)-bit a \(Z\) factor (a \(Y\) corresponds to both). Two Paulis commute iff their representatives are orthogonal under the symplectic form \(\omega((x|z),(x'|z')) = x \cdot z' + x' \cdot z \pmod 2\). A stabilizer code is therefore an isotropic subspace of \(\mathbb{F}_2^{2n}\), and the algebraic theory of quantum codes reduces to symplectic linear algebra over \(\mathbb{F}_2\).
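
A minimal sketch of this representation (the helper symplectic_product is ours) verifies the commutation rule on two-qubit examples.

```python
import numpy as np

def symplectic_product(v, w):
    """omega((x|z), (x'|z')) = x . z' + x' . z (mod 2); 1 iff they anticommute."""
    n = len(v) // 2
    return (np.dot(v[:n], w[n:]) + np.dot(w[:n], v[n:])) % 2

# Two-qubit Paulis in (x|z) form.
X1 = np.array([1, 0, 0, 0])     # X on qubit 1
Z1 = np.array([0, 0, 1, 0])     # Z on qubit 1
Z2 = np.array([0, 0, 0, 1])     # Z on qubit 2
Y1 = np.array([1, 0, 1, 0])     # Y on qubit 1: both an X-bit and a Z-bit

print(int(symplectic_product(X1, Z1)))   # 1: anticommute on the same qubit
print(int(symplectic_product(X1, Z2)))   # 0: disjoint supports commute
print(int(symplectic_product(X1, Y1)))   # 1: X and Y on the same qubit anticommute
```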

1.5 CSS Codes and the Steane Code

A particularly clean subclass arises when the stabilizer can be split into pure-\(X\) and pure-\(Z\) generators.

Definition 1.8 (CSS code). Given two classical binary linear codes \(C_1, C_2 \subseteq \mathbb{F}_2^n\) with \(C_2 \subseteq C_1\), the CSS code \(\mathrm{CSS}(C_1, C_2)\) has stabilizer generated by \(X\)-type operators corresponding to a basis of \(C_2\) and \(Z\)-type operators corresponding to a basis of \(C_1^{\perp}\). Its parameters are \(\left[\!\left[n, \dim C_1 - \dim C_2, d\right]\!\right]\) where \(d\) is the minimum weight of a vector in \(\bigl(C_1 \setminus C_2\bigr) \cup \bigl(C_2^{\perp} \setminus C_1^{\perp}\bigr)\).

CSS codes correct \(X\) and \(Z\) errors independently using the syndrome of the underlying classical codes. Steane’s seven-qubit code is the CSS code obtained from the \(\left[7,4,3\right]\) Hamming code paired with itself; it has parameters \(\left[\!\left[7,1,3\right]\!\right]\) and is the smallest distance-three CSS code.

Example 1.9 (Steane \(\left[\!\left[7,1,3\right]\!\right]\)). With the parity-check matrix of the \(\left[7,4,3\right]\) Hamming code \[ H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}, \]

the stabilizer of the Steane code consists of three \(X\)-type generators with support pattern given by the rows of \(H\) and three \(Z\)-type generators with the same supports. The logical \(\overline{X}\) and \(\overline{Z}\) operators can be taken to be \(X^{\otimes 7}\) and \(Z^{\otimes 7}\) respectively; both have weight seven, but each logical coset contains representatives of weight three, matching the code distance.
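
The Steane generators can be assembled directly from \(H\) in the symplectic representation of Remark 1.7. The sketch below (plain numpy; the helper omega is illustrative) checks that the six generators commute pairwise and that the syndrome of a single \(Z\) error reads off the binary column label of the afflicted qubit.

```python
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code (rows as in Example 1.9).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Six Steane generators as (x|z) binary vectors: X-type rows, then Z-type rows.
zeros = np.zeros_like(H)
gens = np.vstack([np.hstack([H, zeros]),     # X-type generators
                  np.hstack([zeros, H])])    # Z-type generators

def omega(u, w, n=7):
    """Symplectic form of Remark 1.7: 1 iff the two Paulis anticommute."""
    return (u[:n] @ w[n:] + w[:n] @ u[n:]) % 2

# The stabilizer is Abelian because H @ H.T = 0 (mod 2): the Hamming code
# contains its own dual.
assert all(omega(g1, g2) == 0 for g1 in gens for g2 in gens)

# A single Z error on qubit 4 (0-indexed qubit 3) anticommutes only with the
# X-type checks whose rows have a 1 in column 3.
err = np.zeros(14, dtype=int)
err[7 + 3] = 1                                     # Z on qubit index 3
print([int(omega(g, err)) for g in gens])          # [1, 0, 0, 0, 0, 0]
```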

The Steane code’s structure makes the entire Clifford group transversal — applying \(H\) bitwise on all seven qubits implements logical \(\overline{H}\), bitwise \(S\) implements \(\overline{S}^{\dagger}\) (equally useful, since its inverse is also Clifford), and bitwise CNOT between blocks implements \(\overline{\mathrm{CNOT}}\). This is the source of its enduring popularity in pedagogical and theoretical work.

1.6 Syndrome Measurement and Decoding

Extracting a syndrome means measuring each stabilizer generator without learning anything about the logical state. The standard circuit uses an ancilla qubit prepared in \(|+\rangle\), a sequence of controlled Paulis from the ancilla to each qubit in the support of the generator, and a final \(X\)-basis measurement. The ancilla’s outcome is the eigenvalue of the generator on the data.

Definition 1.10 (Decoder). A decoder for a stabilizer code is an algorithm that takes a measured syndrome \(s \in \mathbb{F}_2^{n-k}\) and returns a Pauli correction \(R(s) \in \mathcal{P}_n\) such that, for the most likely error \(E\) producing syndrome \(s\), the product \(R(s)E\) lies in the stabilizer.

Optimal (maximum-likelihood) decoding is generally hard. For the surface code, most-likely-error decoding under independent noise reduces to a minimum-weight perfect matching problem on a graph whose vertices are the violated stabilizer generators (the defects), solvable in polynomial time by Edmonds’ blossom algorithm. For more general codes one uses belief propagation, neural decoders, or tensor-network contraction; the choice of decoder is now a major engineering issue in quantum hardware.
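
As a toy illustration of matching-based decoding, the sketch below uses networkx's general maximum-weight matching with negated weights as a stand-in for a tuned blossom implementation; the defect positions are invented, and a real surface-code decoder would also add boundary nodes and spacetime edges for measurement errors.

```python
import networkx as nx
from itertools import combinations

# Violated parity checks ("defects") along a 1-D repetition code. Pairing
# them at minimum total distance recovers the most likely error chain.
defects = [1, 2, 6, 7]

G = nx.Graph()
for u, v in combinations(defects, 2):
    # networkx maximizes weight, so negate distances to get a MIN-weight matching.
    G.add_edge(u, v, weight=-abs(u - v))

matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in matching))   # [(1, 2), (6, 7)]
```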

Remark 1.11 (Measurement noise). A realistic syndrome measurement is itself faulty — the ancilla qubit may suffer errors during the parity readout, producing a wrong syndrome bit. Robust schemes (Shor-style cat-state ancillas, Steane-style code-state ancillas, Knill-style teleportation, and modern flag qubits) repeat the measurement enough times to detect such faults. This is one of the main subjects of Chapter 2.

1.7 Topological Codes: the Toric and Surface Codes

In 1997 Kitaev introduced a class of stabilizer codes whose generators are geometrically local — they act only on small clusters of nearby qubits arranged on a two-dimensional lattice. These topological codes have proved extraordinarily well-suited to physical implementation.

Definition 1.12 (Toric code). Place qubits on the edges of an \(L \times L\) square lattice with periodic (torus) boundary conditions. For each vertex \(v\) define the star operator \(A_v = \prod_{e \ni v} X_e\), and for each face \(f\) the plaquette operator \(B_f = \prod_{e \in \partial f} Z_e\). The stabilizer is generated by all \(A_v\) and \(B_f\); the resulting code has parameters \(\left[\!\left[2L^2, 2, L\right]\!\right]\).

The two logical qubits are associated with the two non-contractible cycles of the torus: a logical \(\overline{Z}\) is a string of \(Z\)’s along a horizontal cycle, a logical \(\overline{X}\) a string of \(X\)’s along a perpendicular vertical cycle of the dual lattice. Errors create pairs of anyon excitations, and decoding is the problem of pairing them up by the shortest paths.
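
The parameter count can be verified by linear algebra over \(\mathbb{F}_2\). The sketch below (plain numpy; the edge-indexing convention and the gf2_rank helper are ours) builds every star and plaquette for \(L = 3\), checks that they all commute, and recovers \(k = 2\) from the two global relations among the \(2L^2\) generators.

```python
import numpy as np

L = 3                    # lattice size; n = 2 * L**2 qubits live on the edges
n = 2 * L * L

# Edge indexing on the torus: horizontal edge leaving vertex (r, c) to the
# right, and vertical edge leaving (r, c) downward.
def h(r, c): return (r % L) * L + (c % L)
def v(r, c): return L * L + (r % L) * L + (c % L)

stars, plaqs = [], []
for r in range(L):
    for c in range(L):
        a = np.zeros(2 * n, dtype=int)            # (x|z) symplectic vector
        for e in (h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)):
            a[e] = 1                              # star A_v: X on 4 edges at v
        stars.append(a)
        b = np.zeros(2 * n, dtype=int)
        for e in (h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)):
            b[n + e] = 1                          # plaquette B_f: Z on 4 edges
        plaqs.append(b)

def omega(u, w):
    """Symplectic form: 1 iff the two Paulis anticommute."""
    return (u[:n] @ w[n:] + w[:n] @ u[n:]) % 2

# Stars and plaquettes overlap on 0 or 2 edges, so everything commutes.
assert all(omega(a, b) == 0 for a in stars for b in plaqs)

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size == 0:
            continue
        M[[rank, rank + pivots[0]]] = M[[rank + pivots[0], rank]]
        for row in range(M.shape[0]):
            if row != rank and M[row, col]:
                M[row] = (M[row] + M[rank]) % 2
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

# Product of all stars = product of all plaquettes = identity, so the 2L^2
# generators have rank 2L^2 - 2, leaving k = n - rank = 2 logical qubits.
rank = gf2_rank(np.array(stars + plaqs))
print(n - rank)   # -> 2
```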

Example 1.13 (Surface code). Removing the toric topology and placing the lattice on a square with smooth and rough boundaries produces the planar surface code, with parameters \(\left[\!\left[d^2 + (d-1)^2, 1, d\right]\!\right]\). Each stabilizer generator acts on at most four qubits; all generators have local supports; and the threshold under independent bit-flip and phase-flip noise with perfect syndrome measurements is approximately \(p_{\mathrm{th}} \approx 10.9\%\) (Dennis–Kitaev–Landahl–Preskill, 2002), with realistic circuit-level thresholds around \(0.5\)–\(1\%\) (Fowler–Mariantoni–Martinis–Cleland, 2012).
Theorem 1.14 (Topological protection). No local operator, that is, no Pauli supported on a region of diameter strictly less than \(L\), can preserve the codespace of the toric code while acting non-trivially on it. Logical operators necessarily have weight at least \(L\) and stretch across the entire system.

The non-locality of logical operators is the geometric expression of the code’s distance, and gives the surface code its characteristic robustness: only correlated errors spanning the whole device can produce a logical fault, and such correlations are exponentially suppressed in any reasonable noise model.

Remark 1.15 (Why surface codes dominate). Three features make the surface code the platform of choice for current superconducting and ion-trap engineering: locality of stabilizer measurements (only nearest-neighbour two-qubit gates are required), high threshold under depolarizing noise, and efficient minimum-weight perfect-matching decoders. The price is a relatively poor encoding rate of \(k/n = O(1/d^2)\), which is the focus of intensive current research on quantum LDPC codes that improve on the surface code's overhead while retaining its locality and decodability advantages.

Chapter 2: Fault-Tolerant Quantum Computation

Taught at UW as CO 781 in Winter 2022 and Winter 2024 by Debbie Leung.

A quantum error correcting code is, by itself, a passive object: it tells us how to encode information so that errors which have already occurred can in principle be undone. Fault-tolerant quantum computation is the active companion theory. It asks how, given imperfect gates and imperfect measurements, we can perform a long computation on encoded information without the noise compounding into an unrecoverable error. The question is non-trivial because every gate one performs to manipulate or correct an encoded qubit is itself a source of new errors. Without care a single faulty gate can spread errors among many physical qubits — exactly what a code should not have to absorb. The theory of fault tolerance, developed by Shor, Aharonov–Ben-Or, Kitaev, Knill–Laflamme–Zurek, Preskill, Gottesman, and many others in the late 1990s and early 2000s, gives a precise framework in which arbitrarily reliable quantum computation is possible provided the physical noise is below a constant threshold.

2.1 Circuit-Level Noise Models

The first task is to extend the i.i.d. Pauli noise model from memory qubits to gates, measurements, and preparations.

Definition 2.1 (Standard circuit noise). Each location in a circuit suffers an independent fault with probability \(p\): after every single-qubit gate, a uniformly random non-identity Pauli is applied with probability \(p\); after every two-qubit gate, a uniformly random non-identity element of \(\mathcal{P}_2\) is applied with probability \(p\); each preparation of \(|0\rangle\) is replaced by \(|1\rangle\) with probability \(p\); each measurement outcome is flipped with probability \(p\); each idle qubit suffers a depolarizing fault with probability \(p\) per time step.

This local stochastic noise model is the analytic backbone of all known threshold proofs. It is more general than i.i.d. depolarizing noise: it allows correlations among faults, provided that for any designated set of \(k\) locations, the probability that faults occur at all \(k\) of them is at most \(p^k\). Realistic noise is often well-approximated by such models, and some threshold proofs go further to handle non-Markovian noise as long as it remains weakly correlated in space and time.

Remark 2.2 (The catastrophe of error spread). Consider applying a CNOT between two physical qubits within a code block. If the control qubit suffers an undetected \(X\) error before the gate, the gate copies it to the target — turning a single error into two. If we naively use a single ancilla to extract a syndrome from a weight-\(w\) generator, a single \(X\) on that ancilla can spread to all \(w\) data qubits. A code of distance \(d\) cannot help if the very act of measuring it manufactures errors of weight \(d\) or larger. Fault-tolerant gadgets are precisely the engineering devices that prevent this catastrophe.

2.2 Fault-Tolerant Gadgets

The unit of analysis in fault tolerance is the gadget: a small subcircuit that implements a logical operation (gate, syndrome extraction, preparation, or measurement) on encoded qubits in such a way that a small number of physical faults cannot corrupt the logical output.

Definition 2.3 (Fault-tolerant gadget). Let \(G\) be a logical operation acting on encoded blocks of a distance-\(d\) code, and let \(t = \lfloor (d-1)/2 \rfloor\). A circuit \(\mathcal{G}\) implementing \(G\) is fault-tolerant if, for every \(s \le t\), any \(s\) faults in \(\mathcal{G}\) acting on encoded inputs of error weight \(\le t - s\) produce an output whose error weight is \(\le t\) on each output block. In other words, \(s\) physical faults in the gadget cannot create more than \(s\) new errors per block on the output.

The simplest way to satisfy this property is by transversality.

Definition 2.4 (Transversal gate). A logical gate is transversal if it is implemented by a tensor product of physical gates, each acting on at most one qubit per code block (and one qubit per block of any other operands).
Theorem 2.5 (Transversality is fault-tolerant). Any transversal implementation of a logical operation is fault-tolerant in the sense of Definition 2.3, since a fault on a single physical gate can only produce a single new error on each output block.
Example 2.6 (Steane transversal Clifford gates). On the Steane \(\left[\!\left[7,1,3\right]\!\right]\) code, applying \(H^{\otimes 7}\) implements logical \(\overline{H}\), \(S^{\otimes 7}\) implements \(\overline{S}^{\dagger}\), and CNOT applied bitwise between two encoded blocks implements logical \(\overline{\mathrm{CNOT}}\). All Clifford gates are therefore transversal on the Steane code.

2.3 The Eastin–Knill Theorem

If we could implement every gate of a universal set transversally, fault tolerance would be straightforward. A celebrated no-go result rules this out.

Theorem 2.7 (Eastin–Knill, 2009). No nontrivial quantum error-detecting code on a finite-dimensional system admits a universal gate set implemented entirely by transversal logical operations. The set of transversal logical gates always forms a finite group.
Sketch. If a continuous family of transversal logical gates existed, its derivative at the identity would be a logical Hamiltonian acting as a sum of single-qubit terms. Each such term is a low-weight operator that detectably perturbs the codespace, contradicting either the code's error-detection property or the gate's logical action.

For the Steane code the transversal logical gate group is the Clifford group, which is far from universal — it can be efficiently simulated classically by the Gottesman–Knill theorem. To complete a universal set one must add a non-Clifford gate, conventionally the \(T = \mathrm{diag}(1, e^{i\pi/4})\) gate, and the cost of fault-tolerantly implementing it is the dominant overhead in any practical scheme.

2.4 Magic State Distillation

The most flexible non-transversal technique is gate teleportation with magic states, introduced by Gottesman–Chuang (1999) and elevated to a practical methodology by Bravyi–Kitaev (2005).

Definition 2.8 (Magic state). The \(T\)-magic state is \[ |T\rangle = \frac{1}{\sqrt{2}}\bigl(|0\rangle + e^{i\pi/4}|1\rangle\bigr) = T|+\rangle. \]

Given an encoded copy of \(|T\rangle\) and the ability to perform Clifford operations and \(Z\)-basis measurements, one can implement a logical \(T\) gate on any encoded qubit by gate teleportation.
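
A statevector sketch of this gadget (plain numpy; the function teleport_T and the device of postselecting each measurement outcome are illustrative) confirms that both outcomes reproduce \(T|\psi\rangle\) once the Clifford correction is applied.

```python
import numpy as np

w = np.exp(1j * np.pi / 4)
T = np.diag([1, w])
S = np.diag([1, 1j])

def teleport_T(psi, outcome):
    """Consume one |T> state to apply T to psi = (alpha, beta), postselecting
    the ancilla's Z-measurement outcome (0 or 1)."""
    magic = np.array([1, w]) / np.sqrt(2)
    state = np.kron(psi, magic)                 # amplitudes indexed by (data, ancilla)
    state[2], state[3] = state[3], state[2]     # CNOT: data controls, ancilla flips
    kept = state[outcome::2]                    # project ancilla onto |outcome>
    kept = kept / np.linalg.norm(kept)
    return kept if outcome == 0 else S @ kept   # outcome 1 leaves T†|psi>; S fixes it

psi = np.array([0.6, 0.8], dtype=complex)
target = T @ psi
for m in (0, 1):
    out = teleport_T(psi, m)
    phase = target[0] / out[0]                  # compare up to global phase
    assert np.allclose(phase * out, target)
print("Both outcomes yield T|psi> after the Clifford correction.")
```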

Theorem 2.9 (Bravyi–Kitaev distillation). Given many noisy copies of \(|T\rangle\) with error rate \(\epsilon\) below a constant threshold, one can produce a smaller number of copies with error rate \(O(\epsilon^3)\) using only Clifford operations and measurements. Iterating yields exponentially clean magic states at polylogarithmic overhead in the desired output fidelity.
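
Iterating the leading-order error map of the original 15-to-1 protocol, \(\epsilon \mapsto 35\epsilon^3\), makes the doubly-exponential cleanup visible.

```python
# Leading-order output error of the 15-to-1 Bravyi-Kitaev protocol is
# approximately 35 * eps^3; iterate from a 1% input error rate.
eps = 1e-2
for level in range(4):
    print(f"level {level}: eps = {eps:.3e}")
    eps = 35 * eps**3
# level 0: 1.000e-02, level 1: 3.500e-05, level 2: 1.501e-12, ...
```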

The original distillation protocol uses the punctured Reed–Muller \(\left[\!\left[15,1,3\right]\!\right]\) code, on which the transversal application of \(T^{\otimes 15}\) implements logical \(\overline{T}^{\dagger}\). Subsequent constructions (Meier–Eastin–Knill, Bravyi–Haah, Litinski) have reduced the overhead substantially, but magic state distillation still typically dominates the qubit budget of a fault-tolerant computation.

Remark 2.10 (The cost of universality). Estimates for breaking RSA-2048 with Shor's algorithm on a surface-code architecture (Gidney–Ekerå, 2021) require roughly twenty million physical qubits, of which a substantial fraction are devoted to magic state factories rather than to running the algorithm itself. Reducing the cost of non-Clifford gates is therefore one of the most active areas of contemporary fault-tolerance research.

2.5 Concatenated Codes and the Threshold Theorem

The original threshold theorems used concatenation: encode a logical qubit using a code such as Steane’s, then encode each of the seven physical qubits of the inner block using the same code, and so on for several levels.

Definition 2.11 (Concatenated code). Given a base code \(C\) with parameters \(\left[\!\left[n, 1, d\right]\!\right]\), the level-\(L\) concatenation \(C^{(L)}\) has parameters \(\left[\!\left[n^L, 1, d^L\right]\!\right]\). Logical operations are implemented recursively from gadgets at each level.
Theorem 2.12 (Threshold theorem; Aharonov–Ben-Or, Kitaev, Knill–Laflamme–Zurek). There exists a constant \(p_{\mathrm{th}} > 0\) such that, for any \(p < p_{\mathrm{th}}\), any quantum circuit of size \(N\) can be simulated with arbitrary accuracy \(\delta > 0\) using \(O(\mathrm{polylog}(N/\delta) \cdot N)\) physical gates with local stochastic noise of strength \(p\).
Sketch. One organizes the noisy circuit into extended rectangles (gate gadget followed by error-correction gadgets on its outputs), and shows that under fault-tolerant gadget design, the probability that a single rectangle fails at level \(L\) is at most \(c\, (p/p_{\mathrm{th}})^{2^L}\). Provided \(p < p_{\mathrm{th}}\), the failure probability decays double-exponentially in \(L\), so any desired accuracy is achievable with logarithmic overhead per logical operation. The overall size blow-up is \(O(\log^c (N/\delta))\) per gate, polynomial only in logarithms.
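
The recursion is easy to evaluate numerically; the sketch below assumes an illustrative \(p_{\mathrm{th}} = 10^{-4}\).

```python
# Failure probability per extended rectangle at concatenation level L:
# p_L = p_th * (p / p_th)^(2^L), from the sketch of Theorem 2.12.
p_th = 1e-4
for p in (5e-5, 9e-5):
    print(f"p = {p:.0e}:",
          [f"{p_th * (p / p_th) ** (2 ** L):.1e}" for L in range(5)])
# At p = p_th/2 the level-4 rate is ~1.5e-9; at p = 0.9 * p_th the decay
# has barely begun, so the margin below threshold matters enormously.
```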

The concatenated proof gives small but nonzero thresholds (around \(10^{-4}\) to \(10^{-5}\) for the Steane code with realistic syndrome circuits). Topological codes, especially the surface code, give substantially higher thresholds — a consequence of locality and of the matching decoder rather than of any intrinsic algebraic advantage.

Remark 2.13 (Assumptions of the threshold theorem). The standard threshold theorem requires four properties of the setup: the noise is local (each gadget sees independent faults), low-rate (\(p\) below threshold), and Markovian or only weakly non-Markovian; and the controller can refresh ancillas — i.e., the device admits a steady supply of fresh \(|0\rangle\) preparations. The theorem fails if any of these is violated egregiously: purely unitary coherent errors can be handled by refinements of the analysis (Aliferis–Gottesman–Preskill), but adversarial correlated noise at high rates cannot.

2.6 Fault-Tolerant Computation with Surface Codes

Modern proposals for scalable quantum hardware almost universally adopt the surface code rather than concatenated codes, on account of its locality and high threshold. Logical operations on surface-code patches are implemented through a striking combination of code deformation and lattice surgery.

Definition 2.14 (Lattice surgery). Two adjacent surface-code patches encoding logical qubits \(\overline{Q}_1, \overline{Q}_2\) can be merged by measuring the joint stabilizers along their shared boundary, projecting onto a joint eigenspace of \(\overline{Z}_1 \overline{Z}_2\) (or \(\overline{X}_1 \overline{X}_2\), depending on the boundary type). Subsequently splitting along the same boundary recovers two independent patches, the joint measurement having effected a logical multi-qubit Pauli measurement.

Lattice surgery, introduced by Horsman–Fowler–Devitt–Van Meter (2012), provides a clean implementation of all logical Clifford operations entirely through stabilizer measurements, with no need to perform two-qubit gates between distant patches. Combined with magic state distillation factories laid out alongside the data patches, it forms the basis of every contemporary surface-code compilation framework.

Example 2.15 (Defects, twists, and braiding). An older approach to surface-code computation, popularized by Raussendorf–Harrington–Goyal and by Fowler et al., encodes logical qubits in pairs of defects — regions where stabilizer generators are turned off — and implements logical CNOTs by braiding defects around one another. The braiding picture is conceptually elegant and connects directly to the topological-anyon picture of Kitaev, but lattice surgery has displaced it as the dominant practical technique because it uses fewer qubits per logical operation.
Remark 2.16 (Current overheads). A practical surface-code architecture targeting a logical error rate of \(10^{-12}\) per operation, starting from physical error rates near \(10^{-3}\), requires distance \(d \approx 25\) per logical qubit. Each logical qubit therefore consumes roughly \(2d^2 \approx 1250\) physical qubits. Magic state factories add a further factor of three to ten in physical qubit count for machines that must run non-Clifford gates at high speed. A device with one million physical qubits could host on the order of one thousand logical qubits of sufficient quality to run small but nontrivial fault-tolerant algorithms.
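
These figures can be sanity-checked against the widely quoted heuristic \(p_L \approx 0.1\,(p/p_{\mathrm{th}})^{(d+1)/2}\) of Fowler et al.; the constants below, including \(p_{\mathrm{th}} = 10^{-2}\), are rough assumptions.

```python
# Smallest odd distance whose heuristic logical error rate beats the target.
p, p_th, target = 1e-3, 1e-2, 1e-12
d = 1
while 0.1 * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2                      # surface-code distances are conventionally odd
print(d, 2 * d**2)              # d = 21 here; engineering margins push practice toward d ~ 25
```
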
Remark 2.17 (Beyond the surface code). The past five years have seen rapid progress on quantum LDPC codes — code families with constant rate and growing distance, such as the lifted-product codes of Panteleev–Kalachev (2022). These codes have higher rate than the surface code (asymptotically constant rather than \(O(1/d^2)\)) at the cost of requiring nonlocal connectivity. Whether this trade can be made practical in any platform is one of the central open questions in fault-tolerant quantum computation today.

The arc traced in these two chapters is steep: from the mid-1990s discovery that quantum error correction is possible at all, through the development of the stabilizer formalism that organized the algebraic theory, to the threshold theorems that placed scalable quantum computing on a rigorous footing, and finally to the topological codes and lattice-surgery techniques that drive present-day engineering. The remaining obstacles to large-scale fault-tolerant quantum computation are no longer questions of principle. They are matters of overhead, of physical error rates, and of the search for codes and decoders that squeeze the most logical performance from the limited qubit budgets that physics will, for some years yet, allow.
