CO 781: Topics in Quantum Information
Debbie Leung
Estimated study time: 34 minutes
These notes synthesize material from two offerings of CO 781 (Topics in Quantum Information) by Debbie Leung at the University of Waterloo, both centred on quantum error correction and fault-tolerant quantum computation. The exposition draws on the standard public literature: Nielsen and Chuang’s Quantum Computation and Quantum Information (Cambridge UP, Chapter 10); Daniel Gottesman’s pedagogical survey An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation (arXiv:0904.2557) and his PhD thesis Stabilizer Codes and Quantum Error Correction (arXiv:quant-ph/9705052); Kitaev’s foundational paper Fault-tolerant quantum computation by anyons (arXiv:quant-ph/9707021); Fowler, Mariantoni, Martinis and Cleland, Surface codes: Towards practical large-scale quantum computation (Phys. Rev. A 86; arXiv:1208.0928); Terhal’s review Quantum error correction for quantum memories (Rev. Mod. Phys. 87; arXiv:1302.3428); and Watrous, The Theory of Quantum Information (Cambridge UP; public draft). Supplementary material was located through public lecture notes available online.
Chapter 1: Quantum Error Correction
Taught at UW as CO 781 in Winter 2022 and Winter 2024 by Debbie Leung.
The story of quantum error correction begins with a paradox. A qubit lives in a continuous state space, and unlike a classical bit it cannot be copied; the no-cloning theorem forbids the redundancy on which classical error correction is built. Errors on quantum systems form a continuum and include not only bit flips but also relative-phase errors and arbitrary unitary perturbations. In the period following the discovery of Shor’s algorithm it was widely believed that this combination of fragility and constraint placed quantum information beyond the reach of any practical error correction. The discovery in 1995 of the nine-qubit code by Peter Shor, followed by the seven-qubit code of Steane and soon after by the systematic stabilizer formalism of Gottesman, dispelled that pessimism. The aim of this chapter is to develop the core ideas — the noise model, the Knill–Laflamme conditions, the first explicit codes, the stabilizer formalism with its CSS subfamily, syndrome extraction and decoding, and finally the topological codes which dominate contemporary engineering.
1.1 Noise Models for Quantum Information
Quantum noise is described in the language of completely positive trace-preserving (CPTP) maps. Any physical evolution of a system \(A\) interacting with an environment \(E\), followed by tracing out \(E\), has a Kraus representation
\[ \mathcal{N}(\rho) = \sum_k E_k \rho E_k^{\dagger}, \qquad \sum_k E_k^{\dagger} E_k = I. \]

For qubit systems the Pauli operators \(\left\{I, X, Y, Z\right\}\) span the operator algebra and provide a discrete basis in which to expand the Kraus operators of an arbitrary \(\mathcal{N}\). It is therefore not a loss of generality to design codes that correct only the Pauli errors \(\left\{X_j, Y_j, Z_j\right\}\) on each qubit \(j\); the linearity of quantum mechanics promotes correction of these basis errors to correction of any error in their span.
The depolarizing channel with strength \(p\) corresponds to \(p_X = p_Y = p_Z = p/3\), so that \(\mathcal{N}(\rho) = (1-p)\rho + (p/3)(X\rho X + Y\rho Y + Z\rho Z)\).
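As a concrete check, the depolarizing channel can be applied directly to a density matrix. The following is a minimal NumPy sketch; the channel definition is taken from the text, while the helper name `depolarize` is ours:

```python
import numpy as np

# Single-qubit Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def depolarize(rho, p):
    """Apply N(rho) = (1-p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
out = depolarize(rho, 0.3)
assert abs(np.trace(out) - 1) < 1e-12             # trace preserving
print(out.real)                                    # diag(0.8, 0.2): |0><0| shrinks toward I/2
```

For \(p = 3/4\) the output is exactly the maximally mixed state \(I/2\) regardless of the input, which is the largest \(p\) for which the map as written remains a sensible mixture.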
For \(n\) qubits the noise is most often modelled as independent and identically distributed: the global channel acts as \(\mathcal{N}^{\otimes n}\). This i.i.d. assumption is what gives meaning to the notion of a code’s distance: an error of weight \(w\) (a tensor product of non-identity Paulis on \(w\) qubits and identities elsewhere) occurs with probability \(O(p^w)\), so suppressing all errors of weight \(\le t\) suppresses the residual logical error to \(O(p^{t+1})\).
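The \(O(p^{t+1})\) suppression can be made concrete: under i.i.d. single-qubit noise of strength \(p\), the probability that more than \(t\) qubits are hit is a binomial tail. A small sketch (the parameter values are illustrative):

```python
from math import comb

def prob_weight_gt(n, p, t):
    """P(more than t of n qubits suffer an error), errors i.i.d. with prob p."""
    return sum(comb(n, w) * p**w * (1 - p)**(n - w) for w in range(t + 1, n + 1))

# A distance-3 code (t = 1) on n = 7 qubits at p = 1e-3:
# the uncorrectable-weight probability is O(p^2), roughly C(7,2) p^2.
print(prob_weight_gt(7, 1e-3, 1))
```

The leading term \(\binom{7}{2}p^2 \approx 2.1\times10^{-5}\) dominates, two orders of magnitude below the physical rate, which is the whole point of a distance-three code.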
1.2 The Knill–Laflamme Conditions
A quantum code \(\mathcal{C} \subseteq (\mathbb{C}^2)^{\otimes n}\) is a subspace into which a smaller logical Hilbert space is embedded. We write \(P_{\mathcal{C}}\) for the projector onto \(\mathcal{C}\). Let \(\mathcal{E} = \left\{E_a\right\}\) be a set of error operators we wish to correct. The Knill–Laflamme theorem states that \(\mathcal{E}\) is correctable if and only if

\[ P_{\mathcal{C}}\, E_a^{\dagger} E_b\, P_{\mathcal{C}} = c_{ab}\, P_{\mathcal{C}} \]

for all \(a, b\), where \(\left[c_{ab}\right]\) is a Hermitian matrix. The code is non-degenerate when \(\left[c_{ab}\right]\) has full rank and degenerate when it does not.
The condition has a transparent meaning. Distinct errors must take the codespace into mutually orthogonal subspaces — only then can a syndrome measurement distinguish them — but they may also act identically on the codespace, in which case they do not need to be told apart at all. The latter possibility, which has no classical analogue, is the phenomenon of degeneracy. Shor’s nine-qubit code is degenerate against single-qubit \(Z\) errors precisely because pairs of \(Z\) errors within a triple act trivially on the encoded state.
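As an illustration, the Knill–Laflamme conditions can be verified numerically for the three-qubit bit-flip code against the error set \(\{I, X_1, X_2, X_3\}\). This is our own minimal sketch; the helper names are ours:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron(*ops):
    return reduce(np.kron, ops)

# Codespace of the bit-flip code: span{|000>, |111>}.
ket000 = np.zeros(8); ket000[0] = 1
ket111 = np.zeros(8); ket111[7] = 1
P = np.outer(ket000, ket000) + np.outer(ket111, ket111)

errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# Check P Ea† Eb P = c_ab P for every pair of errors.
for a, Ea in enumerate(errors):
    for b, Eb in enumerate(errors):
        M = P @ Ea.conj().T @ Eb @ P
        c = M[0, 0]                      # candidate scalar c_ab
        assert np.allclose(M, c * P), (a, b)
print("Knill-Laflamme conditions hold for single X errors")
```

Here \(c_{ab}\) comes out as the identity matrix, so the code is non-degenerate for this error set: each single \(X\) error maps the codespace to a distinct orthogonal subspace.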
1.3 The Nine-Qubit Shor Code
Shor’s construction of 1995 was the first explicit demonstration that arbitrary single-qubit errors could be corrected. The idea is to concatenate two classical repetition codes — one for bit flips and one for phase flips — using nine physical qubits per logical qubit.
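Concretely, the concatenation yields the logical codewords

\[ |\overline{0}\rangle = \frac{1}{2\sqrt{2}} \bigl(|000\rangle + |111\rangle\bigr)^{\otimes 3}, \qquad |\overline{1}\rangle = \frac{1}{2\sqrt{2}} \bigl(|000\rangle - |111\rangle\bigr)^{\otimes 3}. \]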
Within each block of three the code is a bit-flip repetition code, and the three blocks together form the outer phase-flip code (after Hadamard transformation, a bit-flip code). A single \(X\) error inside a block is detected by comparing parities of pairs within that block; a single \(Z\) error is detected by comparing the relative signs of the three blocks.
The Shor code already exhibits all the features that pervade the subject: redundant physical qubits encoding a smaller logical space, syndrome extraction by parity measurements that do not disturb the encoded information, and degeneracy in the form of phase errors within a block.
1.4 The Stabilizer Formalism
Shor’s code, the Steane code, and most other useful quantum codes are stabilizer codes, a unifying framework introduced by Gottesman in his 1997 thesis. The Pauli group on \(n\) qubits is
\[ \mathcal{P}_n = \left\{ \alpha\, P_1 \otimes \cdots \otimes P_n : \alpha \in \left\{\pm 1, \pm i\right\},\ P_j \in \left\{I,X,Y,Z\right\} \right\}. \]

A stabilizer group \(\mathcal{S}\) is an abelian subgroup of \(\mathcal{P}_n\) not containing \(-I\); its code is the joint \(+1\) eigenspace \(\mathcal{C}(\mathcal{S}) = \left\{ |\psi\rangle : g|\psi\rangle = |\psi\rangle \ \text{for all}\ g \in \mathcal{S} \right\}\). If \(\mathcal{S}\) has \(n-k\) independent generators, then \(\dim \mathcal{C}(\mathcal{S}) = 2^k\).
The logical operators of the code are elements of the normalizer \(N(\mathcal{S})\) of \(\mathcal{S}\) in \(\mathcal{P}_n\), modulo \(\mathcal{S}\) itself. The distance is the minimum weight of an element of \(N(\mathcal{S}) \setminus \mathcal{S}\). Error correction proceeds by measuring each generator \(g_i\); the measurement is non-destructive on the codespace, but yields a syndrome bit recording the \(\pm 1\) eigenvalue of \(g_i\) on the corrupted state. The collection of syndrome bits localizes which Pauli error has occurred, up to multiplication by stabilizers.
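The syndrome just described is a commutation pattern: a Pauli error flips syndrome bit \(i\) exactly when it anticommutes with \(g_i\). In the binary symplectic representation \(P \leftrightarrow (x\,|\,z)\) this is an inner product mod 2. A sketch using the three-qubit bit-flip code's generators \(Z_1 Z_2\) and \(Z_2 Z_3\) (the function names are ours):

```python
import numpy as np

def symplectic(pauli):
    """Map a Pauli string like 'XIZ' to its binary (x|z) vector."""
    x = [int(c in 'XY') for c in pauli]
    z = [int(c in 'ZY') for c in pauli]
    return np.array(x + z, dtype=int)

def syndrome(error, generators):
    """Bit i = 1 iff the error anticommutes with generator i."""
    n = len(error)
    e = symplectic(error)
    # Symplectic form: <(x|z), (x'|z')> = x.z' + z.x' (mod 2).
    swap = np.concatenate([e[n:], e[:n]])
    return [int(symplectic(g) @ swap % 2) for g in generators]

gens = ['ZZI', 'IZZ']            # bit-flip code stabilizer generators
print(syndrome('XII', gens))     # X on qubit 1 -> [1, 0]
print(syndrome('IXI', gens))     # X on qubit 2 -> [1, 1]
print(syndrome('IIX', gens))     # X on qubit 3 -> [0, 1]
print(syndrome('ZII', gens))     # Z errors commute -> [0, 0]
```

The three single-\(X\) errors give three distinct nonzero syndromes, while \(Z\) errors are invisible: this code protects against bit flips only.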
1.5 CSS Codes and the Steane Code
A particularly clean subclass, due to Calderbank–Shor and Steane, arises when the stabilizer can be split into pure-\(X\) and pure-\(Z\) generators. Such codes are built from a pair of classical linear codes \(C_2 \subseteq C_1 \subseteq \mathbb{F}_2^n\): the \(X\)-type generators are given by a generator matrix of \(C_2\), the \(Z\)-type generators by a parity-check matrix of \(C_1\), and the construction encodes \(k_1 - k_2\) logical qubits.
CSS codes correct \(X\) and \(Z\) errors independently using the syndromes of the underlying classical codes. Steane’s seven-qubit code is the CSS code obtained from the \(\left[7,4,3\right]\) Hamming code, which contains its own dual; it has parameters \(\left[\!\left[7,1,3\right]\!\right]\) and is the smallest distance-three CSS code.
In terms of the parity-check matrix of the Hamming code,

\[ H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}, \]

the stabilizer of the Steane code consists of three \(X\)-type generators with support pattern given by the rows of \(H\) and three \(Z\)-type generators with the same supports. The logical \(\overline{X}\) and \(\overline{Z}\) operators act as \(X^{\otimes 7}\) and \(Z^{\otimes 7}\) respectively; both have weight seven, but lower-weight representatives exist within their cosets, all of weight at least three.
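A quick check of the classical backbone: with the standard Hamming parity checks (column \(j\) is the binary expansion of \(j+1\)), every single bit flip produces a distinct nonzero syndrome. A minimal sketch, with our own helper names:

```python
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code: column j is
# the 3-bit binary expansion of j+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def hamming_syndrome(e):
    return tuple(H @ e % 2)

# Each single-bit error yields a distinct syndrome, so it is correctable.
syndromes = set()
for j in range(7):
    e = np.zeros(7, dtype=int)
    e[j] = 1
    syndromes.add(hamming_syndrome(e))
assert len(syndromes) == 7 and (0, 0, 0) not in syndromes
print("all 7 single-bit errors distinguishable")
```

The same computation, run once on the \(X\) part and once on the \(Z\) part of a Pauli error, is exactly the Steane code's decoding problem.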
The Steane code’s structure makes the entire Clifford group transversal — applying \(H\) bitwise on all seven qubits implements logical \(\overline{H}\), bitwise \(S^{\dagger}\) implements \(\overline{S}\), and bitwise CNOT between two encoded blocks implements \(\overline{\mathrm{CNOT}}\). This is the source of its enduring popularity in pedagogical and theoretical work.
1.6 Syndrome Measurement and Decoding
Extracting a syndrome means measuring each stabilizer generator without learning anything about the logical state. The standard circuit uses an ancilla qubit prepared in \(|+\rangle\), a sequence of controlled Paulis from the ancilla to each qubit in the support of the generator, and a final \(X\)-basis measurement. The ancilla’s outcome is the eigenvalue of the generator on the data.
Optimal (maximum-likelihood) decoding is generally hard. For the surface code, however, minimum-weight decoding reduces to a minimum-weight perfect matching problem on a graph whose vertices are the flagged syndrome bits, solvable in polynomial time by Edmonds’ blossom algorithm. For more general codes one uses belief propagation, neural decoders, or tensor-network contraction; the choice of decoder is now a major engineering issue in quantum hardware.
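For a handful of syndrome defects the matching problem can be solved by brute force, which makes the decoder's logic transparent. A toy sketch (real decoders use optimized blossom-algorithm implementations; the function names are ours):

```python
def min_weight_matching(defects, dist):
    """Exhaustively pair an even number of defects, minimizing total path length."""
    if not defects:
        return 0, []
    best_w, best_pairs = float('inf'), []
    first, rest = defects[0], defects[1:]
    # Pair the first defect with each candidate partner, recurse on the rest.
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        w, pairs = min_weight_matching(remaining, dist)
        total = dist(first, partner) + w
        if total < best_w:
            best_w, best_pairs = total, [(first, partner)] + pairs
    return best_w, best_pairs

# Defects at lattice coordinates; Manhattan distance as error-chain length.
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
defects = [(0, 0), (0, 2), (5, 5), (6, 5)]
weight, pairs = min_weight_matching(defects, manhattan)
print(weight, pairs)   # weight 3: nearby defects are paired with each other
```

The exhaustive search costs \((m-1)!! \) pairings for \(m\) defects; the blossom algorithm brings this down to polynomial time, which is why it, and not brute force, runs in real decoders.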
1.7 Topological Codes: the Toric and Surface Codes
In 1997 Kitaev introduced a class of stabilizer codes whose generators are geometrically local — they act only on small clusters of nearby qubits arranged on a two-dimensional lattice. These topological codes have proved extraordinarily well-suited to physical implementation.
The toric code places one qubit on each edge of an \(L \times L\) square lattice with periodic boundary conditions; its stabilizer generators are the star operators \(A_v = \prod_{e \ni v} X_e\) at each vertex \(v\) and the plaquette operators \(B_p = \prod_{e \in \partial p} Z_e\) at each face \(p\). The two logical qubits are associated with the two non-contractible cycles of the torus: a logical \(\overline{Z}\) is a string of \(Z\)’s along a horizontal cycle, a logical \(\overline{X}\) a string of \(X\)’s along a perpendicular vertical cycle of the dual lattice. Errors create pairs of anyonic excitations, and decoding is the problem of pairing them up along the shortest paths.
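The count of two logical qubits is pure bookkeeping: an \(L \times L\) torus has \(2L^2\) edge qubits and \(2L^2\) checks, but one check of each type is redundant (the product of all stars, like the product of all plaquettes, is the identity). A minimal sketch of the arithmetic:

```python
def toric_code_params(L):
    n = 2 * L * L                    # one qubit per edge
    checks = L * L + L * L           # vertex stars + plaquettes
    independent = checks - 2         # one global relation per check type
    k = n - independent              # logical qubits from n - (n - k) generators
    d = L                            # shortest non-contractible cycle
    return n, k, d

for L in (2, 3, 5):
    print(L, toric_code_params(L))   # k = 2 for every L
```

The distance grows with \(L\) while \(k\) stays fixed at 2, so the toric code buys protection, not rate, from a larger lattice.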
The non-locality of logical operators is the geometric expression of the code’s distance, and gives the surface code its characteristic robustness: only correlated errors spanning the whole device can produce a logical fault, and such correlations are exponentially suppressed in any reasonable noise model.
Chapter 2: Fault-Tolerant Quantum Computation
A quantum error correcting code is, by itself, a passive object: it tells us how to encode information so that errors which have already occurred can in principle be undone. Fault-tolerant quantum computation is the active companion theory. It asks how, given imperfect gates and imperfect measurements, we can perform a long computation on encoded information without the noise compounding into an unrecoverable error. The question is non-trivial because every gate one performs to manipulate or correct an encoded qubit is itself a source of new errors. Without care a single faulty gate can spread errors among many physical qubits — exactly what a code should not have to absorb. The theory of fault tolerance, developed by Shor, Aharonov–Ben-Or, Kitaev, Knill–Laflamme–Zurek, Preskill, Gottesman, and many others in the late 1990s and early 2000s, gives a precise framework in which arbitrarily reliable quantum computation is possible provided the physical noise is below a constant threshold.
2.1 Circuit-Level Noise Models
The first task is to extend the i.i.d. Pauli noise model from memory qubits to gates, measurements, and preparations.
This local stochastic noise model is the analytic backbone of all known threshold proofs. It is more general than i.i.d. depolarizing noise: it allows arbitrary correlations among faults, provided that for any designated set of \(k\) circuit locations the probability that all \(k\) are faulty is at most \(p^k\). Realistic noise is often well approximated by such models, and some threshold proofs go further, handling non-Markovian noise as long as it remains weakly correlated in space and time.
2.2 Fault-Tolerant Gadgets
The unit of analysis in fault tolerance is the gadget: a small subcircuit that implements a logical operation (gate, syndrome extraction, preparation, or measurement) on encoded qubits in such a way that a small number of physical faults cannot corrupt the logical output.
The simplest way to satisfy this property is transversality: a gadget is transversal when each of its components touches at most one physical qubit per code block, acting in parallel on corresponding qubits of different blocks. A single faulty component then corrupts at most one qubit in each block, which is exactly the kind of error the code is designed to absorb.
2.3 The Eastin–Knill Theorem
If we could implement every gate of a universal set transversally, fault tolerance would be straightforward. A celebrated no-go result rules this out: the Eastin–Knill theorem states that for any code capable of detecting an arbitrary single-qubit error, the group of logical operations implementable transversally is finite, and therefore cannot be universal.
For the Steane code the transversal logical gate group is the Clifford group, which is far from universal — it can be efficiently simulated classically by the Gottesman–Knill theorem. To complete a universal set one must add a non-Clifford gate, conventionally the \(T = \mathrm{diag}(1, e^{i\pi/4})\) gate, and the cost of fault-tolerantly implementing it is the dominant overhead in any practical scheme.
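That \(T\) lies outside the Clifford group can be seen directly: a Clifford gate conjugates Paulis to Paulis, but \(T X T^{\dagger} = (X+Y)/\sqrt{2}\) is not proportional to any Pauli. A quick NumPy verification (the helper name is ours):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
T = np.diag([1.0 + 0j, np.exp(1j * np.pi / 4)])
S = np.diag([1.0 + 0j, 1j])

def conj(U, P):
    return U @ P @ U.conj().T

# S is Clifford: S X S† = Y, a Pauli.
assert np.allclose(conj(S, X), Y)

# T is not: T X T† = (X + Y) / sqrt(2), a non-Pauli combination.
assert np.allclose(conj(T, X), (X + Y) / np.sqrt(2))
print("T maps X outside the Pauli group, so T is not Clifford")
```

This is the algebraic fingerprint of the Gottesman–Knill boundary: circuits built from Clifford gates alone keep Pauli operators Pauli and can be tracked classically, while a single \(T\) breaks that bookkeeping.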
2.4 Magic State Distillation
The most flexible non-transversal technique is gate teleportation with magic states, introduced by Gottesman–Chuang (1999) and elevated to a practical methodology by Bravyi–Kitaev (2005).
Given an encoded copy of the magic state \(|T\rangle = T|+\rangle = (|0\rangle + e^{i\pi/4}|1\rangle)/\sqrt{2}\) and the ability to perform Clifford operations and \(Z\)-basis measurements, one can implement a logical \(T\) gate on any encoded qubit by gate teleportation.
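A statevector sketch of the gadget, on bare qubits for clarity: CNOT from the data qubit into the magic-state ancilla, measure the ancilla in the \(Z\) basis, and apply an \(S\) correction on outcome 1. This is our own minimal simulation, with the measurement outcome supplied by hand rather than sampled:

```python
import numpy as np

w = np.exp(1j * np.pi / 4)
T = np.diag([1, w])
S = np.diag([1, 1j])

def t_via_teleportation(psi, outcome):
    """Consume |T> = (|0> + w|1>)/sqrt(2) to apply T to psi = (a, b).

    Circuit: CNOT (data controls ancilla), Z-measure the ancilla,
    apply S to the data if the outcome is 1."""
    a, b = psi
    # Joint state after CNOT, ordered |data, ancilla>:
    # (a|00> + a w|01> + b w|10> + b|11>) / sqrt(2)
    if outcome == 0:                     # project ancilla onto |0>
        post = np.array([a, b * w])
    else:                                # project onto |1>, then correct with S
        post = S @ np.array([a * w, b])
    return post / np.linalg.norm(post)

psi = np.array([0.6, 0.8], dtype=complex)
for outcome in (0, 1):
    out = t_via_teleportation(psi, outcome)
    # Output equals T|psi> up to a global phase for either outcome.
    assert np.isclose(abs(np.vdot(T @ psi, out)), 1.0)
print("both outcomes yield T|psi>, using only Clifford operations on the data")
```

The crucial point is that everything applied to the data qubit (CNOT, measurement, \(S\)) is Clifford; all of the non-Clifford content has been pushed into the preparation of \(|T\rangle\), which distillation can produce offline.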
The original distillation protocol uses the punctured Reed–Muller \(\left[\!\left[15,1,3\right]\!\right]\) code, on which the transversal application of \(T^{\otimes 15}\) implements logical \(\overline{T}^{\dagger}\). Subsequent constructions (Meier–Eastin–Knill, Bravyi–Haah, Litinski) have reduced the overhead substantially, but magic state distillation still typically dominates the qubit budget of a fault-tolerant computation.
2.5 Concatenated Codes and the Threshold Theorem
The original threshold theorems used concatenation: encode a logical qubit using a code such as Steane’s, then encode each of the seven physical qubits of the inner block using the same code, and so on for several levels. If one round of encoding and error correction maps a physical error rate \(p\) to a logical error rate \(c p^2\), then \(\ell\) levels give \(p_\ell = p_{\mathrm{th}} (p / p_{\mathrm{th}})^{2^\ell}\) with threshold \(p_{\mathrm{th}} = 1/c\): doubly exponential suppression for any \(p < p_{\mathrm{th}}\), at an overhead growing only polylogarithmically in the target accuracy.
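The double-exponential suppression is easy to tabulate. Assuming the level-to-level recursion \(p \mapsto c\,p^2\) (the value of \(c\), hence the threshold \(1/c\), is illustrative):

```python
def concatenated_error(p, c=1e4, levels=5):
    """Iterate the level-to-level recursion p -> c * p**2."""
    rates = [p]
    for _ in range(levels):
        rates.append(c * rates[-1] ** 2)
    return rates

# Below threshold (p_th = 1/c = 1e-4): doubly exponential suppression.
for level, rate in enumerate(concatenated_error(5e-5)):
    print(level, rate)

# Above threshold: concatenation makes the logical error rate worse.
print(concatenated_error(2e-4)[-1] > 2e-4)
```

The sharp change of behaviour at \(p = 1/c\) is the threshold phenomenon in miniature: below it each level squares the (rescaled) error rate downward, above it each level amplifies it.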
The concatenated proof gives small but nonzero thresholds (around \(10^{-4}\) to \(10^{-5}\) for the Steane code with realistic syndrome circuits). Topological codes, especially the surface code, give substantially higher thresholds — a consequence of locality and of the matching decoder rather than of any intrinsic algebraic advantage.
2.6 Fault-Tolerant Computation with Surface Codes
Modern proposals for scalable quantum hardware almost universally adopt the surface code rather than concatenated codes, on account of its locality and high threshold. Logical operations on surface-code patches are implemented through a striking combination of code deformation and lattice surgery.
Lattice surgery, introduced by Horsman–Fowler–Devitt–Van Meter (2012), provides a clean implementation of all logical Clifford operations entirely through stabilizer measurements, with no need to perform two-qubit gates between distant patches. Combined with magic state distillation factories laid out alongside the data patches, it forms the basis of every contemporary surface-code compilation framework.
The arc traced in these two chapters is steep: from the late-1990s discovery that quantum error correction is possible at all, through the development of the stabilizer formalism that organized the algebraic theory, to the threshold theorems that placed scalable quantum computing on a rigorous footing, and finally to the topological codes and lattice-surgery techniques that drive present-day engineering. The remaining obstacles to large-scale fault-tolerant quantum computation are no longer questions of principle. They are matters of overhead, of physical error rates, and of the search for codes and decoders that squeeze the most logical performance from the limited qubit budgets that physics will, for some years yet, allow.