CO 789: Lattice-Based Cryptography
Douglas Stebila
Estimated study time: 26 minutes
These notes synthesize the standard lattice-cryptography curriculum from public sources: Micciancio and Goldwasser, Complexity of Lattice Problems: A Cryptographic Perspective (Springer, 2002); Regev, On Lattices, Learning with Errors, Random Linear Codes, and Cryptography (J. ACM, 2009); Peikert, A Decade of Lattice Cryptography (Foundations and Trends in Theoretical Computer Science, 2016, free at the author’s site); Lyubashevsky, Peikert, and Regev, On Ideal Lattices and Learning with Errors over Rings (EUROCRYPT 2010); Hoffstein, Pipher, and Silverman, An Introduction to Mathematical Cryptography (2nd ed., Springer); Galbraith, Mathematics of Public Key Cryptography (free at the author’s site); the NIST PQC standards documentation for Kyber/ML-KEM and Dilithium/ML-DSA; and various arXiv preprints on lattice algorithms and applications.
Chapter 1: Lattices and Their Geometry
Lattice-based cryptography draws its security from the apparent intractability of geometric problems on integer point sets. Before any cryptographic construction makes sense we must understand what a lattice is, how it is presented, and what numerical invariants govern its behaviour. The point of this opening chapter is to establish the geometry of lattices with enough precision that later reductions, which trade between algebraic structures over \(\mathbb{Z}_q\) and short vectors in real Euclidean space, become natural rather than mysterious.
1.1 Definitions and bases
A lattice \(\Lambda \subset \mathbb{R}^m\) is a discrete subgroup of \(\mathbb{R}^m\); concretely, \(\Lambda(B) = \{Bx : x \in \mathbb{Z}^n\}\) for a matrix \(B \in \mathbb{R}^{m \times n}\) with linearly independent columns, called a basis. A discrete subgroup is one in which every point has a neighbourhood containing only finitely many lattice points; equivalently the origin is isolated. The standard example is \(\Lambda = \mathbb{Z}^n\), but every lattice is, after a linear change of variables, a tilted and stretched copy of \(\mathbb{Z}^n\) inside \(\mathbb{R}^m\). A given lattice has infinitely many bases: if \(B\) is a basis and \(U \in \mathrm{GL}_n(\mathbb{Z})\) — that is, \(U\) is an integer matrix with determinant \(\pm 1\), called a unimodular matrix — then \(B' = BU\) is also a basis of \(\Lambda(B)\), and conversely every basis arises in this way. Computational lattice cryptography is in large part the study of which bases reveal which secrets.
The \(n\)-dimensional volume of a lattice with basis \(B\), \(\det(\Lambda) := \sqrt{\det(B^\top B)}\), depends only on the lattice \(\Lambda\), not on the choice of basis. When \(\Lambda\) is full-rank, \(\det(\Lambda) = \lvert \det(B) \rvert\).
The basis-independence is immediate: if \(B' = BU\) with \(U\) unimodular, then \(\det((B')^\top B') = \det(U^\top B^\top B U) = \det(U)^2 \det(B^\top B) = \det(B^\top B)\). Geometrically, every fundamental parallelepiped tiles the linear span of \(\Lambda\) by translates indexed by \(\Lambda\) itself, so they all enclose the same volume: one fundamental domain per lattice point.
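The unimodular-invariance argument can be checked numerically; a minimal sketch with a rank-2 lattice in \(\mathbb{R}^3\) (the particular \(B\) and \(U\) below are illustrative choices, not from the text):

```python
import numpy as np

# Two columns spanning a rank-2 lattice in R^3, and a unimodular change of basis.
B = np.array([[1, 0],
              [2, 3],
              [0, 1]], dtype=float)
U = np.array([[1, 1],
              [0, 1]], dtype=float)          # integer entries, det(U) = 1

det_B  = np.sqrt(np.linalg.det(B.T @ B))
det_BU = np.sqrt(np.linalg.det((B @ U).T @ (B @ U)))
print(det_B, det_BU)                          # equal: the volume is basis-independent
```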
1.2 Gram–Schmidt and successive minima
The cleanest invariants of a basis are the lengths of its Gram–Schmidt orthogonalization. Given \(b_1, \ldots, b_n\), define
\[ \tilde b_1 = b_1, \qquad \tilde b_i = b_i - \sum_{j<i} \mu_{ij}\, \tilde b_j, \qquad \mu_{ij} = \frac{\langle b_i, \tilde b_j \rangle}{\lVert \tilde b_j \rVert^2}. \]
The Gram–Schmidt vectors \(\tilde b_i\) are pairwise orthogonal but generally not integer combinations of the \(b_i\), so they are not in the lattice; nevertheless their lengths control geometry. In particular, \(\det(\Lambda) = \prod_i \lVert \tilde b_i \rVert\), so a basis with very unequal Gram–Schmidt norms must contain long vectors when measured against the determinant. The successive minima are defined by
\[ \lambda_i(\Lambda) = \inf \bigl\{ r > 0 : \dim \operatorname{span}\bigl(\Lambda \cap \overline{B}(0, r)\bigr) \ge i \bigr\}, \]
where \(\overline{B}(0,r)\) is the closed Euclidean ball of radius \(r\). Thus \(\lambda_1\) is the length of a shortest nonzero vector and \(\lambda_n\) is the smallest radius enclosing \(n\) linearly independent lattice vectors.
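The Gram–Schmidt recursion and the identity \(\det(\Lambda) = \prod_i \lVert \tilde b_i \rVert\) can be sketched directly (the sample basis is an illustrative choice):

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the columns of B (no normalization)."""
    Bt = np.zeros_like(B, dtype=float)
    for i in range(B.shape[1]):
        v = B[:, i].astype(float)
        for j in range(i):
            # mu_ij = <b_i, bt_j> / ||bt_j||^2
            mu = np.dot(B[:, i], Bt[:, j]) / np.dot(Bt[:, j], Bt[:, j])
            v -= mu * Bt[:, j]
        Bt[:, i] = v
    return Bt

B = np.array([[2, 1, 1],
              [0, 2, 1],
              [0, 0, 2]], dtype=float)
Bt = gram_schmidt(B)
lengths = np.linalg.norm(Bt, axis=0)
print(lengths.prod(), abs(np.linalg.det(B)))   # both 8: det equals the product of GS lengths
```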
The infima are achieved because lattices are discrete. The successive minima encode at once the shortest, second-shortest-in-a-new-direction, and so on, lattice vectors. They satisfy \(\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n\) and behave reasonably under sublattices, but they are notoriously hard to compute.
1.3 Minkowski’s theorems
The earliest deep result about lattices, dating to the geometry of numbers in the late 19th century, gives an unconditional upper bound on \(\lambda_1(\Lambda)\) in terms of the determinant. Minkowski’s convex body theorem states that if \(K \subset \mathbb{R}^n\) is convex, symmetric about the origin, and has volume \(\mathrm{vol}(K) > 2^n \det(\Lambda)\), then \(K\) contains a nonzero lattice point.
Specializing \(K\) to the Euclidean ball gives the following corollary: \(\lambda_1(\Lambda) \le \sqrt{n} \cdot \det(\Lambda)^{1/n}\).
A sharper form, due also to Minkowski, controls all successive minima simultaneously: \(\bigl(\prod_{i=1}^n \lambda_i\bigr)^{1/n} \le \sqrt{n}\,\det(\Lambda)^{1/n}\), with a matching lower bound \(\bigl(\prod_i \lambda_i\bigr)^{1/n} \ge \det(\Lambda)^{1/n}/\sqrt{n}\) coming from a packing argument. Random lattices, in a precise sense, achieve these bounds up to a \(\sqrt{2\pi e}\) factor (the Gaussian heuristic): a “typical” \(n\)-dimensional lattice has \(\lambda_1 \approx \sqrt{n/(2\pi e)}\, \det(\Lambda)^{1/n}\).
1.4 Dual lattices
The dual lattice is \(\Lambda^* = \{ x \in \operatorname{span}(\Lambda) : \langle x, v \rangle \in \mathbb{Z} \text{ for all } v \in \Lambda \}\). If \(B\) is a basis of a full-rank lattice, then \(B^{-\top}\) is a basis of \(\Lambda^*\), and \(\det(\Lambda^*) = 1/\det(\Lambda)\). One has \((\Lambda^*)^* = \Lambda\).
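Both facts are one-liners to verify numerically; a minimal sketch with an illustrative full-rank basis:

```python
import numpy as np

B = np.array([[2, 1],
              [0, 3]], dtype=float)           # basis of a full-rank lattice in R^2
D = np.linalg.inv(B).T                        # B^{-T}, a basis of the dual lattice

print(D.T @ B)                                # identity: <d_i, b_j> = delta_ij, an integer
print(np.linalg.det(B) * np.linalg.det(D))    # 1.0: det(dual) = 1 / det(primal)
```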
The dual is the “Fourier-side” lattice and it appears in two crucial places: in transference theorems relating \(\lambda_i(\Lambda)\) to \(\lambda_{n-i+1}(\Lambda^*)\) (Banaszczyk’s theorems give \(1 \le \lambda_i(\Lambda) \cdot \lambda_{n-i+1}(\Lambda^*) \le n\)), and in the worst-case to average-case reductions for SIS and LWE, where the random target lattice and its dual appear on opposite sides of the reduction.
Chapter 2: Computational Lattice Problems
We now switch from geometry to complexity. A lattice cryptosystem is secure only because some lattice problem is hard; the menagerie of lattice problems is therefore the catalogue of cryptographic hardness assumptions available to us.
2.1 The shortest and closest vector problems
The Shortest Vector Problem (SVP) asks, given a basis of \(\Lambda\), for a nonzero lattice vector of length \(\lambda_1(\Lambda)\); the Closest Vector Problem (CVP) asks, given additionally a target \(t \in \mathbb{R}^m\), for a lattice vector at distance \(\mathrm{dist}(t, \Lambda)\). Both are NP-hard (CVP under deterministic reductions, SVP under randomized reductions for the Euclidean norm, by van Emde Boas, Ajtai, and Micciancio). Cryptography, however, almost never relies on exactly solving these problems; instead it uses approximation versions.
- \(\mathsf{SVP}_\gamma\): find a nonzero \(v \in \Lambda\) with \(\lVert v \rVert \le \gamma \cdot \lambda_1(\Lambda)\).
- \(\mathsf{CVP}_\gamma\): find \(v \in \Lambda\) with \(\lVert v - t \rVert \le \gamma \cdot \mathrm{dist}(t, \Lambda)\).
- \(\mathsf{SIVP}_\gamma\) (Shortest Independent Vectors Problem): find \(n\) linearly independent lattice vectors all of length at most \(\gamma \cdot \lambda_n(\Lambda)\).
- \(\mathsf{BDD}_\alpha\) (Bounded Distance Decoding): given \(t\) with \(\mathrm{dist}(t, \Lambda) \le \alpha \cdot \lambda_1(\Lambda)\) for \(\alpha < 1/2\), find the unique closest lattice vector.
The decision variants \(\mathsf{GapSVP}_\gamma\) and \(\mathsf{GapCVP}_\gamma\) are gap promise problems: for \(\mathsf{GapSVP}_\gamma\), distinguish whether \(\lambda_1(\Lambda) \le 1\) or \(\lambda_1(\Lambda) > \gamma\); for \(\mathsf{GapCVP}_\gamma\), whether \(\mathrm{dist}(t, \Lambda) \le 1\) or \(\mathrm{dist}(t, \Lambda) > \gamma\).
Cryptography lives in the polynomial-factor regime \(\gamma = \mathrm{poly}(n)\), where no efficient algorithm is known but NP-hardness is also out of reach: this is the standard “structurally hard” zone where worst-case-to-average-case reductions for SIS and LWE place their guarantees.
Chapter 3: Lattice Basis Reduction
The algorithmic side of lattice cryptography rests on basis-reduction algorithms, which take an arbitrary basis and produce a more orthogonal one. Reduction algorithms are used both as cryptanalytic tools — to attempt to break LWE or NTRU — and as primitives inside cryptographic schemes (e.g. trapdoor sampling).
3.1 The LLL algorithm
The Lenstra–Lenstra–Lovász algorithm (1982), the foundational result of computational lattice theory, runs in polynomial time and outputs a basis whose first vector is short within a factor exponential in the dimension: for the standard parameter \(\delta = 3/4\), \(\lVert b_1 \rVert \le 2^{(n-1)/2} \lambda_1(\Lambda)\). A basis is \(\delta\)-LLL-reduced, for \(\delta \in (1/4, 1)\), if two conditions hold:
- Size reduction: \(\lvert \mu_{ij} \rvert \le \tfrac12\) for all \(j < i\);
- Lovász condition: \(\delta \lVert \tilde b_{i-1} \rVert^2 \le \lVert \tilde b_i + \mu_{i,i-1}\, \tilde b_{i-1} \rVert^2\) for all \(i \ge 2\).
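The two conditions translate directly into the textbook algorithm: size-reduce, test the Lovász condition, swap and backtrack on failure. A minimal (deliberately inefficient) floating-point sketch, with rows as basis vectors and an illustrative input basis:

```python
import numpy as np

def lll(B, delta=0.75):
    """Textbook LLL reduction of the rows of an integer matrix B."""
    B = B.astype(float).copy()
    n = B.shape[0]

    def gso(B):                          # Gram-Schmidt vectors and mu coefficients
        Bt = B.copy()
        mu = np.eye(n)
        for i in range(n):
            for j in range(i):
                mu[i, j] = np.dot(B[i], Bt[j]) / np.dot(Bt[j], Bt[j])
                Bt[i] -= mu[i, j] * Bt[j]
        return Bt, mu

    Bt, mu = gso(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):   # size reduction: force |mu_kj| <= 1/2
            c = round(mu[k, j])
            if c != 0:
                B[k] -= c * B[j]
                Bt, mu = gso(B)
        # Lovász condition, rearranged: ||bt_k||^2 >= (delta - mu^2) ||bt_{k-1}||^2
        if np.dot(Bt[k], Bt[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bt[k - 1], Bt[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]   # swap and step back
            Bt, mu = gso(B)
            k = max(k - 1, 1)
    return B.astype(int)

print(lll(np.array([[201, 37], [1648, 297]])))
```

Recomputing the full Gram–Schmidt data after every change keeps the sketch short; real implementations update \(\mu\) incrementally.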
3.2 BKZ and block reduction
For higher-quality reduction, the Block Korkine–Zolotarev (BKZ) algorithm of Schnorr (1987) generalizes LLL by performing exact SVP in successive blocks of size \(\beta\). With block size \(\beta\), BKZ achieves \(\lVert b_1 \rVert \le \gamma_\beta^{(n-1)/(\beta-1)} \det(\Lambda)^{1/n}\) where \(\gamma_\beta\) is Hermite’s constant; the time complexity grows roughly as \(2^{O(\beta)}\) per SVP call, multiplied by a polynomial number of tours. The trade-off is essentially the only knob attackers turn against LWE: pick the block size \(\beta\) that brings the output down to a target length, then estimate the cost of an SVP oracle in dimension \(\beta\). This yields the so-called core-SVP cost model used to compare candidate post-quantum schemes.
Chapter 4: The Short Integer Solution Problem
We now move to the average-case problems on which efficient lattice cryptography is built. The first is SIS, introduced by Ajtai (1996), who proved that random instances are no easier than worst-case lattice problems.
In the Short Integer Solution problem \(\mathsf{SIS}_{n,m,q,\beta}\), one is given a uniformly random \(A \in \mathbb{Z}_q^{n \times m}\) and must find a nonzero \(z \in \mathbb{Z}^m\) with \(Az \equiv 0 \pmod q\) and \(\lVert z \rVert \le \beta\). The set \(\Lambda_q^\perp(A) = \{ z \in \mathbb{Z}^m : Az \equiv 0 \pmod q\}\) is itself a lattice of dimension \(m\) and (typically) determinant \(q^n\), so SIS is exactly the problem of finding short vectors in this random “\(q\)-ary” lattice.
The reduction proceeds by sampling Gaussian-distributed lattice vectors using the assumed SIS oracle and applying these short combinations to recover a short basis of the input lattice.
4.1 One-way functions, hashing, and signatures from SIS
Ajtai’s hash family maps a binary string \(x \in \{0,1\}^m\), with \(m > n \log_2 q\) so that the function compresses, to \(h_A(x) = Ax \bmod q\). A collision \(h_A(x) = h_A(x')\) with \(x \ne x'\) yields the nonzero vector \(z = x - x' \in \{-1,0,1\}^m\) with \(Az \equiv 0 \pmod q\), so collision resistance follows directly from SIS. This was the first asymptotically efficient construction whose security reduced to a worst-case (rather than average-case) computational assumption — a hallmark of lattice cryptography.
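The collision-to-SIS implication can be seen end to end on parameters small enough to brute-force (the sizes and seed below are illustrative toys, far below anything secure):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, m, q = 2, 12, 3                     # toy sizes with m > n log2(q), so h compresses

A = rng.integers(0, q, size=(n, m))

def h(x):                              # Ajtai-style hash of a binary vector
    return tuple(A @ np.array(x) % q)

# Birthday search for a collision; the range has only q^n = 9 values.
seen = {}
for x in product((0, 1), repeat=m):
    y = h(x)
    if y in seen:
        x1, x2 = np.array(seen[y]), np.array(x)
        break
    seen[y] = x

z = x1 - x2                            # nonzero, entries in {-1, 0, 1}
print((A @ z) % q)                     # [0 0]: a short SIS solution
```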
Chapter 5: Learning With Errors
The single most important problem in modern lattice cryptography is Learning With Errors, introduced by Regev (2005). LWE plays the role for lattice cryptography that the discrete logarithm problem plays for elliptic-curve cryptography.
An LWE sample for a secret \(s \in \mathbb{Z}_q^n\) and error distribution \(\chi\) (typically a discrete Gaussian of width \(\alpha q\)) is a pair \((a, b = \langle a, s \rangle + e \bmod q)\) with \(a \in \mathbb{Z}_q^n\) uniform and \(e \leftarrow \chi\); search-LWE asks to recover \(s\) from many samples, and decision-LWE asks to distinguish samples from uniform pairs. If the error distribution were \(0\), recovering \(s\) from \(m\) samples would be Gaussian elimination on the linear system \(b = As\); the noise transforms a trivial linear-algebra problem into a (conjecturally) hard one. Aggregating samples into matrix form, one writes \(b = A^\top s + e \bmod q\) for \(A \in \mathbb{Z}_q^{n\times m}\), \(b \in \mathbb{Z}_q^m\), \(e \leftarrow \chi^m\).
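A toy instance makes the search problem concrete: with errors present, the obvious remaining attack is exhaustive search over candidate secrets, costing \(q^n\). The parameters and seed below are illustrative and deliberately tiny so the search finishes:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m, q = 2, 24, 13                    # deliberately brute-forceable toy parameters
s = rng.integers(0, q, size=n)         # secret
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)        # errors in {-1, 0, 1}
b = (A @ s + e) % q                    # m noisy samples: b = A s + e (mod q)

# With zero error, s falls out of Gaussian elimination.  With error, we score
# each candidate by the size of its centered residual -- cost q^n candidates.
def residual(t):
    r = (b - A @ np.array(t)) % q
    return int(np.minimum(r, q - r).sum())

s_rec = min(product(range(q), repeat=n), key=residual)
print(np.array(s_rec), s)              # the search recovers the true secret here
```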
5.1 Regev’s quantum reduction
The reduction passes through the Bounded Distance Decoding problem on \(q\)-ary lattices, uses the LWE oracle to turn close-vector queries into discrete Gaussian samples, and crucially employs a quantum step to invert the Gaussian sampling and produce shorter vectors. Peikert (2009) later gave a classical reduction at the cost of the modulus \(q\) growing exponentially in the approximation factor, and Brakerski–Langlois–Peikert–Regev–Stehlé (2013) brought the modulus back down to polynomial via dimension–modulus trade-offs.
5.2 Decision and search equivalence
For prime \(q = \mathrm{poly}(n)\), search and decision LWE are polynomial-time equivalent. The reverse reduction (decision implies search) is trivial. The forward reduction (search implies decision), due to Regev, recovers the secret coordinate-by-coordinate by guessing each \(s_i \in \mathbb{Z}_q\) and using the decision oracle to test the guess. The crucial cryptographic upshot is that ciphertext distinguishability — what one needs for IND-CPA security — already implies recovery of the entire secret, so any meaningful break of an LWE-based scheme yields a secret-key recovery attack.
Chapter 6: Public-Key Encryption from LWE
The construction of public-key encryption from LWE — Regev’s original contribution — is striking for its directness. The same noisy linear-algebra structure that makes the underlying problem hard yields encryption “for free”.
6.1 Regev’s encryption scheme
The secret key is \(s \in \mathbb{Z}_q^n\); the public key is \(m\) LWE samples \((A, b = A^\top s + e)\) with \(A \in \mathbb{Z}_q^{n \times m}\). To encrypt a bit \(\mu\), pick a random subset-sum vector \(r \in \{0,1\}^m\) and output \((c_0, c_1) = (Ar, \langle b, r \rangle + \mu \lfloor q/2 \rceil)\). Decryption computes \(c_1 - \langle s, c_0 \rangle = \mu \lfloor q/2 \rceil + \langle e, r \rangle\) and rounds to the nearer of \(0\) and \(q/2\); correctness holds as long as the accumulated noise \(\langle e, r \rangle\) stays below \(q/4\). Security is a hybrid argument: decision LWE makes \((A, b)\) pseudorandom, and the leftover hash lemma then makes the ciphertext statistically close to uniform.
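The whole scheme fits in a few lines; a minimal sketch of Regev-style bit encryption with illustrative toy parameters (nowhere near secure, but the noise bound \(2m < q/4\) makes decryption always correct here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 32, 256, 3329                    # toy parameters, not a secure choice

# Key generation: m LWE samples form the public key.
s = rng.integers(0, q, size=n)             # secret key
A = rng.integers(0, q, size=(n, m))
e = rng.integers(-2, 3, size=m)            # small errors in [-2, 2]
b = (A.T @ s + e) % q

def encrypt(mu):                           # mu in {0, 1}
    r = rng.integers(0, 2, size=m)         # random subset of the samples
    return A @ r % q, (b @ r + mu * (q // 2)) % q

def decrypt(c0, c1):
    d = (c1 - s @ c0) % q                  # = mu*floor(q/2) + <e, r>  (mod q)
    return int(q // 4 < d < 3 * q // 4)    # round to the nearer of 0 and q/2

print([decrypt(*encrypt(mu)) for mu in (0, 1, 1, 0)])   # [0, 1, 1, 0]
```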
6.2 Dual Regev and the SIS/LWE duality
Dual Regev (Gentry–Peikert–Vaikuntanathan 2008) reverses the roles. The public key is a uniform \(A \in \mathbb{Z}_q^{n\times m}\); the secret key is a short \(e \in \mathbb{Z}^m\) with \(Ae = u\) for a chosen \(u \in \mathbb{Z}_q^n\) (i.e. an SIS-style trapdoor). To encrypt a bit \(\mu\), choose a random \(s \in \mathbb{Z}_q^n\), output \((c_0, c_1) = (A^\top s + e_0, u^\top s + e_1 + \mu \lfloor q/2 \rceil)\). Decrypting computes \(c_1 - \langle c_0, e\rangle\) to recover \(\mu\). The duality between primal and dual Regev reflects the duality between the two lattices \(\Lambda_q(A) = A^\top \mathbb{Z}_q^n + q\mathbb{Z}^m\) and \(\Lambda_q^\perp(A)\), which are scalings of each other’s duals. Dual Regev plays a foundational role in identity-based encryption (where \(u\) is derived from a user identity) and in the hierarchical IBE constructions of Cash, Hofheinz, Kiltz, and Peikert.
6.3 FrodoKEM and the Lindner–Peikert variant
In the Lindner–Peikert variant, the secret itself is drawn from the error distribution (the “normal form” of LWE) and both key and ciphertext consist of LWE samples, giving a more compact and symmetric scheme. FrodoKEM is the conservative plain-LWE instantiation of this design: it forgoes ring structure entirely, paying in key size for the weakest possible algebraic assumptions.
Chapter 7: Discrete Gaussians and Trapdoor Sampling
The technical workhorse linking lattice geometry to cryptography is the discrete Gaussian distribution on a lattice. A great deal of modern lattice cryptography — from worst-case reductions to digital signatures — depends on being able to sample from this distribution efficiently, given a “good” basis.
For \(\sigma > 0\) and center \(c \in \mathbb{R}^n\), define the Gaussian weight \(\rho_{\sigma,c}(x) = \exp(-\pi \lVert x - c \rVert^2 / \sigma^2)\). The discrete Gaussian distribution on \(\Lambda\) assigns
\[ D_{\Lambda,\sigma,c}(x) = \frac{\rho_{\sigma,c}(x)}{\rho_{\sigma,c}(\Lambda)}, \qquad x \in \Lambda, \]
where \(\rho_{\sigma,c}(\Lambda) = \sum_{y \in \Lambda} \rho_{\sigma,c}(y)\).
The Klein–Gentry–Peikert–Vaikuntanathan algorithm samples \(D_{\Lambda, \sigma, c}\) given any basis \(B\) provided \(\sigma \ge \max_i \lVert \tilde b_i \rVert \cdot \omega(\sqrt{\log n})\). The shorter the Gram–Schmidt vectors of \(B\), the smaller the achievable \(\sigma\), and consequently the shorter the sampled vectors. Peikert’s later “convolution” sampler and the Micciancio–Peikert “perturbation” technique drop the \(\omega(\sqrt{\log n})\) factor in important regimes.
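At the bottom of KGPV-style samplers sits a one-dimensional sampler for \(D_{\mathbb{Z},\sigma,c}\), called once per Gram–Schmidt direction. A minimal rejection-sampling sketch (the tail cut \(\tau\) and parameters are illustrative; production samplers use constant-time techniques instead):

```python
import math, random

def sample_dgauss_z(sigma, c=0.0, tau=10):
    """Rejection-sample D_{Z,sigma,c}, the discrete Gaussian on the integers."""
    lo, hi = math.floor(c - tau * sigma), math.ceil(c + tau * sigma)
    while True:
        x = random.randint(lo, hi)         # uniform proposal on a wide window
        # Accept with probability rho_{sigma,c}(x) = exp(-pi (x-c)^2 / sigma^2) <= 1.
        if random.random() < math.exp(-math.pi * (x - c) ** 2 / sigma ** 2):
            return x

random.seed(0)
xs = [sample_dgauss_z(4.0) for _ in range(20000)]
print(sum(xs) / len(xs))                   # empirical mean, close to the center 0
```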
7.1 Trapdoors and GPV signatures
A trapdoor for a uniformly random \(A \in \mathbb{Z}_q^{n\times m}\) is a basis of \(\Lambda_q^\perp(A)\) with small Gram–Schmidt norms. Ajtai (1999) and later Alwen–Peikert (2009) and Micciancio–Peikert (2012) constructed efficient algorithms to sample \((A, T)\) where \(A\) is statistically close to uniform and \(T\) is such a short basis. Given the trapdoor, one can sample short preimages: for any target \(u\), one can produce \(z\) with \(Az = u\) and \(\lVert z\rVert \le O(\sqrt{n \log q})\), distributed as a discrete Gaussian.
In the GPV signature scheme, the public key is \(A\), the secret key is the trapdoor basis, and a signature on a message \(\mu\) is a short Gaussian preimage \(z\) with \(Az = H(\mu)\), where \(H\) is a hash modelled as a random oracle. Security in the random-oracle model reduces to SIS: a forger producing \((\mu, z)\) for a fresh \(\mu\) yields a short \(z\) with \(Az = H(\mu)\), and combined with the simulator’s response one extracts a short SIS solution. The discrete Gaussian distribution is essential here: it is the only known way to sample preimages whose distribution does not leak the trapdoor. The NIST-standardized Falcon signature is a ring-based instantiation of GPV over NTRU lattices using the fast Fourier sampler of Ducas–Prest, enabling signatures of about 666 bytes at category 1.
Chapter 8: Algebraically Structured Lattices
Plain LWE is too costly for many applications: a public key is \(O(n^2)\) elements of \(\mathbb{Z}_q\). Replacing \(\mathbb{Z}\) by a polynomial ring squashes this to \(O(n)\), at the price of introducing algebraic structure that one must trust does not leak.
8.1 Ring-LWE
Let \(R = \mathbb{Z}\left[x\right]/\Phi_m(x)\) for a cyclotomic polynomial \(\Phi_m\) of degree \(n = \varphi(m)\), and \(R_q = R/qR\). The ring \(R\) is itself a lattice (under the canonical embedding into \(\mathbb{R}^n\) or \(\mathbb{C}^n\)), called the ring of integers of the \(m\)-th cyclotomic field; the resulting lattice has rich algebraic symmetry.
A Ring-LWE sample is a pair \((a, b = a \cdot s + e) \in R_q \times R_q\) with \(a\) uniform, \(s \in R_q\) the secret, and \(e\) drawn from a small-error distribution over \(R\). A single RLWE sample thus packages \(n\) plain-LWE samples sharing the same secret into two ring elements (\(O(n \log q)\) bits), with multiplication in \(R_q\) computable in \(O(n \log n)\) operations using the Number Theoretic Transform. This is what gives RLWE-based schemes their speed advantage.
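The ring arithmetic itself is just polynomial multiplication with the reduction \(x^n \equiv -1\). A schoolbook \(O(n^2)\) sketch for \(R_q = \mathbb{Z}_q[x]/(x^n+1)\) (production code would use the NTT; the sample inputs are illustrative):

```python
import numpy as np

def mul_rq(a, b, q):
    """Multiply two polynomials in Z_q[x]/(x^n + 1): negacyclic convolution."""
    n = len(a)
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] += a[i] * b[j]
            else:
                c[k - n] -= a[i] * b[j]   # wrap with a sign: x^n = -1
    return c % q

q = 17
print(mul_rq(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]), q))  # [12 15  2  9]
```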
8.2 NTRU
The NTRU cryptosystem (Hoffstein, Pipher, Silverman, 1996) predates LWE and uses a distinct algebraic problem. Let \(R = \mathbb{Z}\left[x\right]/(x^n - 1)\) (or \(x^n + 1\)) and choose two short polynomials \(f, g \in R\) with \(f\) invertible mod \(q\). The public key is \(h = g \cdot f^{-1} \bmod q\); the secret key is \((f, g)\). The hardness assumption is that distinguishing \(h\) from a uniform element of \(R_q\) is intractable. NTRU encryption pairs short messages with this trapdoor structure; modern constructions (NTRU Prime, Falcon’s underlying trapdoor) refine the parameter choices to defeat known subfield-lattice attacks of Albrecht–Bai–Ducas (2016), which exploit large modulus and overstretched parameters.
Chapter 9: Fiat–Shamir Signatures and Dilithium
The Fiat–Shamir transform converts an identification scheme (a three-message public-coin protocol) into a signature scheme by replacing the verifier’s challenge with a hash of the commitment and message. Lyubashevsky’s 2009 application of this transform to lattice problems is the source of the Dilithium signature standard.
9.1 Lyubashevsky’s identification
The prover holds a short secret \(\mathbf s\) with public key \(\mathbf t = A\mathbf s\). A round consists of a commitment \(\mathbf w = A\mathbf y\) for a random masking vector \(\mathbf y\), a small challenge \(c\), and the response \(\mathbf z = \mathbf y + c\,\mathbf s\); the verifier checks that \(\mathbf z\) is short and that \(A\mathbf z = \mathbf w + c\,\mathbf t\). The tension is that \(\mathbf z\) leaks information about \(\mathbf s\) unless the masking distribution is wide enough. To handle this, Lyubashevsky introduced rejection sampling: the prover restarts whenever \(\mathbf z\) lies outside a fixed region, ensuring the conditional distribution of \(\mathbf z\) is independent of \(\mathbf s\). This is one of the most elegant ideas in the field, and it eliminates the need for the expensive Gaussian preimage sampling that GPV-style signatures require.
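One round of the protocol, with the rejection step, can be sketched over \(\mathbb{Z}_q\) with uniform masking (all parameters below are illustrative toys, and the hashed challenge is replaced by a fixed \(c\)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 16, 32, 7681                 # toy parameters
k, gamma = 1, 1 << 14                  # challenge bound and masking width (toy)

A = rng.integers(0, q, size=(n, m))
s = rng.integers(-1, 2, size=m)        # short secret
t = A @ s % q                          # public key

def attempt(c):
    y = rng.integers(-gamma, gamma + 1, size=m)   # wide masking vector
    w = A @ y % q                                 # commitment
    z = y + c * s                                 # response
    # Rejection step: keep z only when it lies in a region reachable from
    # every possible secret, so its distribution is independent of s.
    if np.max(np.abs(z)) > gamma - k:
        return None                               # abort; retry with fresh y
    return w, z

c = 1                                  # stand-in for the hashed challenge
out = None
while out is None:
    out = attempt(c)
w, z = out
# Verification: z is short and A z = w + c t (mod q).
print(np.array_equal(A @ z % q, (w + c * t) % q))   # True
```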
Chapter 10: Kyber and Lattice-Based KEMs
CRYSTALS-Kyber (Bos, Ducas, Kiltz, Lepoint, Lyubashevsky, Schanck, Schwabe, Seiler, Stehlé 2018), standardized by NIST as ML-KEM (FIPS 203), is the dominant post-quantum key encapsulation mechanism. It is essentially a Module-LWE-based variant of Lindner–Peikert encryption combined with the Fujisaki–Okamoto transform to achieve IND-CCA security.
10.1 Construction
The IND-CPA core is Lindner–Peikert-style encryption over the module \(R_q^k\) with \(R_q = \mathbb{Z}_q[x]/(x^{256}+1)\), \(q = 3329\), and \(k \in \{2,3,4\}\): the public key is \((A, t = As + e)\) for \(A \in R_q^{k\times k}\) and small \(s, e\); a ciphertext for message \(\mu\) is \((u, v) = (A^\top r + e_1,\; t^\top r + e_2 + \mu\lfloor q/2 \rceil)\) for small fresh \(r, e_1, e_2\); decryption rounds \(v - s^\top u\). The Fujisaki–Okamoto transform then upgrades this to a chosen-ciphertext-secure KEM by using the message as a re-randomization seed and re-encrypting during decapsulation to detect tampering. Compression of ciphertext coefficients (dropping low-order bits) further shrinks bandwidth.
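The coefficient compression is simple enough to check exhaustively. A sketch of Kyber-style \(d\)-bit compression (note one assumption: `np.rint` rounds ties to even, whereas the spec rounds ties up, which can shift individual values by one):

```python
import numpy as np

q = 3329                                     # Kyber's modulus

def compress(x, d):
    """Keep d bits of a coefficient: round((2^d / q) * x) mod 2^d."""
    return np.rint(x * (1 << d) / q).astype(int) % (1 << d)

def decompress(y, d):
    return np.rint(y * q / (1 << d)).astype(int)

x = np.arange(q)                             # every possible coefficient
err = (x - decompress(compress(x, 4), 4)) % q
err = np.minimum(err, q - err)               # centered error magnitude
print(err.max())                             # about q / 2^5, roughly 104
```

The worst-case error of roughly \(q/2^{d+1}\) is simply added to the decryption noise budget when choosing parameters.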
10.2 Concrete security
The security of Kyber, like every modern lattice KEM, is estimated by the cost of running BKZ to find a short vector in a particular embedded lattice (the so-called primal attack), or to distinguish via the dual attack. The core-SVP methodology estimates the cost as \(2^{0.292\,\beta}\) (classical sieving) or \(2^{0.265\,\beta}\) (quantum sieving) for the smallest block size \(\beta\) at which BKZ would succeed. Kyber’s three parameter sets target NIST security categories 1, 3, and 5, corresponding roughly to AES-128, AES-192, and AES-256.
Chapter 11: Fully Homomorphic Encryption
Lattice cryptography is unique among public-key paradigms in supporting fully homomorphic encryption (FHE), a scheme that lets a server compute on encrypted data without decrypting it. Gentry’s 2009 thesis was the breakthrough; the current generation of schemes — BGV, BFV, CKKS — descends from it.
11.1 The basic homomorphism
The homomorphic property comes “for free” from LWE-based encryption. If \((u_i, v_i)\) encrypts \(\mu_i\) under secret \(s\), then \((u_1 + u_2, v_1 + v_2)\) encrypts \(\mu_1 + \mu_2 \bmod 2\), since the noise terms simply add. Multiplication is harder: it produces a ciphertext under a tensored secret \(s \otimes s\) with noise that grows quadratically; one then performs relinearization using a “key-switching” public key to convert back to a ciphertext under \(s\).
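The additive homomorphism can be demonstrated with a Regev-style toy scheme (illustrative parameters; the noise bound keeps decryption exact here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 32, 128, 3329                # toy Regev-style scheme, bit plaintexts
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(n, m))
e = rng.integers(-1, 2, size=m)
b = (A.T @ s + e) % q

def enc(mu):
    r = rng.integers(0, 2, size=m)
    return A @ r % q, (b @ r + mu * (q // 2)) % q

def dec(c):
    d = (c[1] - s @ c[0]) % q
    return int(q // 4 < d < 3 * q // 4)

# Adding ciphertexts componentwise adds plaintexts mod 2: the noises just add.
c0, c1 = enc(0), enc(1)
csum = ((c0[0] + c1[0]) % q, (c0[1] + c1[1]) % q)
print(dec(csum))                       # 1, an encryption of (0 + 1) mod 2
```

Note that \(1 + 1\) wraps to an encryption of \(0\), since two copies of \(\lfloor q/2 \rceil\) nearly cancel mod \(q\).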
The fundamental obstacle is noise growth. After many operations the noise saturates and decryption fails. Gentry’s seminal idea was bootstrapping: homomorphically evaluate the decryption circuit, using the encryption of the secret key as auxiliary input, to “refresh” a noisy ciphertext into a less-noisy one encrypting the same plaintext. Provided the scheme can homomorphically evaluate its own decryption with budget to spare, one obtains unbounded computation. This requires a circular-security assumption (the scheme remains secure when given an encryption of its own secret key), an assumption with no known reduction but also no known attack.
11.2 BGV, BFV, CKKS, and TFHE
The “leveled” BGV/BFV/CKKS schemes support a fixed depth of operations without bootstrapping; bootstrapping is added when unbounded depth is needed, at substantial cost. The TFHE family (Chillotti–Gama–Georgieva–Izabachène) takes a different route, supporting fast bootstrapping after every gate at the cost of slower bulk arithmetic, which is the better choice for circuits of irregular structure.
Chapter 12: Concrete Security and Parameter Selection
A theoretically secure scheme is useless without parameters. Lattice cryptography has developed a sophisticated, if still partly heuristic, framework for choosing them.
12.1 The attack toolbox
Given an LWE instance \((A, b = As + e)\) with \(A \in \mathbb{Z}_q^{m\times n}\), three families of attacks dominate.
The primal attack embeds the LWE instance in a lattice via Kannan’s construction. The relevant lattice
\[ \Lambda = \left\{ (x, y, z) : \left[A \mid I_m \mid -b\right] \begin{pmatrix} x \\ y \\ z \end{pmatrix} \equiv 0 \pmod q \right\} \]
contains the unusually short vector \((s, e, 1)\). Running BKZ-\(\beta\) on a basis of (a sublattice of) \(\Lambda\) recovers this short vector when \(\beta\) is large enough that the projected length of \((s,e,1)\) onto the last block is shorter than the Gaussian-heuristic length of the projected lattice — the so-called “2016 estimate” of Alkim–Ducas–Pöppelmann–Schwabe.
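That \((s, e, 1)\) lies in the embedding lattice is a one-line check; a minimal sketch with illustrative toy parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 97                    # toy instance
s = rng.integers(-1, 2, size=n)        # short secret
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)        # short error
b = (A @ s + e) % q

# The block matrix [A | I_m | -b] annihilates (s, e, 1) mod q, so the
# embedding lattice contains this unusually short vector.
M = np.hstack([A, np.eye(m, dtype=int), -b.reshape(-1, 1)])
v = np.concatenate([s, e, [1]])
print((M @ v) % q)                     # the zero vector
```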
The dual attack seeks short vectors in \(\Lambda_q^\perp(A)\); applying such a vector \(w\) to \(b\) yields \(\langle w, b\rangle = \langle w, e\rangle\), distinguishable from uniform when \(w\) is short enough. Recent improvements by Guo–Johansson and the MATZOV team combine dual reduction with brute-force enumeration of secret coordinates.
Combinatorial attacks (BKW of Blum–Kalai–Wasserman, Arora–Ge linearization for very small noise) target sparse-secret or low-noise regimes. None is competitive against well-parameterized cryptographic LWE.
12.2 The LWE estimator
The estimator of Albrecht, Player, and Scott (maintained as the lattice-estimator project) automates these cost estimates across the known attack families and is the de facto reference tool for parameter selection. The general design philosophy is: pick \(n\) and \(q\) so that the smallest BKZ block size that breaks the scheme is at least \(\beta \approx 400\) (for category 1) or higher; check that the error distribution gives correct decryption with overwhelming probability; ensure the modulus is small enough for efficient NTT-based arithmetic. A modulus \(q \equiv 1 \pmod {2n}\) admits a full length-\(n\) negacyclic NTT, cutting multiplication costs to \(O(n \log n)\); Kyber’s \(q = 3329\) satisfies only \(q \equiv 1 \pmod{256}\) for \(n = 256\), so it uses a truncated (one-level-short) NTT, a choice calibrated to keep coefficients small enough to multiply in 16-bit words.
12.3 Outlook
The standardization of Kyber and Dilithium in 2024, and the rollout of hybrid post-quantum TLS in 2024–2025 by Cloudflare, Google, and Apple, have moved lattice cryptography from a research curiosity to deployed infrastructure within a single decade. The conjecture that \(\mathsf{GapSVP}_\gamma\) and \(\mathsf{SIVP}_\gamma\) for polynomial \(\gamma\) resist quantum attack — the foundation of every post-quantum lattice scheme — remains untouched, though the gap between theoretical worst-case-to-average-case reductions and the parameters actually deployed remains large.
Sharpening this gap, probing the algebraic structure introduced by RLWE and Module-LWE, refining sieving algorithms, and finding new applications of trapdoor sampling and FHE are the central research directions of the next decade. The reader who has followed the geometry of Chapter 1 through the deployed standards of Chapters 9 and 10 should now be equipped to read Peikert’s survey, the NIST submissions, and the latest cryptanalytic preprints with confidence — and, more importantly, to recognize in any new construction the recurring melody of short vectors, noisy linear equations, and the Gaussian distributions that connect them.