In short
Post-quantum cryptography (PQC) is the set of classical cryptographic algorithms — running on ordinary laptops, phones, and servers — designed to resist attack by a future fault-tolerant quantum computer. In 2024, after an eight-year public competition, the U.S. National Institute of Standards and Technology (NIST) finalised its first PQC standards: ML-KEM (based on the Kyber lattice KEM) for key exchange; ML-DSA (Dilithium lattice signatures) and FN-DSA (FALCON lattice signatures, still in draft) for digital signatures; and SLH-DSA (SPHINCS+ hash-based signatures) as a conservative backup whose security rests only on hash functions. The urgent reason to migrate now, before any quantum computer threatens RSA, is harvest-now-decrypt-later: adversaries already record your encrypted traffic and will decrypt it once Q-day arrives. If your secret must stay secret for 20 years and migration takes 5 years, you were late yesterday. For Aadhaar (biometrics sensitive for a lifetime), UPI (transaction logs retained for a decade), and India's certifying authorities, the National Quantum Mission has set a 2028–2030 transition window. Google Chrome, Cloudflare, and Apple iMessage already deploy hybrid PQC+ECC in production; Indian financial infrastructure is beginning crypto-agility audits.
"Post-quantum" sounds like a future problem. It is not. Open your phone's TLS 1.3 handshake to state-bank.co.in — the one it performed two seconds ago when you opened the app — and ask this question: how long must the secrets passed during that handshake stay secret? If the answer is "five minutes" then you can ignore quantum computers; by the time any Shor-capable machine exists your transaction is ancient history. If the answer is "twenty years" — the depth of an Aadhaar biometric record, a medical chart, a classified government communication — then the ciphertext recorded on a submarine tap today is decryptable tomorrow, for some tomorrow that is probably within the working life of the people who made the recording.
This chapter is about the most boring and most important engineering problem in contemporary cryptography: replacing the algorithms your bank, your government, and your WhatsApp already run — before the machine that breaks them comes online. It is a software migration, not a physics experiment. It is happening now.
Post-quantum is classical — the first misconception to kill
PQC is not quantum. No PQC algorithm uses qubits, entanglement, or any quantum hardware. The algorithms all run on ordinary CPUs. The word "post-quantum" refers only to what they resist: an attacker with a large quantum computer.
The hardness assumptions PQC uses are different from the factoring or discrete-log assumptions Shor breaks. Lattice problems (Learning With Errors, short-integer-solution), coding problems (decoding a random linear code), hash-function second-preimage resistance, and multivariate polynomial systems are the main families. None of these is known to be solvable quickly on a quantum computer. None of them has a period-finding structure that the quantum Fourier transform can exploit the way it exploits modular exponentiation. The best known quantum attacks on these problems are either Grover-flavoured speedups (a square-root speedup on search, which at best halves the effective security level in bits) or specialised sieves (which shave constants off exponents) — not the polynomial-time collapse that Shor inflicts on RSA.
Note the qualifier known. PQC security rests on the conjecture that no future quantum algorithm will structure-break these problems the way Shor structure-broke RSA. The NIST competition was designed precisely to stress-test this conjecture publicly, with eight years of open cryptanalysis from teams worldwide.
Harvest-now, decrypt-later — why the timeline is already short
The quantum threat model chapter (chapter 151) introduced the Mosca heuristic: your data stays safe if X + Z \le Y, where X is your secrecy horizon, Z is your migration time, and Y is the years until a cryptographically relevant quantum computer exists. Re-read that rule and notice what it demands: migration must be finished no later than Y - X years from now. For secrets that must hold for decades, X alone can exceed any plausible estimate of Y — the deadline has already passed, and every further day of delay adds a day of harvested ciphertext that will eventually be readable.
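The rule fits in one line of code. This is a trivial sketch; the parameter names are ours, and the year values below are the chapter's running examples, not predictions.

```python
# Mosca heuristic: data encrypted today is safe only if X + Z <= Y, where
# X = secrecy horizon, Z = migration time, Y = years until a CRQC exists.
def mosca_safe(secrecy_horizon_x: int, migration_time_z: int,
               years_to_crqc_y: int) -> bool:
    return secrecy_horizon_x + migration_time_z <= years_to_crqc_y

# A 20-year secret with a 5-year migration loses even against a quantum
# computer 15 years away: traffic harvested today unlocks in year 15,
# while the secret had to hold until year 20.
assert mosca_safe(20, 5, 15) is False
assert mosca_safe(5, 5, 15) is True
```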
The arithmetic is unforgiving. A state-level adversary runs a submarine-cable tap on an Indian international fibre link. Every TLS handshake and every VPN session is recorded as ciphertext into a petabyte-class cold-storage vault. Today the vault is useless — classical cryptanalysis of an AES-256-protected TLS session takes longer than the age of the universe. Fifteen years from now, if a Shor-capable machine comes online, the ECDHE key-exchange portion of each 2026 handshake becomes solvable in polynomial time, the session key is recovered, and the symmetric ciphertext unlocks. Every diplomatic cable, every defence procurement negotiation, every RBI-to-bank authenticated link sent today is then readable — retroactively.
The defence is not "build a bigger RSA key." Shor's runtime is polynomial in the key length; doubling a 2048-bit RSA modulus to 4096 bits increases Shor's quantum cost by only a small polynomial factor, buying years at best. The defence is "replace the primitive entirely, before the adversary finishes harvesting."
NIST's four standards — the post-quantum zoo
In 2016 NIST opened a public competition to standardise quantum-resistant replacements for RSA/ECDH (key exchange) and RSA/ECDSA (signatures). Sixty-nine complete first-round submissions from teams across 25 countries were subjected to eight years of public cryptanalysis, narrowed to four algorithms selected for standardisation in 2022, and culminating in three FIPS standards published in August 2024 (plus a fourth in draft):
- FIPS 203 — ML-KEM (published August 2024): based on the Kyber lattice KEM. Replaces RSA/ECDH in key exchange.
- FIPS 204 — ML-DSA (published August 2024): based on the Dilithium lattice signature. Replaces RSA/ECDSA in signing.
- FIPS 205 — SLH-DSA (published August 2024): based on SPHINCS+, a stateless hash-based signature. Conservative backup whose security rests only on hash assumptions.
- FIPS 206 — FN-DSA (draft, 2024; final 2025): based on FALCON, another lattice signature, with shorter signatures than Dilithium at the cost of complex implementation.
In March 2025 NIST additionally selected HQC (a code-based KEM) as a backup for ML-KEM in case a future attack degrades lattice security. The fourth-round evaluation that produced it also examined Classic McEliece and BIKE for niche deployments.
Why three lattice-based out of four
Lattice-based cryptography hits the sweet spot of speed, size, and confidence. A Kyber key is about 1.1 kB; a Dilithium signature about 2.4 kB. These are 3–10× larger than their RSA/ECC predecessors but still small enough to fit in a TLS handshake without noticeable latency. The underlying hardness — the Learning With Errors problem, introduced by Regev (2005) — has a reduction from worst-case lattice problems to average-case instances, a rare and desirable cryptographic property: breaking a random LWE instance is provably as hard as breaking the worst case of an underlying lattice problem. The best classical algorithms (BKZ-style lattice sieving) and the best quantum algorithms (sieving with a Grover-flavoured speedup) both run in 2^{\Theta(n)} time for security parameter n; no polynomial-time quantum attack is known.
Why one hash-based — diversity as insurance
SPHINCS+ is included in the standards not because it is fast (it is slow) or small (its signatures are 8–50 kB, the largest of the four), but because its security rests on a completely different assumption: the second-preimage resistance of a hash function. If a future cryptanalytic breakthrough destroys lattice security, SPHINCS+ survives. NIST's choice of one lattice KEM + two lattice signatures + one hash signature gives a diverse portfolio, so a single mathematical surprise does not kill everything at once.
Kyber — concrete parameters and the LWE sketch
Take the ML-KEM-512 parameter set (roughly equivalent to AES-128 security). The key generator samples:
- a matrix A \in \mathbb{Z}_q^{k \times k} of uniformly random polynomials (public)
- a secret \mathbf{s} \in \mathbb{Z}_q^k of small polynomials (private)
- a small error \mathbf{e} \in \mathbb{Z}_q^k
The public key is \mathbf{b} = A\mathbf{s} + \mathbf{e}. To encapsulate a shared key, Alice samples fresh small values \mathbf{r}, \mathbf{e}_1, \mathbf{e}_2 and sends the ciphertext (\mathbf{u}, v) = (A^T \mathbf{r} + \mathbf{e}_1, \ \mathbf{b}^T \mathbf{r} + \mathbf{e}_2 + \lfloor q/2 \rfloor \cdot m), where m \in \{0, 1\} is the message bit. To decapsulate, Bob computes v - \mathbf{s}^T \mathbf{u}, rounds to the nearest multiple of \lfloor q/2 \rfloor, and recovers m.
Why this works: expand v - \mathbf{s}^T \mathbf{u} = \mathbf{b}^T \mathbf{r} + \mathbf{e}_2 + \lfloor q/2\rfloor m - \mathbf{s}^T A^T \mathbf{r} - \mathbf{s}^T \mathbf{e}_1 = (\mathbf{e}^T \mathbf{r} - \mathbf{s}^T \mathbf{e}_1 + \mathbf{e}_2) + \lfloor q/2 \rfloor m. The error term is a sum of small quantities and stays within \pm q/4; the message bit m contributes \lfloor q/2 \rfloor \cdot m, and rounding reveals which value of m was sent. Without \mathbf{s}, recovering m from (\mathbf{u}, v) requires solving an LWE instance — believed quantum-hard.
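The sketch above can be run end to end on toy parameters. This is plain LWE, not Module-LWE, with tiny dimensions and no Fujisaki-Okamoto transform — every constant here is illustrative, deliberately insecure, chosen only so the error term stays under q/4.

```python
import random

q = 3329   # the Kyber modulus
n = 8      # toy dimension; real schemes use far larger structured instances

def small() -> int:
    # small secret/noise coefficient in {-2, ..., 2}
    return random.randint(-2, 2)

# Key generation: public (A, b = A s + e), private s
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
s = [small() for _ in range(n)]
e = [small() for _ in range(n)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]

def encrypt(bit: int):
    r  = [small() for _ in range(n)]
    e1 = [small() for _ in range(n)]
    e2 = small()
    # u = A^T r + e1,  v = b^T r + e2 + floor(q/2) * bit
    u = [(sum(A[j][i] * r[j] for j in range(n)) + e1[i]) % q for i in range(n)]
    v = (sum(b[j] * r[j] for j in range(n)) + e2 + (q // 2) * bit) % q
    return u, v

def decrypt(u, v) -> int:
    # v - s^T u = (small error) + floor(q/2) * bit; round to recover the bit
    x = (v - sum(s[i] * u[i] for i in range(n))) % q
    return 1 if q // 4 < x < 3 * q // 4 else 0

u, v = encrypt(1)
assert decrypt(u, v) == 1
```

With these bounds the accumulated error \mathbf{e}^T\mathbf{r} - \mathbf{s}^T\mathbf{e}_1 + e_2 is at most 66 in absolute value, far below q/4 \approx 832, so decryption never fails.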
Concrete ML-KEM-512 sizes: public key 800 bytes, ciphertext 768 bytes, shared secret 32 bytes, key-generation cost on the order of tens of microseconds on a modern x86 core. The numbers are bigger than X25519's (32-byte public key, 32-byte key share) but small enough that a hybrid TLS handshake combining both is still well within typical handshake-size budgets.
Worked example — hybrid TLS 1.3 handshake
Example 1: a hybrid Kyber + X25519 TLS 1.3 handshake
Setup. A browser (Chrome on Android) opens a TLS 1.3 connection to a padho-wiki server at padho.wiki. Both endpoints speak the hybrid key-exchange extension — Chrome negotiated this with Cloudflare's Kyber+X25519 launch in 2024. The goal: exchange a fresh 32-byte secret that is safe against both classical and quantum attackers.
Step 1 — ClientHello. The client generates two key pairs and sends the public halves.
- An X25519 ephemeral key pair: a 32-byte public share for classical ECDH.
- An ML-KEM-768 key pair: 1184-byte public key.
The key_share extension carries both. Total handshake overhead: ≈ 1.2 kB additional — well within a TCP window, adds no round trip. Why hybrid: X25519 defends against today's real classical cryptanalysts. ML-KEM defends against tomorrow's quantum cryptanalysts. Hybrid means an attacker must break both — if even one holds, the shared secret is safe.
Step 2 — ServerHello. The server receives both public keys. It generates its own X25519 key pair and encapsulates a fresh shared secret against the client's Kyber public key, producing an ML-KEM ciphertext.
- It computes ss_{\text{ECDH}} = X25519(\text{server-priv}, \text{client-pub}) — a 32-byte shared secret from the classical side.
- It encapsulates: (\text{ct}, ss_{\text{KEM}}) = \text{ML-KEM.Encap}(\text{client-Kyber-pub}). \text{ct} is a 1088-byte ciphertext; ss_{\text{KEM}} is a 32-byte shared secret from the post-quantum side.
- It replies with ServerHello carrying server's X25519 public key + the Kyber ciphertext.
Step 3 — Client derives the shared secret.
- The client computes ss_{\text{ECDH}} = X25519(\text{client-priv}, \text{server-pub}) — the same 32 bytes the server derived.
- The client decapsulates: ss_{\text{KEM}} = \text{ML-KEM.Decap}(\text{client-Kyber-priv}, \text{ct}) — the same 32 bytes the server derived.
- It combines them: ss_{\text{hybrid}} = \text{HKDF-Extract}(0, ss_{\text{ECDH}} \,\|\, ss_{\text{KEM}}). This is the base for the new TLS master secret.
Why combine with HKDF: simple concatenation then a one-way hash is the standardised way to mix two secrets and keep the combined strength equal to the stronger of the two. Even if either component is zero-entropy (worst case: one algorithm is fully broken), the other component still dominates the hash output.
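The mixing step is small enough to show directly. This is a minimal sketch using Python's standard library: HKDF-Extract is just HMAC with the salt as key, and the two 32-byte inputs stand in for the real X25519 and ML-KEM shared secrets. Real TLS 1.3 feeds the concatenation into its full key schedule, not a single extract call.

```python
import hashlib, hmac, os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): HMAC keyed with the salt over the input material
    return hmac.new(salt, ikm, hashlib.sha256).digest()

ss_ecdh = os.urandom(32)   # stand-in for the X25519 shared secret
ss_kem  = os.urandom(32)   # stand-in for the ML-KEM shared secret

# Concatenate, then one-way mix: the output is unpredictable as long as
# EITHER input is unpredictable, which is the whole point of hybrid.
ss_hybrid = hkdf_extract(b"\x00" * 32, ss_ecdh + ss_kem)
assert len(ss_hybrid) == 32
```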
Step 4 — Handshake verified. Both sides use ss_{\text{hybrid}} to derive HKDF keys for record encryption (AES-256-GCM, unchanged from pre-PQC TLS) and Finished-message MACs. Server authenticates with a Dilithium-based certificate signed by Cloudflare's ML-DSA-backed CA chain (still in hybrid pilot with RSA); client verifies the chain and either trusts or aborts. Application data flows.
Step 5 — Wire cost. Compared to an X25519-only TLS 1.3 handshake:
- ClientHello: +1184 bytes (ML-KEM-768 public key)
- ServerHello: +1088 bytes (Kyber ciphertext)
- Certificate verify: +2000 bytes or so, depending on how many certificates in the chain are Dilithium-signed (still in transition — hybrid today)
- Total: ~4 kB extra per handshake, no additional round trips on a warm TCP connection
Result. A single TLS connection whose confidentiality and integrity both survive any attacker who can solve at most one of (X25519 discrete log, ML-KEM-768 LWE). Current deployments assume X25519 will fall to Shor; ML-KEM is the defence. The pattern — hybrid now, drop classical later — is the migration path for all TLS, SSH, VPN, and messaging protocols.
What this shows. Post-quantum migration is not "rip out RSA, install Kyber." It is "add Kyber alongside the existing primitive, wait until PQC confidence is high, then retire the classical half." The wire cost (~4 kB extra per cold handshake) is modest; the implementation cost is high (library rewrites, certificate-chain changes, HSM firmware updates); and the protection starts accruing from the first handshake onwards.
Worked example — Dilithium signatures vs RSA-2048
Example 2: sizes and speeds, RSA-2048 vs ML-DSA-65
Setup. A UPI payment authorisation carries a digital signature from the paying bank's HSM. The signature is verified by NPCI's switch before the transaction reaches the beneficiary bank. Today the signature is RSA-2048 or ECDSA-P256. Tomorrow it will be ML-DSA-65 (Dilithium at the 192-bit classical security level, NIST category 3). Compare the wire cost and performance.
Step 1 — Signature sizes.
| Scheme | Public key | Signature |
|---|---|---|
| RSA-2048 | 272 B | 256 B |
| ECDSA-P256 | 64 B | 72 B |
| ML-DSA-65 (Dilithium) | 1952 B | 3293 B |
| FN-DSA-512 (FALCON) | 897 B | 666 B |
| SLH-DSA-128s (SPHINCS+) | 32 B | 7856 B |
Why Dilithium's signature is 13× RSA's: the security comes from lattice problems, whose instances are larger than integer-factoring instances at the same security level. A Dilithium signature encodes a vector of small polynomial coefficients, with some compression; a few kilobytes is the going rate for Fiat-Shamir-style lattice signatures with straightforward, side-channel-friendly implementations (FALCON gets below a kilobyte, at the implementation cost noted below).
Step 2 — Signing speed on a laptop CPU. On a single x86 core with standard libraries (OpenSSL 3.4 with the oqs-provider):
| Scheme | Sign | Verify |
|---|---|---|
| RSA-2048 (sign with CRT) | ~1.5 ms | ~0.04 ms |
| ECDSA-P256 | ~0.07 ms | ~0.25 ms |
| ML-DSA-65 | ~0.3 ms | ~0.1 ms |
| FN-DSA-512 | ~0.5 ms | ~0.05 ms |
| SLH-DSA-128s | ~3 ms | ~3 ms |
Dilithium is faster than RSA-2048 for signing and competitive for verification. FALCON is faster on the verifier side but requires floating-point Gaussian sampling that is difficult to implement without side channels. SPHINCS+ is slowest across the board and has the largest signatures, but its security depends only on hash functions.
Step 3 — UPI impact per transaction.
- Current RSA-2048 signature: 256 bytes added to transaction payload.
- Dilithium signature: 3293 bytes added. Difference: ~3 kB per UPI transaction.
UPI handles ~12 billion transactions per month (2026 numbers). Uplifting signatures from RSA to Dilithium adds ~36 terabytes/month of signature data to the UPI switching layer. This is substantial but manageable: UPI's existing backhaul handles orders of magnitude more. The switching latency budget — typically < 5 seconds end-to-end for a user-visible transaction — comfortably absorbs the Dilithium verify cost (~0.1 ms on commodity hardware; ~10 microseconds on a modern HSM).
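The back-of-envelope arithmetic behind the ~36 TB/month figure, using the signature sizes from the table above and the document's 2026 volume estimate:

```python
# Monthly overhead of swapping RSA-2048 signatures for ML-DSA-65 (Dilithium)
# in UPI transaction payloads. Sizes from the comparison table; the 12
# billion/month volume is the chapter's 2026 estimate.
TX_PER_MONTH        = 12_000_000_000
RSA_SIG_BYTES       = 256
DILITHIUM_SIG_BYTES = 3293

extra_per_tx = DILITHIUM_SIG_BYTES - RSA_SIG_BYTES           # 3037 B ~ 3 kB
extra_per_month_tb = extra_per_tx * TX_PER_MONTH / 1e12      # terabytes

print(f"{extra_per_tx} B extra per transaction")
print(f"{extra_per_month_tb:.1f} TB extra signature data per month")  # ~36.4
```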
Step 4 — Certificate chain impact. X.509 certificates chain up to a CCA (Controller of Certifying Authorities) root. Each certificate in the chain carries the issuing authority's signature. A typical chain is 3–5 certificates. A full PQC chain in Dilithium: 5 × 3293 = ~16 kB of signatures per chain, vs ~1.5 kB for RSA. Browser handshakes that fetch a chain every time (cache-cold) pay this once; subsequent reuses amortise.
Result. Dilithium is a practical, drop-in replacement for RSA in the UPI stack. The signature is 13× bigger but signing and verifying are both at least as fast. The backhaul cost is absorbed by existing infrastructure. The migration-operational challenge is in the HSM firmware (new algorithms, new key formats, new TRNG seeding requirements) and the CA root ceremony (re-rooting the CCA hierarchy with PQC keys) — both solvable, both underway under the National Quantum Mission.
What this shows. The PQC-vs-classical gap in signature size is material but not catastrophic. Dilithium has approximately the same signing speed as RSA and is well within the operating envelope of modern HSMs and NIC hardware. The serious engineering work is in rolling out new algorithms to every certificate, every key ceremony, every firmware image — the long tail of a national cryptographic infrastructure, all of which is in motion under India's migration plan.
India's migration — the National Quantum Mission plan
India's National Quantum Mission (NQM), approved in April 2023 with a ₹6003 crore budget over 8 years, has four technology verticals: quantum computing, quantum communication, quantum sensing, and post-quantum cryptography and materials. The PQC pillar is chaired by Dr. Debdeep Mukhopadhyay's group at IIT Kharagpur and draws on IIT Bombay, IIT Madras, IISc, ISI Kolkata, and CDAC for algorithm and implementation expertise.
The infrastructure targets
- Aadhaar / UIDAI. The UIDAI signs authentication responses with RSA-2048 keys held in HSMs. A lifetime-secrecy biometric dataset for 1.3 billion residents is the largest single PQC-migration target in India. The NQM roadmap targets hybrid RSA+Dilithium authentication signatures by 2028, full Dilithium by 2030, with backward-compatibility bridges through 2032.
- UPI / NPCI. Transaction signatures currently use ECDSA-P256. NPCI has been auditing hybrid ECDSA+Dilithium handshakes with participating banks since 2025; a production pilot is scheduled for 2027. Transaction logs retained for tax audits (10 years) are within the harvest-now window, making the migration genuinely time-sensitive.
- DigiLocker and GSTIN. Document signatures currently rely on the CCA hierarchy, which uses RSA-4096 at the root. CCA has begun evaluating Dilithium-based root-ceremony procedures; the first PQC-signed DigiLocker certificate was issued in late 2025 in a limited pilot.
- Defence and diplomatic communications. The Ministry of External Affairs and Ministry of Defence hold the deepest secrecy horizons — communications must remain confidential for 50–75 years. These are the workloads where harvest-now-decrypt-later pressure is most acute, and they are the first adopters of full PQC (SPHINCS+ for most-conservative use cases, Dilithium for mainline).
- Banking HSMs. The RBI's 2024 cybersecurity framework mandates "crypto-agility" — the architectural property that algorithms can be swapped with a configuration change rather than a code rewrite. All Scheduled Commercial Banks have been directed to achieve crypto-agility by end of 2027; PQC deployment follows.
Indian academic contributions
- IIT Kharagpur (Mukhopadhyay group): side-channel-resistant hardware implementations of Kyber and Dilithium on Indian-fabricated chips. Their 2024 ARM Cortex-M4 optimisation is among the fastest public Dilithium implementations on constrained devices.
- IIT Madras: SeCCI (Secure Computing and Communication Infrastructure) centre works on lattice cryptanalysis and formal verification of PQC implementations.
- IISc Bangalore: lattice-based homomorphic encryption, with national-importance applications in private biometric matching.
- ISI Kolkata: number-theoretic foundations; historical leadership in public-key cryptanalysis.
CERT-In (Computer Emergency Response Team-India), under MeitY, has issued guidance on crypto-agility since 2024 and is preparing a national PQC transition guideline scheduled for publication in 2026, harmonised with NIST FIPS 203/204/205 and the draft 206.
The industry trajectory
- Google Chrome. Started CECPQ1 hybrid experiments in 2016, CECPQ2 in 2019, and Kyber+X25519 ("X25519Kyber768" on the wire) in production as of 2024. A significant fraction of Chrome TLS 1.3 connections today are already hybrid PQC.
- Cloudflare. Rolled hybrid Kyber+X25519 to Cloudflare's edge in 2024; about 35% of Cloudflare-fronted TLS traffic in early 2026 is PQC-hybrid on the key-exchange side.
- Apple iMessage. The "PQ3" protocol, announced February 2024, wraps Kyber into the iMessage end-to-end encryption layer. Keys ratchet forward with ongoing rekeying; once a conversation is PQ3-active, even a future quantum adversary cannot decrypt subsequent messages.
- Signal. Published a PQXDH extension in 2023 (Kyber + ECDH) and deployed it to all Signal users through 2024 updates.
- Indian banks. State Bank of India, HDFC, and ICICI have begun internal crypto-agility audits. No production PQC deployment to customer-facing endpoints yet; Q1 2027 is the industry consensus target.
Common confusions
- "PQC uses quantum computers." No. PQC algorithms run on classical hardware — the same phones, laptops, and servers that run RSA today. The name refers to what they resist, not how they compute.
- "The quantum threat is 15 years away, so PQC is not urgent." Harvest-now-decrypt-later inverts this. If your secret must stay secret for 20 years and migration takes 5, you needed to start 5 years ago for secrets encrypted today to be safe at the arrival of the quantum computer. The "15 years" dates the attacker's capability; the secrecy horizon dates your requirement.
- "All PQC is lattice-based." No. NIST standardised one hash-based scheme (SPHINCS+) precisely to have diversity against future lattice attacks. Code-based schemes (Classic McEliece, HQC) remain under evaluation; multivariate-polynomial and isogeny-based schemes have been proposed (most isogeny schemes collapsed after a 2022 attack on SIKE).
- "Lattice problems are quantum." No. The Learning With Errors problem, and lattice problems in general, are classical mathematical objects — problems about integer matrices and arithmetic modulo q. The name "lattice" refers to the geometric structure of integer combinations of basis vectors, not to any quantum mechanism.
- "PQC replaces QKD." They are complementary, not substitutes. PQC is a software upgrade to every device that uses cryptography — the internet-scale answer. QKD is a physical-layer protocol requiring dedicated hardware and a quantum channel — valuable for specialised links (inter-data-centre, government backbones) where the hardware investment is worth the physical-layer guarantee. Sensible architectures use both.
- "If I use AES-256 and symmetric keys, I do not need PQC." Symmetric-key cryptography is merely weakened by Grover (square-root speedup), not broken. But how did you get a shared symmetric key? Usually via an RSA or ECDH key exchange, which Shor breaks. Without PQC key exchange, your shared AES key is extractable from the handshake, and the AES-256 session is then trivially decryptable.
Going deeper
If you understand that PQC is classical software resisting a quantum attacker, that NIST finalised four standards (Kyber, Dilithium, FALCON, SPHINCS+) between 2022 and 2024, that harvest-now-decrypt-later forces immediate migration, and that India's National Quantum Mission has scheduled Aadhaar and UPI transitions through 2028–2030 — you have chapter 158. The material below is for readers who want the formal LWE definition, Kyber's concrete parameter sets, SPHINCS+'s hash-tree construction, and the detailed CERT-In regulatory framework.
Learning With Errors — formal definition
The decisional LWE problem in dimension n over modulus q with error distribution \chi: distinguish, with noticeable advantage, between samples (A, A \mathbf{s} + \mathbf{e}) \in \mathbb{Z}_q^{m \times n} \times \mathbb{Z}_q^m for uniformly random A, small secret \mathbf{s}, small error \mathbf{e} \leftarrow \chi^m — and uniformly random samples (A, \mathbf{u}) \in \mathbb{Z}_q^{m \times n} \times \mathbb{Z}_q^m.
Regev (2005) proved that solving LWE is at least as hard as approximating GapSVP and SIVP — worst-case lattice problems believed hard both classically and quantumly. The reduction is remarkable: an algorithm that solves random LWE instances can be used to solve the worst case of an underlying lattice problem. This is the kind of worst-case-to-average-case equivalence that is usually unavailable in cryptography; it is one reason lattice assumptions are widely trusted.
Kyber uses the Module-LWE variant — LWE with structured matrices that admit efficient polynomial-multiplication operations via number-theoretic transforms. The structure shrinks the public matrix description from O(n^2) field elements to O(n), and makes multiplication O(n \log n) time, at a believed-small security cost.
Kyber parameter sets
NIST ML-KEM specifies three parameter sets:
| Set | k | n | q | Security (classical / quantum) | pk size | ct size |
|---|---|---|---|---|---|---|
| ML-KEM-512 | 2 | 256 | 3329 | 128 / 118 bits | 800 B | 768 B |
| ML-KEM-768 | 3 | 256 | 3329 | 192 / 184 bits | 1184 B | 1088 B |
| ML-KEM-1024 | 4 | 256 | 3329 | 256 / 256 bits | 1568 B | 1568 B |
Module rank k \in \{2, 3, 4\} sets the level. Most deployments use ML-KEM-768, which is NIST "category 3" — security broadly comparable to AES-192. Speed is essentially constant across parameter sets: the dominant operation is polynomial multiplication in \mathbb{Z}_{3329}[x]/(x^{256}+1), implemented with an NTT at a few microseconds per multiply.
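The ring arithmetic the NTT accelerates can be shown in its naive form. This is a schoolbook sketch of multiplication in \mathbb{Z}_q[x]/(x^n + 1) — the wrap-around rule x^n = -1 is the "negacyclic" structure; real implementations replace the double loop with an NTT.

```python
# Schoolbook multiplication in the Kyber ring Z_q[x]/(x^n + 1).
# O(n^2); the NTT does the same job in O(n log n).
q, n = 3329, 256

def polymul(a: list, b: list) -> list:
    res = [0] * n
    for i in range(n):
        if a[i] == 0:
            continue
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:
                res[k - n] = (res[k - n] - a[i] * b[j]) % q  # x^n = -1
    return res

# Sanity check: x * x^(n-1) = x^n = -1 mod q
x    = [0, 1] + [0] * (n - 2)
x_n1 = [0] * (n - 1) + [1]
assert polymul(x, x_n1) == [q - 1] + [0] * (n - 1)
```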
SPHINCS+ — how hash-based signatures work
SPHINCS+ is a stateless hash-based signature scheme built on the Merkle tree idea. The public key is the root of a hypertree of Merkle trees. A signature for message m is produced as follows:
- Hash m (with a randomiser) to an index i and a message digest d.
- Sign d with the few-time signature scheme (FORS) at leaf position i of the bottom layer.
- Authenticate the FORS public key up through a stack of XMSS hypertree layers using Merkle authentication paths.
The signature is a sequence of hash-chain values and Merkle paths — hence the 7–50 kB signature sizes. Verification traverses the same tree, hashing upward until the root is reached; the root must match the public key.
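The authentication-path mechanics are simple enough to sketch. This toy tree shows only the path logic — real SPHINCS+ adds tweakable hashing, FORS, and the hypertree on top of it; the 8-leaf tree and byte leaves are illustrative.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(nodes: list) -> bytes:
    while len(nodes) > 1:
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def auth_path(nodes: list, index: int) -> list:
    # collect the sibling hash at every level from leaf to root
    path = []
    while len(nodes) > 1:
        path.append(nodes[index ^ 1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        index //= 2
    return path

def verify(leaf: bytes, index: int, path: list, root: bytes) -> bool:
    node = leaf
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

leaves = [h(bytes([i])) for i in range(8)]
root = merkle_root(list(leaves))
path = auth_path(list(leaves), 5)     # only 3 sibling hashes for 8 leaves
assert verify(leaves[5], 5, path, root)
```

A path of \log_2(\text{leaves}) sibling hashes is all the verifier needs; SPHINCS+'s kilobytes come from stacking many such paths plus the FORS and WOTS chain values.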
Why SPHINCS+ is stateless: the index i is chosen pseudorandomly from the message and a per-signature randomiser, not from a counter the signer must remember. Stateful hash signatures (XMSS, LMS) require the signer to track "which one-time key did I use last?" — a nightmare for HSMs that might be duplicated or rolled back. SPHINCS+ trades that statefulness for larger signatures, and is the only NIST PQC signature that is fully stateless and hash-only.
Security rests only on the second-preimage resistance of the underlying hash function (SHA-2 or SHAKE). No number-theoretic or lattice assumption is involved. This is the "conservative backup" property: if lattice cryptography collapses in 2032 due to some cryptanalytic breakthrough, SPHINCS+ is still secure as long as SHA-256 holds (and if SHA-256 falls, roughly every cryptographic system on Earth falls together — SPHINCS+'s failure mode is correlated with the end of practical cryptography anyway).
CERT-In's crypto-agility framework
CERT-In's draft PQC transition guideline (2026) specifies five pillars:
- Algorithmic inventory. Organisations must enumerate every cryptographic primitive in use — TLS cipher suites, VPN protocols, signing keys, database encryption, backup encryption, code-signing.
- Crypto-agility. Every primitive must be abstracted behind an interface that permits runtime or config-time algorithm switching. Hard-coded algorithm choices are disallowed in new deployments.
- Hybrid deployment. Through 2028, new deployments must use hybrid classical+PQC algorithms (Kyber+X25519, Dilithium+RSA, etc.) rather than pure PQC, to hedge against PQC cryptanalytic surprises.
- Migration timeline. Critical national infrastructure (banking, telecom, power grids, Aadhaar) must achieve hybrid deployment by end of 2028 and full PQC by 2030. General government IT by 2032.
- Audit and certification. The Bureau of Indian Standards (BIS) is preparing IS standards aligned with NIST FIPS 203/204/205 for procurement; all new government cryptographic procurements after 2027 must cite these standards.
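What the crypto-agility pillar asks for can be sketched as an indirection layer: call sites name an abstract operation and the concrete algorithm comes from configuration. The registry, the algorithm names, and the HMAC stand-ins (used here in place of real RSA/Dilithium signers, to keep the sketch stdlib-only) are all illustrative.

```python
import hashlib, hmac

SIGNERS = {}  # name -> signing function; populated by the decorator

def register(name: str):
    def deco(fn):
        SIGNERS[name] = fn
        return fn
    return deco

@register("classical-mac")       # stand-in for an RSA/ECDSA signer
def sign_classical(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

@register("pqc-mac")             # stand-in for an ML-DSA signer
def sign_pqc(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha3_256).digest()

CONFIG = {"signer": "pqc-mac"}   # migration = a one-line config change

def sign(key: bytes, msg: bytes) -> bytes:
    # call sites never name an algorithm; they go through the config
    return SIGNERS[CONFIG["signer"]](key, msg)

assert sign(b"k", b"payload") == sign_pqc(b"k", b"payload")
```

Hard-coding `sign_classical` at every call site is exactly the anti-pattern the framework disallows: with the registry, swapping algorithms touches one configuration value, not every caller.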
The framework is more aggressive than the U.S. NSA's CNSA 2.0 timeline (which targets 2035 for critical systems) because India's harvest-now-decrypt-later exposure is asymmetric: the data is uniquely sensitive (Aadhaar biometrics), the legal mechanisms for foreign-storage oversight are limited, and the strategic motivation for adversaries is acute.
The lattice-security debate
A persistent concern is that lattice security is less studied than integer factorisation. Factoring has been a serious research target since before World War II; lattice problems are a newer and thinner literature. The best classical attack — BKZ sieving with heuristic estimates — has been repeatedly refined, and the gap between conservative and optimistic parameter estimates is a meaningful source of disagreement. Kyber's parameters are chosen to resist BKZ with very large block sizes, giving significant margin; but if a fundamentally new lattice-sieving technique emerges, the parameter sets might need to grow. NIST's 2024 decision to also standardise HQC (code-based) as a Kyber backup is precisely hedging against this risk.
Why isogeny schemes collapsed
SIKE (Supersingular Isogeny Key Encapsulation) was a NIST fourth-round candidate until 2022, when Castryck and Decru published a polynomial-time classical attack using techniques from arithmetic geometry. SIKE was eliminated overnight. The lesson: "nobody knows a quantum algorithm" is not the same as "nobody knows a classical algorithm we have not yet discovered." NIST's conservative portfolio choice — lattice + hash, with code as backup — reflects this humility. Cryptography requires decades of cryptanalysis to build confidence; the 2016–2024 competition is only the beginning of PQC's true stress-test.
Where this leads next
- The Quantum Threat Model — the bigger map of which primitives break, which bend, and why Shor is qualitatively different.
- Factoring and RSA — the structural weakness that motivates the entire PQC migration.
- Lattice-Based Cryptography — a full treatment of LWE, Module-LWE, and how Kyber and Dilithium use them.
- BB84 Protocol — the complementary physical-layer answer to post-quantum key distribution.
- Resource Estimates for RSA-2048 — how many qubits, how many gates, how long to actually break RSA once the hardware exists.
References
- NIST, Post-Quantum Cryptography Project, including FIPS 203/204/205 — csrc.nist.gov/projects/post-quantum-cryptography.
- CRYSTALS-Kyber team, Kyber specification (round 3, 2021) — pq-crystals.org/kyber.
- CRYSTALS-Dilithium team, Dilithium specification (round 3, 2021) — pq-crystals.org/dilithium.
- Michele Mosca, Cybersecurity in an era with quantum computers: will we be ready? (2015) — eprint.iacr.org/2015/1075.
- Wikipedia, Post-quantum cryptography.
- Apple Security Research, iMessage with PQ3: The new state of the art in quantum-secure messaging at scale (2024).