In short

Code-based cryptography — invented by Robert McEliece in 1978 — rests on the hardness of decoding a general linear error-correcting code. Forty-eight years of public cryptanalysis have failed to break it. Its one flaw: public keys of hundreds of kilobytes to a megabyte, orders of magnitude larger than RSA or Kyber. Classic McEliece is a NIST Round 4 finalist for conservative applications where key size is acceptable; HQC (another code-based KEM) was selected by NIST in March 2025 as a backup to ML-KEM. Isogeny-based cryptography, by contrast, promised tiny keys and elegant mathematics — until July 2022, when Wouter Castryck and Thomas Decru published a purely classical attack that broke SIKE, a NIST Round 4 finalist, on a single laptop in about an hour. The Castryck-Decru paper killed the scheme overnight and eliminated an entire branch of post-quantum candidates from NIST. One isogeny variant, CSIDH, uses a different group structure and survives — but runs thousands of times slower than a lattice KEM, so it occupies a niche rather than a mainstream slot. The lesson of 2022 echoes through every PQC discussion: schemes need decades of cryptanalysis to build confidence, elegant new mathematics can collapse in an afternoon, and a conservative portfolio with mathematical diversity is worth more than a single beautiful algorithm. NIST's current portfolio — lattices (Kyber, Dilithium, FALCON) for mainline, SPHINCS+ for hash-based insurance, HQC and Classic McEliece for code-based diversity — is shaped directly by what happened to SIKE.

You have read the post-quantum chapter. You know that NIST standardised ML-KEM (Kyber) for key exchange and three signature schemes, two of which rest on lattices. You have probably asked the obvious question: if lattices are so good, why did NIST run a competition with sixty-nine first-round submissions for eight years? The answer is that NIST wanted a portfolio. A single cryptographic family — however trusted — is a single point of failure. If somebody finds a fast lattice-sieving algorithm in 2032, every NIST lattice standard fails together, and the internet's encryption falls in a day. So the competition reached for diversity: hash functions as a conservative backup, codes as an even older backup, isogenies as a promising new direction.

This chapter is about the two families that did not make it to the main stage — code-based cryptography, which did not win because its keys are too large, and isogeny-based cryptography, which did not win because one of its flagship schemes was catastrophically broken in July 2022. Together they tell you two stories the lattice chapter could not. The first is that the oldest public-key cryptosystem still believed secure is a code-based one from 1978, which has outlived RSA in conceptual age and will probably outlive it in deployed age. The second is what happens when an elegant, young cryptographic scheme gets attacked in the open — how fast confidence can collapse when the underlying assumption turns out to admit a polynomial-time classical algorithm nobody had found yet.

Code-based cryptography — the oldest surviving public-key idea

RSA was published in 1977. One year later, Robert McEliece — then at the Jet Propulsion Laboratory — proposed a different public-key scheme based on a completely different hard problem: decoding a random linear error-correcting code. Factoring looked powerful; coding theory looked old-fashioned. Almost nobody paid attention to McEliece's scheme for two decades. Meanwhile, RSA took over the internet.

Now it is 2026. Shor's algorithm breaks RSA in polynomial time on a future quantum computer. Classic McEliece, as the scheme's modern instantiation is called, is not broken by Shor, Grover, or any known algorithm. It has survived forty-eight years of open cryptanalysis without a structural weakness. Every post-quantum standardisation body that meets today takes it seriously.

The scheme — a picture before the formalism

Forget the algebra for a moment. Here is what Alice does if she wants to receive messages from Bob using McEliece.

  1. Alice picks a particular error-correcting code \mathcal{C} — a binary Goppa code — that she knows how to decode efficiently. This is her secret.
  2. She scrambles the code. She picks a random permutation of the code's columns and a random invertible linear map applied on the left, and applies both. The scrambled code is mathematically equivalent to the original (same error-correcting power) but looks like a random linear code.
  3. The scrambled code — specifically, its generator matrix G' = SGP — is Alice's public key. She publishes it.
  4. To encrypt a message m for Alice, Bob treats m as a codeword input, multiplies by G' to produce a valid (scrambled) codeword, and then adds a random error vector e of weight exactly t — that is, exactly t bit-flips in random positions.
  5. The ciphertext is c = mG' + e. Bob sends it.
  6. Alice undoes the permutation, decodes \mathcal{C} (which she can do because she picked a decodable code), then un-scrambles the left-multiplication. She recovers m.

Anyone intercepting c without the secret sees a random-looking linear code with a random error pattern added. Decoding a general linear code with t errors is NP-hard (Berlekamp-McEliece-van Tilborg, 1978) and is believed to remain hard for quantum computers.
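The six steps above fit in a few dozen lines of code. Below is a minimal sketch in which the tiny Hamming (7,4) code (which corrects t = 1 error and has a trivial syndrome decoder) stands in for Alice's Goppa code, and the final unscrambling is brute-forced over the 16 possible messages — every matrix and parameter is a toy-scale illustration, nothing like the real scheme's:

```python
import itertools, random

random.seed(7)

# Hamming (7,4) generator and parity-check matrices over GF(2).
# It corrects t = 1 error via syndrome lookup -- our stand-in for a Goppa code.
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def vec_mat(v, M):  # row vector times matrix, over GF(2)
    return [sum(vi * row[j] for vi, row in zip(v, M)) % 2 for j in range(len(M[0]))]

msgs = [list(m) for m in itertools.product([0, 1], repeat=4)]

# --- key generation: scramble the code ---
while True:  # random 4x4 scrambler S, retried until invertible
    S = [[random.randint(0, 1) for _ in range(4)] for _ in range(4)]
    if len({tuple(vec_mat(m, S)) for m in msgs}) == 16:
        break
P = random.sample(range(7), 7)             # secret column permutation
SG = [vec_mat(row, G) for row in S]        # S * G
G_pub = [[row[P[j]] for j in range(7)] for row in SG]  # public key G' = S G P

def encrypt(m):                            # Bob: c = m G' + e, e of weight 1
    c = vec_mat(m, G_pub)
    c[random.randrange(7)] ^= 1
    return c

def decrypt(c):                            # Alice: undo P, decode, undo S
    w = [0] * 7
    for j in range(7):
        w[P[j]] = c[j]                     # inverse permutation
    s = tuple(sum(h[j] * w[j] for j in range(7)) % 2 for h in H)
    if any(s):                             # syndrome equals the error's H-column
        w[[tuple(h[j] for h in H) for j in range(7)].index(s)] ^= 1
    u = w[:4]                              # systematic code: first 4 bits = m S
    return next(m for m in msgs if vec_mat(m, S) == u)

assert all(decrypt(encrypt(m)) == m for m in msgs)
print("all 16 messages round-trip")
```

An eavesdropper holding only G_pub sees a scrambled matrix; here the code is so small that exhaustive search is trivial, but the structure of keygen, encrypt, and decrypt mirrors the real scheme.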

Figure — McEliece encryption and decryption flow. Bob computes c = mG' + e (with e of weight exactly t) and sends it across the public channel; Alice applies P⁻¹, Goppa-decodes to strip e, then applies S⁻¹ to recover m. An attacker without the private S, G, P sees only a random-looking code, and general random-code decoding is NP-hard (Berlekamp et al., 1978).
The trapdoor in McEliece is not a number-theoretic structure like factoring — it is a hidden decoding algorithm. Alice knows the code is a Goppa code (a specific algebraic code with a fast decoder); the attacker only sees a scrambled generator matrix that looks random. Decoding a random code is hard; decoding a Goppa code with the right parameters is easy; the two must be indistinguishable to the attacker for the scheme to work. Forty-eight years of cryptanalysis have not shown otherwise.

The trapdoor — what makes Goppa codes special

The choice of code matters intensely. Alice needs a code that:

  • has an efficient decoding algorithm she can run using her secret description of the code;
  • corrects enough errors t that generic decoding of the ciphertext is infeasible;
  • looks indistinguishable from a random linear code once scrambled.

The original McEliece choice was binary Goppa codes — a family due to V. D. Goppa (1970) with an efficient decoder (Patterson's algorithm, quadratic time in the code length) and an algebraic structure that has proven resistant to distinguishing attacks for decades. Alternative codes — Reed-Solomon, Reed-Muller, LDPC, MDPC, algebraic geometry codes — have all been tried; several have been broken (e.g. Sidelnikov-Shestakov 1992 broke the Niederreiter scheme using Reed-Solomon codes). Classic McEliece, the current NIST submission, stays with binary Goppa codes because they have the longest track record.

Why Goppa codes survive when Reed-Solomon doesn't: a Reed-Solomon code has such a rigid algebraic structure that even after scrambling, attackers found ways to reverse-engineer the underlying polynomial structure from the public generator matrix. Binary Goppa codes are defined over a binary extension field with a more complex support set, and after scrambling their generator matrices have withstood every known distinguishing attack. The resistance is empirical rather than proved — the scheme rests on this empirical confidence.

The key-size problem

Here is the bad news. A McEliece public key is a k \times n generator matrix over \mathbb{F}_2, where n is the code length and k is its dimension. For the Classic McEliece parameter set mceliece6960119 (NIST category 5 — roughly 128-bit security even against a quantum attacker):

  • code length n = 6960, dimension k = 5413, error weight t = 119
  • public key: about 1.05 MB

For mceliece8192128 (the largest parameter set):

  • code length n = 8192, dimension k = 6528, error weight t = 128
  • public key: about 1.36 MB

Compare this to:

  • Kyber-768 (ML-KEM): public key 1,184 bytes
  • RSA-2048: public key about 256 bytes
  • X25519 / P-256 ECC: public key 32-64 bytes

McEliece's public key is roughly 1000× larger than Kyber's and four orders of magnitude larger than ECC. For a TLS handshake where the public key is transmitted on every connection, this is a dealbreaker. For a long-lived certificate stored once and reused for a decade, it is acceptable. For an HSM that stores ten public keys total, it is fine.

Ciphertext size is modest — Classic McEliece's Niederreiter-style ciphertexts are around 200 bytes — and encryption/decryption are fast (microseconds on modern hardware once the key is loaded). The pathology is specifically the public key.

HQC — a code-based KEM with smaller keys

In March 2025 NIST selected HQC (Hamming Quasi-Cyclic) as a fifth standard — a code-based KEM as backup to Kyber. HQC uses quasi-cyclic codes (structured codes with rotational symmetry) to get public keys of about 2.2-7.2 kB and ciphertexts about 4.5-14 kB — larger than Kyber but a thousand times smaller than Classic McEliece. Security reduction is to the Syndrome Decoding Problem on quasi-cyclic codes (QCSD), a narrower assumption than general decoding but one that has held up for over a decade.

HQC's value in the portfolio is precisely the same as SPHINCS+'s: diversity. If a future attack structure-breaks lattice cryptography, HQC gives NIST and its users a KEM to fall back to whose hardness assumption is unrelated to lattices.

Isogeny-based cryptography — and the 2022 collapse

The second family is more dramatic. In 2011, David Jao and Luca De Feo proposed a key-exchange protocol called SIDH — Supersingular Isogeny Diffie-Hellman — based on a construction using isogenies between supersingular elliptic curves. For a decade it was the rising star of post-quantum cryptography: tiny public keys (hundreds of bytes), elegant mathematics from number theory, strong academic enthusiasm. Its KEM variant SIKE became a Round 4 finalist at NIST.

Then, on 30 July 2022, Wouter Castryck and Thomas Decru of KU Leuven posted a paper titled An efficient key recovery attack on SIDH. It broke SIKE — the Round 4 NIST finalist — in about an hour, on a single laptop, using only classical computation. The scheme was withdrawn from NIST consideration within weeks, and an entire branch of the post-quantum tree was pruned overnight.

Isogenies — the gentlest possible sketch

An elliptic curve is the set of points (x, y) satisfying an equation like y^2 = x^3 + ax + b over some field. You already meet elliptic curves in ECDSA and ECDH, the classical signature and key-exchange schemes that protect most of today's internet.

An isogeny \phi : E_1 \to E_2 between two elliptic curves is a map that (a) sends points of E_1 to points of E_2, (b) respects the group structure on both curves, and (c) is a rational function (a ratio of polynomials). You can think of an isogeny as a "structure-preserving morphism" between two curves — and interestingly, there can be many different isogenies from E_1 to curves of the same size.
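Isogenies are concrete enough to compute with. The sketch below builds a degree-2 isogeny from E_1 : y^2 = x^3 - x over \mathbb{F}_{13} using Vélu's formulas for the degree-2 case, then checks by brute force that every affine point of E_1 outside the kernel lands on the codomain curve — the prime and the curve are chosen purely for illustration:

```python
p = 13
a, b = -1 % p, 0                 # E1: y^2 = x^3 - x over F_13
x0 = 0                           # the 2-torsion point (0, 0) generates the kernel

t = (3 * x0 * x0 + a) % p        # Velu's formulas for a degree-2 isogeny:
w = (x0 * t) % p                 # codomain coefficients a2 = a - 5t, b2 = b - 7w
a2, b2 = (a - 5 * t) % p, (b - 7 * w) % p

def on_curve(x, y, A, B):
    return (y * y - (x ** 3 + A * x + B)) % p == 0

# push every affine point of E1 outside the kernel through the isogeny:
# phi(x, y) = (x + t/(x - x0), y * (1 - t/(x - x0)^2))
for x in range(p):
    for y in range(p):
        if on_curve(x, y, a, b) and x != x0:
            inv = pow(x - x0, -1, p)
            X = (x + t * inv) % p
            Y = (y * (1 - t * inv * inv)) % p
            assert on_curve(X, Y, a2, b2)

print(f"codomain: y^2 = x^3 + {a2}x + {b2}")
```

Here the codomain works out to y^2 = x^3 + 4x: the map respects the curve equations and the group law, which is exactly the "structure-preserving morphism" property described above.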

Figure — Isogeny graph of supersingular elliptic curves. Six curves E₀ through E₅ sit on a ring, with edges (prime-degree isogenies, each labelled by a small prime ℓ) connecting neighbours and crossing the ring. A highlighted four-edge walk runs E₀ → E₁ → E₂ → E₃: Alice's secret is the walk; her public key is the endpoint E₃.
The isogeny graph has one vertex per supersingular elliptic curve (up to isomorphism) and an edge for every prime-degree isogeny between two curves. The graph is an expander graph — any two vertices are a short walk apart, and random walks spread out fast. SIDH's trapdoor is a secret walk through this graph; the public key is the curve you end up at. Inverting the walk (finding a path back) was believed hard — the best known classical algorithms ran in exponential time, the best quantum algorithms in sub-exponential. Castryck and Decru broke this belief in 2022.

What SIDH actually did

In SIDH, Alice and Bob each pick a secret short walk in an isogeny graph. They exchange the endpoints of their walks (public keys) along with some auxiliary information (specifically, the images of certain torsion points). Using each other's endpoints and the auxiliary data, each recovers a common curve at the "intersection" of their walks. That shared endpoint's j-invariant becomes their shared secret.

The key sizes were beautiful:

  • SIKEp434 (NIST level 1): public keys of 330 bytes, ciphertexts of 346 bytes
  • compressed variants pushed public keys down to roughly 200 bytes

Compare to lattice KEMs (1-2 kB) and code-based KEMs (kilobytes to megabytes). SIKE's keys were the smallest of any serious PQC candidate. For bandwidth-constrained applications — IoT, low-power radios, space — this was hugely attractive.

The security rested on the belief that inverting the SIDH walk was computationally hard. That belief held from 2011 to 2022.

The Castryck-Decru break

On 30 July 2022, Wouter Castryck and Thomas Decru at KU Leuven published a paper titled An efficient key recovery attack on SIDH on the IACR eprint archive. Within two weeks their attack code could recover SIKEp434 private keys in 62 minutes on a single laptop core, SIKEp503 in 2 hours, SIKEp751 in 21 hours. The attack was purely classical — no quantum computer required. Microsoft's SIKE Cryptographic Challenge offered cash prizes for breaking the scheme at various levels; all were claimed within days.

What went wrong? The attack exploited the auxiliary torsion-point information that SIDH had always published. In SIDH, Alice's public key includes not just the endpoint curve E_A but also the images \phi_A(P_B) and \phi_A(Q_B) of Bob's basis points under Alice's secret isogeny \phi_A. This auxiliary data was needed to enable the shared-secret computation.

Castryck and Decru observed that the torsion-point images, taken together with a deep theorem due to Kani (1997) on isogenies between abelian surfaces (products of elliptic curves), let one recover Alice's secret isogeny in polynomial time. The technical core of the attack is a construction that uses Alice's published torsion data to build a gluing of two elliptic curves into an abelian surface, then decomposes the surface in a way that reveals the secret isogeny's degree decomposition.

The underlying isogeny problem without torsion-point information — the pure isogeny path problem — remains believed hard. But SIDH's specific design, in which the torsion images had to be published, was broken. SIKE was dead.

Figure — SIKE's rise and fall, 2011–2022. A timeline: 2011, SIDH proposed by Jao and De Feo; 2017, SIKE submitted to NIST PQC Round 1; 2020, advanced to Round 3; January 2022, selected as a Round 4 finalist; 30 July 2022, the Castryck-Decru break (62 minutes for SIKEp434 on a laptop, 21 hours for SIKEp751). Eleven years of growing confidence, overturned by one paper.
The timeline that haunts every post-quantum crypto discussion. SIKE was not a fringe scheme — it had eleven years of scrutiny, a dedicated international research community, and the blessing of NIST's multi-round evaluation. And still: one attack, one afternoon, a scheme eliminated. The episode is the reason the NIST portfolio values diversity and why newer schemes need decades, not years, of cryptanalysis before being trusted at scale.

CSIDH — the isogeny survivor

Not all isogeny-based cryptography died in July 2022. A different scheme, CSIDH (Commutative Supersingular Isogeny Diffie-Hellman, pronounced "sea-side"), proposed by Castryck-Lange-Martindale-Panny-Renes in 2018, uses a different kind of isogeny graph — one based on the action of a commutative class group on supersingular curves defined over \mathbb{F}_p rather than \mathbb{F}_{p^2}. CSIDH does not publish torsion-point information, so the Castryck-Decru attack does not apply.

CSIDH survives, but with a price: it is thousands of times slower than Kyber. A CSIDH-512 key exchange takes about 40 milliseconds of CPU time per operation; a Kyber-768 key exchange takes under 100 microseconds. For most mainstream TLS, 40 ms per connection is a showstopper. CSIDH remains interesting for very-specific applications where tiny keys matter more than per-handshake latency — or where certain protocol features (non-interactive key exchange in the random oracle model) are needed that lattice-based schemes don't naturally provide.

CSIDH is also still under cryptanalytic scrutiny. The best known quantum attack is Kuperberg's subexponential algorithm for the hidden shift problem (e^{O(\sqrt{\log p})} time), meaning CSIDH-512 provides lower quantum security than its classical level suggests. Getting to NIST category 1 security quantumly requires CSIDH-2048 or larger, pushing key-exchange time well over 100 ms. CSIDH lives; it thrives only in niches.

Worked example — McEliece parameters

Example 1: mceliece6960119 — what the parameters buy you

Setup. The Classic McEliece NIST submission defines several parameter sets. Take mceliece6960119: a binary Goppa code with length n = 6960, dimension k = 5413, error-correction capacity t = 119. What does this give?

Step 1 — Security. Decoding a random linear code with t errors has information-set-decoding (ISD) cost that grows roughly as \binom{n}{t} / \binom{n-k}{t} operations, with successive refinements (Stern, MMT, BJMM) improving the exponent. Against the best known classical and quantum ISD algorithms, mceliece6960119 meets NIST category 5 — as hard as AES-256 key search, which translates to roughly 128 bits of security even against a Grover-accelerated quantum attacker.

Why t = 119 specifically: t is the code's error-correction capacity. For a binary Goppa code over \mathbb{F}_{2^m}, the dimension is k = n - mt (here 6960 - 13 \times 119 = 5413), so pushing t higher buys more ISD resistance but shrinks k and inflates the key relative to the data rate; pushing t lower makes ISD easier and weakens security. The NIST submission picks t where decoding capability, key size, and ISD resistance balance at the target security level.

Step 2 — Public-key size. The public key is the systematic generator matrix's non-trivial part: k \times (n-k) bits = 5413 \times 1547 = 8{,}373{,}911 bits = 1{,}046{,}739 bytes \approx 1.05 MB.
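The same arithmetic extends across parameter sets. A quick sketch using the (n, k, t) triples from the Classic McEliece submission — note that the submission's exact on-disk key sizes can differ from this formula by a few bytes of row padding:

```python
params = {  # (n, k, t) per Classic McEliece parameter set
    "mceliece348864":  (3488, 2720, 64),
    "mceliece6960119": (6960, 5413, 119),
    "mceliece8192128": (8192, 6528, 128),
}

for name, (n, k, t) in params.items():
    # redundant half of the systematic generator matrix: k x (n - k) bits
    pk_bytes = (k * (n - k) + 7) // 8
    print(f"{name}: public key {pk_bytes:,} bytes, ciphertext {n} bits")
```

This prints about 261 kB for the category-1 set and about 1.05 MB and 1.36 MB for the two largest sets — the key-size cliff is visible directly in the k(n−k) product.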

Step 3 — Ciphertext size. In the textbook scheme described above, a ciphertext is c \in \mathbb{F}_2^n of length n = 6960 bits \approx 870 bytes. (Classic McEliece actually ships the Niederreiter variant, whose ciphertext is a syndrome of n - k = 1547 bits \approx 194 bytes — even more compact. Either way, the ciphertext is genuinely small.)

Step 4 — Private-key size. The private key stores the Goppa polynomial, the support, and the scrambling matrices: roughly 14 kB total. Modest.

Step 5 — Performance on modern hardware. On an x86 core with the liboqs reference implementation:

  • Key generation: ~100 ms (one-time, per key)
  • Encapsulate: ~50 μs
  • Decapsulate: ~150 μs

Encapsulate/decapsulate are fast; keygen is slow. Compare Kyber-768: ~40 μs keygen, ~30 μs encap, ~20 μs decap. Classic McEliece is slower on keygen by a factor of roughly 2500×.

Step 6 — What this is good for. A web server loading a 1 MB public key on every TLS handshake is a terrible idea. A code-signing authority that stores one master public key and uses it to verify billions of signatures over a decade is a perfect use case. An HSM in a bank's core with a handful of long-lived keys is also fine. mceliece6960119 is not a drop-in replacement for X25519; it is a specialised tool for specialised pipes.

Result. A PQC system with NIST category 5 post-quantum security, 1.05 MB public keys, sub-kilobyte ciphertexts, fast encap/decap, and forty-eight years of unbroken cryptanalysis. NIST's ongoing Round 4 evaluation leans toward including it as a "conservative option where bandwidth is unconstrained" — a place in the portfolio rather than a mainstream slot.

Worked example — why SIKE's torsion-point leak was fatal

Example 2: the Castryck-Decru attack in outline

Setup. SIKEp434, the Round 4 finalist, uses supersingular elliptic curves over \mathbb{F}_{p^2} where p = 2^{216} \cdot 3^{137} - 1. Alice picks a secret isogeny \phi_A : E_0 \to E_A of degree 2^{216}. Bob picks \phi_B : E_0 \to E_B of degree 3^{137}. Alice's public key is (E_A, \phi_A(P_B), \phi_A(Q_B)) — the endpoint curve and the images of Bob's basis points P_B, Q_B of order 3^{137} on E_0 under Alice's secret isogeny.

The attacker's task. Recover \phi_A (equivalently, recover Alice's private key) given (E_A, \phi_A(P_B), \phi_A(Q_B)).

Step 1 — Kani's theorem. Ernst Kani proved in 1997 a theorem about pairs of isogenies (\phi, \psi) between elliptic curves: if \deg \phi + \deg \psi = N for a specific N, then \phi and \psi glue into a single isogeny between abelian surfaces (two-dimensional generalisations of elliptic curves), and the glued isogeny has a predictable kernel structure.

Step 2 — Guessing a partial walk. Castryck and Decru's algorithm first guesses that Alice's secret isogeny can be written as \phi_A = \phi_2 \circ \phi_1, where \phi_1 is a very short starting segment (small degree 2^a for some small a). They enumerate all possible \phi_1 — there are only a few thousand.

Step 3 — Building an abelian surface. For each guessed \phi_1, they construct an auxiliary isogeny \psi on E_0 of degree 2^{216-a} using Bob's points P_B, Q_B and the published images \phi_A(P_B), \phi_A(Q_B). Kani's theorem says the remaining segment \phi_2 and \psi should glue into an isogeny of abelian surfaces with a specific kernel.

Step 4 — Checking consistency. For the correct guess \phi_1, the glued isogeny decomposes cleanly — the (2,2)-isogeny algorithm on abelian surfaces runs in polynomial time and outputs a consistent decomposition. For wrong guesses, the decomposition fails. So the attacker enumerates \phi_1, runs the abelian-surface algorithm, and wins when a decomposition succeeds.

Step 5 — Running time. Total work: O(2^a \times \text{polynomial}). For SIKEp434, a can be chosen small enough that the whole enumeration runs in about an hour on a single laptop core. For SIKEp751, about 21 hours. No quantum computer required.

Why this was invisible to eleven years of cryptanalysis: nobody had put the pieces together. Kani's theorem dates to 1997. SIDH was proposed in 2011. The attacker's chain — "use the published torsion-point images to build an abelian surface, then use Kani to decompose it" — required a specific imaginative leap that nobody made until Castryck and Decru. The theorem, the scheme, and the attack idea existed in different communities; nobody bridged them.

What this shows. Elegant schemes with published auxiliary data are dangerous. The torsion-point images were the "hole in the fence" — information the protocol had to publish for correctness, which an attacker eventually learned to turn into a winning move. Modern post-quantum design increasingly prefers schemes where auxiliary data is minimised (or proven to not leak), and the 2022 break is cited in nearly every subsequent PQC security analysis as the cautionary example.

Figure — The Castryck-Decru attack on SIDH/SIKE, in three boxes. The SIKE public key (E_A, \phi_A(P_B), \phi_A(Q_B)) — torsion-point images included — together with Kani's 1997 theorem on gluing isogeny pairs into abelian-surface isogenies feeds an enumeration of short starting walks \phi_1; gluing and decomposing via Kani succeeds only for the correct guess, revealing \phi_A. Total runtime on a laptop: about 62 minutes for SIKEp434, about 21 hours for SIKEp751.
The attack's architecture in three boxes. SIKE's published data gives the attacker enough information to invoke Kani's theorem, which turns the isogeny-recovery problem into a tractable decomposition problem on abelian surfaces. The attacker enumerates a small space of candidate starting walks, runs the Kani-style decomposition on each, and wins in polynomial time. The attack's classical nature — no quantum computer — is the twist that made 2022 memorable.

What survives — NIST's post-2022 portfolio

The NIST PQC portfolio as of 2025-2026:

  • ML-KEM (Kyber, FIPS 203) — lattice-based KEM, the mainstream key-exchange standard
  • ML-DSA (Dilithium, FIPS 204) and FN-DSA (FALCON, standard forthcoming) — lattice-based signatures
  • SLH-DSA (SPHINCS+, FIPS 205) — hash-based signatures, the conservative backup
  • HQC — code-based KEM, selected in March 2025 as a backup to ML-KEM
  • Classic McEliece — code-based KEM, still under evaluation as a conservative option for bandwidth-unconstrained niches

The architecture is mathematical diversity as insurance. If lattices fall, HQC and Classic McEliece stay. If both lattices and codes fall, SPHINCS+ stays. If the hash assumption falls too, approximately every cryptographic system in daily use fails simultaneously and the crisis is bigger than PQC.

What India's NQM and CERT-In are tracking

India's post-quantum transition planning recognises the same diversity imperative. CERT-In's draft 2026 PQC guidance lists ML-KEM + ML-DSA as the mainstream targets for banking, Aadhaar, and UPI; notes SPHINCS+ for conservative long-horizon signatures in defence and diplomatic communications; and flags HQC and Classic McEliece as alternatives to track for niche high-assurance applications. IIT Madras's SeCCI centre publishes periodic cryptanalysis updates on lattice sieving algorithms, and IIT Kharagpur's Mukhopadhyay group has published side-channel evaluations of Classic McEliece on ARM Cortex-M4 implementations (2024) as part of the NQM's implementation-security pillar. The research community in India is small but genuinely active in keeping multiple PQC families alive.

Common confusions

Going deeper

If you understand that code-based cryptography (Classic McEliece) is the oldest unbroken public-key scheme — forty-eight years old with 1-MB keys — that NIST added HQC in 2025 as a code-based KEM backup, that isogeny-based SIKE was broken in 2022 by Castryck-Decru using classical computation and Kani's theorem, and that CSIDH survives but runs thousands of times slower than Kyber — you have chapter 161. The material below is for readers who want the McEliece decoding strategy (Patterson's algorithm), more on the structure of the 2022 break, HQC's quasi-cyclic codes, CSIDH's class-group action, and the deployment implications.

McEliece decoding — Patterson's algorithm in outline

Alice's decoding of a Goppa code uses Patterson's algorithm (N. Patterson, 1975), which runs in O(nt^2) bit operations for code length n and error capacity t. The rough steps, for a binary Goppa code with Goppa polynomial g(z) over \mathbb{F}_{2^m}:

  1. Compute the syndrome s(z) = \sum c_i / (z - \alpha_i) modulo g(z), where c_i is the received codeword's i-th bit and \alpha_i is the support element.
  2. Compute T(z) = s(z)^{-1} \bmod g(z) and a square root R(z) \equiv \sqrt{T(z) + z} \pmod{g(z)}; run the extended Euclidean algorithm to find low-degree a(z), b(z) with a(z) \equiv b(z)\,R(z) \pmod{g(z)}, and set the error-locator polynomial \sigma(z) = a(z)^2 + z\,b(z)^2.
  3. The roots of the error-locator polynomial identify the error positions; flip those bits.

Without knowledge of the Goppa polynomial g(z) and the support \alpha_1, \ldots, \alpha_n, this is infeasible — and those are exactly what Alice's private key holds and the public key's scrambling hides.

Information-set decoding — the best generic attack

The best known generic algorithm for decoding a random linear code — used as the yardstick for McEliece security — is Information-Set Decoding (ISD). The basic idea (Prange 1962, improved by Stern 1989, MMT 2011, BJMM 2012): randomly pick k coordinates as an information set; assume they are error-free; invert the restricted generator matrix to recover the candidate message; check if the resulting error has weight \le t. Success probability is about \binom{n-t}{k} / \binom{n}{k} per attempt; expected runtime scales as \binom{n}{t} / \binom{n-k}{t} with various refinements improving the exponent. Against mceliece6960119, the best classical ISD attacks cost on the order of 2^{260} operations; Grover-style quantum speedups roughly halve that exponent — still utterly infeasible.
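The Prange iteration count is a two-line computation with exact binomials. The sketch below ignores the per-iteration linear-algebra cost and the exponent-shaving refinements, so it is a rough upper-end yardstick rather than a precise attack estimate:

```python
from math import comb, log2

n, k, t = 6960, 5413, 119                   # mceliece6960119 parameters
# expected Prange attempts = C(n, t) / C(n - k, t); compare exponents directly
work = log2(comb(n, t)) - log2(comb(n - k, t))
print(f"log2(expected Prange iterations) = {work:.1f}")
```

The exponent lands in the mid-two-hundreds, which is why even generous quantum speedups leave the parameter set far out of reach.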

HQC's structure

HQC uses quasi-cyclic codes over \mathbb{F}_2 — codes whose parity-check matrix is composed of circulant blocks. The quasi-cyclic structure means the generator matrix is described by a single row (plus its cyclic shifts), reducing key size from the megabyte scale of Classic McEliece to the kilobyte scale. The underlying hard problem is Quasi-Cyclic Syndrome Decoding (QCSD). Security rests on QCSD being hard; the conservative concern is that quasi-cyclic codes, being more structured, might admit attacks that random codes do not — but after more than a decade of cryptanalysis, no such attack has been found at the parameter levels HQC uses.
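The storage saving is easy to see concretely: a circulant block is fully determined by its first row, so what would occupy r × r bits on disk is stored in r bits and expanded only when needed. A minimal sketch with an illustrative block size of r = 8:

```python
def circulant(first_row):
    # each successive row is the previous one rotated right by one position
    r = len(first_row)
    return [first_row[-i:] + first_row[:-i] for i in range(r)]

row = [1, 0, 1, 1, 0, 0, 0, 1]   # store r bits on the wire / in the key ...
block = circulant(row)           # ... expand to the full r x r block when computing

# every entry is a cyclic shift of the stored row
assert all(block[i][j] == row[(j - i) % len(row)]
           for i in range(len(row)) for j in range(len(row)))
print(f"stored {len(row)} bits; the expanded block holds {len(row)**2}")
```

At HQC's real block sizes (tens of thousands of bits), this one-row-per-block representation is exactly the megabytes-to-kilobytes compression described above.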

CSIDH's class-group action

CSIDH operates in a commutative setting: the class group of an imaginary quadratic order acts on the set of supersingular elliptic curves defined over \mathbb{F}_p (rather than \mathbb{F}_{p^2} as in SIDH). A secret is an element of the class group; applying it to a public base curve gives a public key, and two parties can combine their secrets in either order to reach the same shared curve — just like classical Diffie-Hellman in an abelian group. No torsion-point information is published.

The best quantum attack is Kuperberg's subexponential algorithm for the dihedral hidden shift problem. CSIDH-512 provides only about 2^{58} quantum security; reaching NIST level 1 (2^{128} quantum) requires CSIDH-2048 or larger, which multiplies the per-operation cost by tens. CSIDH's niche is non-interactive key exchange for applications where Kyber's interactive structure is inconvenient — a real but small market.
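The either-order property is the whole trick. The sketch below illustrates it with ordinary modular exponentiation standing in for the class-group action — an analogy only, with no isogenies involved; the modulus is the Mersenne prime 2^127 − 1, chosen arbitrarily:

```python
import random

random.seed(0)
p = 2**127 - 1                  # a Mersenne prime, illustrative only
g = 3                           # public base, standing in for the base curve

a = random.randrange(2, p - 1)  # Alice's secret "class-group element"
b = random.randrange(2, p - 1)  # Bob's secret

A = pow(g, a, p)                # Alice's public key: her secret applied to the base
B = pow(g, b, p)                # Bob's public key

# applying the two secrets in either order reaches the same shared value,
# because the action is commutative -- the structure CSIDH recovers for isogenies
assert pow(B, a, p) == pow(A, b, p)
print("shared secret agrees")
```

SIDH lacked exactly this commutativity and had to publish torsion-point images to compensate — the data the 2022 attack exploited; CSIDH's commutative action needs no such auxiliary data.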

The deployment story

Regulatory and procurement implications

The U.S. Department of Commerce's Bureau of Industry and Security and NIST's CSRC both list NIST-standardised PQC as the baseline for federal procurement. The European Cybersecurity Agency (ENISA) takes a similar view. India's CERT-In draft 2026 guidance tracks NIST's portfolio closely but adds the explicit requirement that any critical national infrastructure deployment must use a hybrid (classical + PQC) scheme through 2028 and must have a documented migration path to any HQC-adjacent or Classic-McEliece-adjacent alternative in case of future lattice breaks. The diversity imperative is written into the procurement language.

The cryptanalysis community's culture of open review

The 2022 SIKE break was possible only because SIKE's specification, implementations, and parameter sets were fully public. Castryck and Decru published their code along with the paper. Microsoft offered cash prizes to anyone breaking SIKE at various levels; all the prizes were claimed. The community's norm of open cryptanalysis — publish everything, attack everything, trust only what survives — is what discovered the weakness in time to prevent it from being deployed at scale. The lesson generalises: post-quantum confidence requires public code, public datasets, cash incentives for breaks, and international cryptanalytic teams. Black-box or proprietary PQC schemes should be viewed with significantly more suspicion than open ones.

Where this leads next

References

  1. Robert J. McEliece, A public-key cryptosystem based on algebraic coding theory (1978) — NASA JPL DSN Progress Report 42-44.
  2. Wouter Castryck and Thomas Decru, An efficient key recovery attack on SIDH (2022) — eprint.iacr.org/2022/975.
  3. Classic McEliece team, Classic McEliece: conservative code-based cryptography — classic.mceliece.org.
  4. Wikipedia, McEliece cryptosystem.
  5. Wikipedia, Supersingular isogeny key exchange.
  6. NIST, Post-Quantum Cryptography Round 4 submissionscsrc.nist.gov/projects/post-quantum-cryptography/round-4-submissions.