In short

RSA is the cryptosystem that holds most of the internet together. Its security rests on one specific assumption: given a large integer N = pq (the product of two primes), nobody knows an efficient algorithm to recover p and q. The best classical algorithm — the General Number Field Sieve — takes sub-exponential time, which is why the standard key size is 2048 bits: at that size, factoring costs around 2^{112} operations, decades on the largest supercomputer. Shor's algorithm (chapters 75–77) factors in polynomial time on a fault-tolerant quantum computer, turning that decades-long classical cost into hours. The honest timeline: today's machines factor 15 = 3 \times 5; breaking RSA-2048 is estimated to need roughly 20 million physical qubits under surface-code error correction, a decade or two away. The policy response, already underway, is post-quantum cryptography — lattice-based and hash-based schemes that even Shor cannot break. NIST standardised the first of them in 2024; India's National Quantum Mission includes migration as a pillar.

Open a tab to https://www.sbi.co.in. Look at the little padlock icon next to the URL. Behind that padlock is a short handshake in which your browser and the State Bank of India server agree on a shared secret key, using only messages that any eavesdropper on the network can read. The trick that makes this possible — the thing that lets two strangers negotiate a secret in full public view — is public-key cryptography, and the most widely deployed version of it is RSA.

RSA is 47 years old. It is in your browser, in your UPI transactions, in the digital signatures on Aadhaar documents, in the PGP-signed firmware updates for your router, in the HTTPS certificate for every e-commerce site from Flipkart to the income tax portal. And its entire security rests on a single mathematical assumption: factoring a large integer into its prime factors is hard.

For forty-seven years, that assumption has held up. Every known classical algorithm for factoring takes sub-exponential time, which sounds fancy but in practice means "impractical at the key sizes people use." Then in 1994, a 35-year-old mathematician named Peter Shor walked into a Bell Labs seminar room and presented a quantum algorithm that factors integers in polynomial time. The algorithm does not yet run at scale — the fault-tolerant quantum computers needed to execute it do not exist — but the mathematics is settled. On a large enough fault-tolerant quantum computer, RSA is dead.

This chapter is the setup for the next three, which build Shor's algorithm step by step. Here you see why factoring matters, how RSA uses it, what the classical cost is, and how far today's quantum hardware is from breaking it. The next chapter (ch. 75) shows the purely classical reduction from factoring to order-finding. The chapters after that (76, 77) show how quantum mechanics cracks the order-finding problem in polynomial time.

The puzzle that makes RSA work

Two operations sit at the heart of every cryptosystem: one that is easy, and one that is hard. For RSA, those are multiplication and factoring.

Multiplying two 1024-bit primes together takes a laptop microseconds. Going the other way — given just the 2048-bit product, recovering the two primes that made it — takes the largest supercomputer on Earth decades. This asymmetry is what is called a trapdoor function: easy forward, hard backward, unless you hold a secret piece of information that unlocks the backward direction.

You can sense the asymmetry with a small example. Multiply 101 \times 103 in your head; you probably get 10403 in a few seconds. Now do it backwards: somebody gives you 10403 and asks for its two prime factors. You would try dividing by 2, 3, 5, 7, 11, \ldots until something clicked — and even at this trivial scale, that is tedious. Now scale up: instead of 10403, imagine the other person hands you a 617-digit number (that is what 2048 bits looks like in decimal). The same "try one prime at a time" strategy has to check about 2^{1024} candidates — more than the number of atoms in the observable universe. That is the wall RSA hides behind.
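At toy scale you can watch both directions run. A minimal sketch — multiplication is a single machine operation, while factoring has to scan candidate divisors up to \sqrt N:

```python
def trial_division(N):
    """Return the smallest prime factor of N by checking 2, 3, 4, ..."""
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d
        d += 1
    return N  # no divisor found: N itself is prime

N = 101 * 103          # forward direction: instant
p = trial_division(N)  # backward direction: a loop over candidates
q = N // p
print(N, "=", p, "x", q)   # 10403 = 101 x 103
```

At 10403 the loop finishes in microseconds; at a 617-digit N the same loop outlives the universe, which is the whole point.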

[Figure: two large primes p, q (≈ 1024 bits each) and the public modulus N = pq (≈ 2048 bits). The forward arrow is labelled "easy: multiply — microseconds on a laptop"; the backward arrow is labelled "hard: factor — GNFS, sub-exponential, decades at 2048 bits".]
The trapdoor asymmetry. Multiplying two 1024-bit primes is a microsecond-scale operation. Factoring their 2048-bit product back into the same two primes is, as of 2026, a decades-long supercomputer project. RSA lives inside that gap.

RSA uses this gap to let a server publish a number anyone can encrypt with, while keeping a related number secret that only the server can decrypt with.

How RSA actually works

You do not need the full security proof to understand what RSA is doing. Five lines of arithmetic, done once at the server, set up the whole system.

Setup. You are a server preparing to receive encrypted messages.

  1. Pick two large primes p and q. (In practice, each is 1024 bits, so p, q are each roughly 10^{308}.)
  2. Compute N = pq. This is your public modulus.
  3. Compute \varphi(N) = (p-1)(q-1), the Euler totient of N. Why this particular number: \varphi(N) counts integers in \{1, 2, \ldots, N\} that share no common factor with N. The next step uses a classical theorem (Fermat–Euler) that says a^{\varphi(N)} \equiv 1 \pmod N for any a coprime to N — and this is exactly what makes encryption reversible.
  4. Pick e coprime to \varphi(N). A common choice is e = 65537. This is the public exponent.
  5. Compute d such that e \cdot d \equiv 1 \pmod{\varphi(N)}. This d is the private exponent.
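The five steps can be run end-to-end with toy numbers. A minimal sketch — the primes 61 and 53 are classroom-sized stand-ins for the real 1024-bit primes, and `pow(e, -1, phi)` (modular inverse) needs Python 3.8+:

```python
from math import gcd

# Step 1: two (toy) primes. Real RSA uses ~1024-bit primes.
p, q = 61, 53
# Step 2: public modulus.
N = p * q                       # 3233
# Step 3: Euler totient.
phi = (p - 1) * (q - 1)         # 3120
# Step 4: public exponent, coprime to phi.
e = 17
assert gcd(e, phi) == 1
# Step 5: private exponent, the inverse of e modulo phi.
d = pow(e, -1, phi)             # 2753, since 17 * 2753 = 46801 = 15 * 3120 + 1
assert (e * d) % phi == 1
print((N, e), "is public;", d, "stays secret")
```

Production code never does this by hand — vetted libraries handle prime generation and padding — but every library is running exactly these five steps underneath.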

Publish. The public key is the pair (N, e). Anyone can see it. Put it on a web page.

Keep secret. The private key is d (or equivalently, p and q, from which d can be recomputed). Keep it on your server. Never send it over the network.

Encrypt. To send you a message m (an integer between 0 and N-1), anyone computes

c = m^e \bmod N

and sends you c.

Decrypt. You compute

m = c^d \bmod N

which, by the magic of the e and d exponents cancelling modulo \varphi(N), gives back exactly the original m.

The two modular exponentiations — m^e and c^d — can both be computed efficiently using repeated squaring, even for 2048-bit numbers. A laptop does them in milliseconds.
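Repeated squaring is a dozen lines. A sketch, checked against Python's built-in three-argument `pow` (the toy key p = 61, q = 53 is chosen for illustration):

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply: O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                    # current bit of the exponent is 1
            result = (result * base) % mod
        base = (base * base) % mod     # square for the next bit
        exp >>= 1
    return result

N, e, d = 3233, 17, 2753               # toy key: p = 61, q = 53, e*d = 1 mod 3120
m = 65
c = modexp(m, e, N)                    # encrypt: 65^17 mod 3233 = 2790
assert c == pow(m, e, N)               # matches the built-in three-argument pow
assert modexp(c, d, N) == m            # decrypt recovers the message
```

The loop runs once per bit of the exponent, which is why a 2048-bit exponentiation costs only a few thousand multiplications rather than 2^{2048}.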

Where the security lives. The public key publishes N and e. If an eavesdropper could compute d from (N, e), they could decrypt any message. But computing d requires knowing \varphi(N) = (p-1)(q-1), which requires knowing p and q — which requires factoring N. That is the whole security argument, in one line. Break factoring and you break RSA. Nothing else in the scheme matters.

[Figure: RSA flow — the sender computes c = m^e mod N with the public key (N, e); c crosses the network; the receiver, holding (p, q, d), computes m = c^d mod N. An eavesdropper in the middle sees only c, N and e.]
The RSA flow. The sender has only the public $(N, e)$ and encrypts $m$ into $c = m^e \bmod N$. The receiver uses the secret $d$ (derived from the secret primes $p, q$) to recover $m$. An eavesdropper sees $c$, $N$, and $e$ — everything except the primes — and would have to factor $N$ to get $d$.

The classical cost of factoring

How hard is factoring, really? The honest answer is "sub-exponential, but not by as much as you might hope."

Write N in binary — it takes n bits, where n = \log_2 N. For 2048-bit RSA, n = 2048. The cost of the best-known classical factoring algorithms grows with n in four broad tiers.

Trial division. Divide N by every integer from 2 up to \sqrt N, checking for zero remainder. Cost: O(\sqrt N) = O(2^{n/2}). At n = 2048, that is 2^{1024} operations — beyond any possible computer, ever.

Pollard's rho method. A clever random-walk trick, discovered in 1975. Cost: O(N^{1/4}) = O(2^{n/4}). Still exponential in n. At n = 2048, 2^{512} operations — still impossible.
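A sketch of the rho idea, assuming the classic walk x \to x^2 + c \bmod N with Floyd cycle detection (8051 = 83 \times 97 is a small semiprime often used to illustrate the method):

```python
from math import gcd

def pollard_rho(N, c=1):
    """Find a non-trivial factor of a composite N via Pollard's rho.

    Pseudo-random walk x -> x^2 + c mod N; the walk cycles modulo an
    unknown factor p after ~sqrt(p) = N^(1/4) expected steps, and the
    gcd of the tortoise-hare difference with N exposes p.
    """
    if N % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % N            # tortoise: one step
        y = (y * y + c) % N            # hare: two steps
        y = (y * y + c) % N
        d = gcd(abs(x - y), N)
    if d == N:                         # unlucky walk: retry with another c
        return pollard_rho(N, c + 1)
    return d

p = pollard_rho(8051)
print(p, 8051 // p)    # 83 97
```

The N^{1/4} scaling makes rho the tool of choice for stripping small factors off medium-sized numbers, but at 2048 bits it is as hopeless as trial division.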

Quadratic sieve. A 1981 idea: look for pairs of numbers whose squares are congruent modulo N, extract a factor from their difference. Cost: \exp\!\bigl(O(\sqrt{n \log n})\bigr). Sub-exponential — meaning the exponent grows slower than n — but still much worse than polynomial. Practical up to maybe 400-bit N.

General Number Field Sieve (GNFS). The current champion, refined through the 1990s and 2000s. Cost:

L_N\!\left[\tfrac{1}{3}, \left(\tfrac{64}{9}\right)^{1/3}\right] \;=\; \exp\!\Bigl(\bigl(\tfrac{64}{9}\bigr)^{1/3} \cdot (\ln N)^{1/3} \cdot (\ln \ln N)^{2/3}\Bigr).

Why this notation: the L_N[\alpha, c] form is the standard way to write sub-exponential complexities. When \alpha = 1 you recover exponential; when \alpha = 0 you recover polynomial. GNFS sits at \alpha = 1/3, closer to polynomial than to exponential but still not polynomial. Since \ln N = n \ln 2, the exponent grows like n^{1/3} up to logarithmic factors. No classical algorithm is known with \alpha < 1/3.

Plug in n = 2048 and GNFS comes out at roughly 2^{112} operations. For context, the entire Bitcoin network performs around 2^{90} hashes per year. 2^{112} is about four million times that (2^{22}), sustained for a year — nothing any adversary on the planet can afford today, and nothing that is getting cheap soon.

Key size n (bits) | GNFS cost (operations) | What it means in 2026
512               | \sim 2^{60}            | broken in 1999; a hobbyist can do this in a day
768               | \sim 2^{80}            | broken in 2009 by an academic consortium
1024              | \sim 2^{90}            | borderline; large nation-states could do it
2048              | \sim 2^{112}           | secure against all classical attackers today
4096              | \sim 2^{140}           | overkill for most purposes

Why the jumps look like \sim 20 bits of work per doubling of key size: GNFS cost scales as \exp(c \cdot n^{1/3} \cdot (\log n)^{2/3}), so doubling n multiplies the cost exponent by roughly 2^{1/3} \approx 1.26. That is why going from 1024 to 2048 bits adds only about 22 bits of work to the attacker — it buys you security margin but not infinity.
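You can reproduce the table's ballpark numbers by evaluating the GNFS formula directly. A rough sketch — the o(1) term in the exponent is dropped, so results land within a few bits of the calibrated values above:

```python
from math import log

def gnfs_bits(n_bits):
    """Approximate GNFS cost, in bits of work, for an n-bit modulus.

    Evaluates exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)) with the o(1)
    term dropped, then converts the exponent to base 2.
    """
    c = (64 / 9) ** (1 / 3)
    lnN = n_bits * log(2)                       # ln N for an n-bit N
    return c * lnN ** (1 / 3) * log(lnN) ** (2 / 3) / log(2)

for n in (512, 768, 1024, 2048, 4096):
    print(f"RSA-{n}: ~2^{gnfs_bits(n):.0f} operations")
```

The asymptotic formula lands a few bits above the table's calibrated entries (about 2^{117} at n = 2048 versus the quoted 2^{112}), but the shape — roughly 20 extra bits of work per doubling of key size — comes out the same.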

Two lessons fall out. First, RSA-2048 is secure against every classical attacker you can imagine — governments, intelligence agencies, organised crime, well-funded academics. That is the foundation on which banking, e-commerce, Aadhaar and UPI sit. Second, RSA-2048's margin is not arbitrarily large: GNFS's cost grows as 2^{n^{1/3}} rather than 2^n, so if tomorrow someone found a classical algorithm with a 2^{n^{1/4}} scaling, the security of 2048-bit keys would evaporate overnight. The assumption that no such algorithm exists is a belief, supported by forty years of unsuccessful attack but never proved.

[Figure: log-scale plot of factoring cost versus key size n — trial division at 2^{n/2}, Pollard's rho at 2^{n/4}, and the slower-growing but still steep sub-exponential GNFS curve; vertical dotted lines at n = 512, 1024, 2048, and a horizontal band marking the feasible-today zone below roughly 2^{80} operations.]
The cost of the three main classical factoring algorithms as a function of key size (log-scale vertical axis). Trial division and Pollard's rho are exponential in $n$. GNFS is sub-exponential — much slower growing — but still far from polynomial. At RSA-2048, GNFS sits at $\sim 2^{112}$, comfortably above what any classical adversary can afford.

A toy worked example — RSA with N = 15

RSA is not an abstract protocol. You can run it with your head and a pen. The key sizes are unrealistically tiny, but every arithmetic step is identical to the real thing.

Example 1: encrypt and decrypt with N = 15

Setup. Pick two primes: p = 3, q = 5. Then N = pq = 15. The totient is \varphi(N) = (p-1)(q-1) = 2 \times 4 = 8.

Step 1. Choose e coprime to \varphi(N) = 8. The smallest useful e is 3 (since \gcd(3, 8) = 1). Why e = 3: we need \gcd(e, \varphi(N)) = 1 so that e has a multiplicative inverse modulo \varphi(N). Because 8 = 2^3, coprime to 8 simply means odd; e = 1 would leave every message unchanged, so e = 3 is the smallest useful option.

Step 2. Compute d with e \cdot d \equiv 1 \pmod{8}. Try d = 3: 3 \times 3 = 9 \equiv 1 \pmod 8. So d = 3. Why it happened that d = e here: a coincidence of tiny numbers — with e = 3 and \varphi(N) = 8, 3 \times 3 = 9 = 8 + 1. In real RSA, d is a distinct number that looks nothing like e.

Step 3. Pick a message m. Let m = 4. (Any integer in \{0, 1, \ldots, 14\} works; bigger messages get split into pieces.)

Step 4. Encrypt.

c = m^e \bmod N = 4^3 \bmod 15 = 64 \bmod 15 = 4.

Why the ciphertext happens to equal the message: 64 = 4 \times 15 + 4, so 64 \bmod 15 = 4. A coincidence at these tiny numbers. At real key sizes, c and m are unrelated-looking 2048-bit integers.

Step 5. Decrypt.

m = c^d \bmod N = 4^3 \bmod 15 = 4.

Recovered.

Result. The ciphertext c = 4 encrypts the message m = 4; raising to the private exponent d = 3 recovers m. The arithmetic is identical to what SBI's server does, only with 600-digit numbers in place of these one-digit ones.

[Figure: toy RSA in three boxes — setup (p = 3, q = 5, N = 15, φ(N) = 8, e = 3, d = 3 since 3·3 = 9 ≡ 1 mod 8), encrypt (c = 4³ mod 15 = 4), decrypt (m = 4³ mod 15 = 4, recovered). The whole of RSA, in three boxes.]
RSA with $N = 15$, the smallest non-trivial example. Same arithmetic, smaller numbers. An attacker sees $N = 15$ and has to factor it back to $3 \times 5$ to compute $d$; at this size they can, by inspection. At 2048 bits, they can't.

What this shows. Every piece of RSA — setup, encryption, decryption — is elementary modular arithmetic. Nothing mystical. The security comes entirely from "factoring N is hard when N is big." Break factoring and every step of the decryption becomes reproducible by the attacker.
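The attack just described — factor N, rebuild \varphi(N) and d, decrypt — fits in a few lines at toy scale. A sketch using brute-force factoring, so it only works because N is tiny (`pow(e, -1, phi)` needs Python 3.8+):

```python
def crack_rsa(N, e, c):
    """Break toy RSA: factor N by brute force, then rebuild the private key.

    This is exactly the attack the security argument rules out at
    2048 bits; it is feasible here only because N is tiny.
    """
    p = next(k for k in range(2, N) if N % k == 0)   # factor N
    q = N // p
    phi = (p - 1) * (q - 1)                          # recompute the totient
    d = pow(e, -1, phi)                              # rebuild the private exponent
    return pow(c, d, N)                              # decrypt

# The toy key from the worked example: N = 15, e = 3, ciphertext c = 4.
print(crack_rsa(15, 3, 4))   # recovers m = 4
```

Nothing in the function is secret or clever; the only thing standing between an eavesdropper and this code is the cost of the `next(...)` line at 2048 bits.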

What the quantum threat actually looks like

So Shor's algorithm factors in polynomial time. Is RSA broken?

Not today. The honest picture needs two numbers.

What quantum computers can factor today (April 2026): small numbers with extra structure, in demonstrations rather than serious Shor implementations. A faithful run of Shor's algorithm has factored 15 = 3 \times 5 (first on NMR hardware in 2001); the often-cited factoring of 21 = 3 \times 7 around 2012 already relied on circuit simplifications. Larger demonstrations (numbers up to 10^{15}) have used heavily-optimised "compilation" that hard-codes parts of the answer into the circuit — impressive engineering, but not a genuine factoring algorithm.

What quantum computers need to factor RSA-2048: estimates vary with the error-correction scheme, but the widely-cited analysis of Gidney and Ekerå [2] — first circulated in 2019, published in 2021 — puts it at roughly 20 million noisy physical qubits running for about 8 hours, assuming surface-code error correction at realistic gate-error rates. Earlier estimates had been an order of magnitude or more higher.

What quantum computers have today (2026): IBM's Condor (2023) reached 1121 physical qubits. IBM's Heron (2024) has 156 qubits with far better fidelity. Google's Willow (2024) has 105 qubits with record-low error rates. All of them remain orders of magnitude short — in qubit count, error rates, and achievable circuit depth — of what Shor at 2048 bits would need.

The gap between 1121 physical qubits (today) and 20 million (needed) is about four orders of magnitude. At the historical qubit-count doubling rate (roughly every 18 months), that is about 20 years. At optimistic projections, 10 years. At pessimistic projections, never, because error-correction overhead scales badly and the physics gets harder as systems get larger. The working estimate among cryptographers in the NIST process, calibrated against public hardware roadmaps, is that RSA-2048 becomes vulnerable somewhere between 2035 and 2045 — if all goes well for the quantum engineers.

Hype check. You will read press articles claiming quantum computers will "break RSA this year" or that banks are "weeks away from collapse." They are selling ad impressions. The timeline above is the honest one. The qubit counts you see today are physical, mostly noisy, only weakly error-corrected, and not remotely close to what Shor at 2048 bits requires. What the threat does require, urgently, is migration to post-quantum cryptography — because encrypted data intercepted today can be stored and decrypted later, the day quantum Shor comes online ("harvest now, decrypt later"). The policy case for migration is solid; the apocalyptic timeline is not.

Indian stakes — Aadhaar, UPI, banking

The Unique Identification Authority of India uses RSA-2048 signatures for every Aadhaar authentication. Every time you authenticate at a ration shop, a bank, a mobile shop — that transaction carries an RSA-signed payload from UIDAI servers. The biometric templates themselves are encrypted in transit with the same family of primitives.

UPI — which processed over 17 billion transactions in March 2026 — signs transaction messages with RSA or ECC (Elliptic Curve Cryptography, which Shor also breaks). Every Google Pay, PhonePe, BHIM tap relies on this.

The Reserve Bank of India's RTGS and NEFT settlement systems use RSA-signed certificates for inter-bank communication. The National Stock Exchange uses RSA for member authentication.

A working Shor's-algorithm-capable quantum computer would force India to migrate all of these to post-quantum primitives. The migration is a multi-year project across tens of thousands of endpoints. The National Quantum Mission (2023, ₹6003 crore over 8 years) includes a dedicated pillar on post-quantum cryptography, co-ordinating between UIDAI, NPCI, SEBI and RBI. The Indian standards body, the Bureau of Indian Standards, is working with NIST on aligning the Indian PQC recommendations with the international standards.

Example 2: how long until RSA-2048 falls?

Setup. You want to estimate when a quantum computer could plausibly factor a 2048-bit RSA modulus. Start from today's hardware and project forward using resource estimates for Shor at 2048 bits.

Step 1. What is needed. Gidney and Ekerå (2021) estimate the requirements as roughly:

\begin{aligned} \text{physical qubits} &\approx 2 \times 10^7 \;\; (20 \text{ million}) \\ \text{runtime} &\approx 8 \text{ hours at } \sim 1\,\mu\text{s cycle time} \\ \text{physical error rate per gate} &\lesssim 10^{-3}.\end{aligned}

Why 20 million when Shor "only" needs a few thousand logical qubits: error correction is expensive. A single logical qubit in the surface code needs roughly 2d^2 physical qubits at code distance d, where d must grow with circuit depth to keep the overall error low. For Shor-2048's ~10^{11} T-gate count, d \approx 25, giving ~10^3 physical qubits per logical qubit. Multiply by ~2 \times 10^4 logical qubits and you get \sim 2 \times 10^7.
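That overhead arithmetic, written out (the values d = 25 and 2 \times 10^4 logical qubits are the chapter's rough estimates, not precise figures):

```python
# Back-of-envelope surface-code overhead, using the chapter's rough numbers.
d = 25                              # code distance needed for the T-gate count
phys_per_logical = 2 * d ** 2       # surface-code cost per logical qubit (~1250)
logical_qubits = 2e4                # logical qubits, incl. distillation workspace
total_physical = phys_per_logical * logical_qubits
print(f"{phys_per_logical} physical per logical, {total_physical:.1e} total")
```

The product comes out at 2.5 \times 10^7, the same order as the quoted \sim 2 \times 10^7 — the 20-million figure is overhead arithmetic, not mystery.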

Step 2. What you have. April 2026: IBM Condor 1121 physical qubits (2023); IBM Heron 156 higher-fidelity qubits (2024); Google Willow 105 qubits with best-in-class error rates (2024). Physical error rates around 10^{-3} for the best two-qubit gates, improving slowly.

Step 3. The gap.

\frac{2 \times 10^7}{10^3} = 2 \times 10^4 \quad \text{(a } 2 \times 10^4 \times \text{ gap in qubit count).}

At historical doubling roughly every 18 months (Neven's "double-exponential" claim notwithstanding), 2 \times 10^4 is about 14 doublings, or 21 years.
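The projection arithmetic, as a check (the 18-month doubling time is the historical rate quoted above — an assumption, not a law):

```python
from math import log2

gap = 2e7 / 1e3                 # needed physical qubits / available today
doublings = log2(gap)           # ~14.3 doublings to close the gap
years = doublings * 1.5         # at one doubling every ~18 months
print(f"{doublings:.1f} doublings, ~{years:.0f} years")
```

Halve the doubling time and you get the optimistic ~10-year figure; stall the doubling and you get "never" — the whole forecast hangs on that one parameter.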

Step 4. Not just qubit count — connectivity, gate fidelity, magic state distillation, and wall-clock gate speed all need to scale together. The most pessimistic credible estimates put RSA-2048 at "not before 2045." The most optimistic credible estimates put it at "2035 is possible." The NIST working consensus is "plan for migration to complete before 2035 to be safe."

Result. Somewhere between ten and twenty years. Near enough that policy must act now (because the migration itself takes a decade and data harvested today is vulnerable later); far enough that Aadhaar's current cryptography is not in imminent danger.

[Figure: log-scale bar chart of physical qubit counts — IBM Heron 156 (2024), IBM Condor 1121 (2023), projections of ~10k (2028), ~100k (2032) and ~1M (2038), against the Shor-2048 target line of 20 million.]
Log-scale comparison of today's physical qubit counts with what Shor-2048 needs. The gap is about four orders of magnitude. Closing it plausibly takes 10–20 years, which is why post-quantum cryptography migration is already underway even though the threat is a decade away.

What this shows. RSA-2048 is not in immediate danger, but it is not in comfortable safety either. The migration to post-quantum cryptography is a now problem, not a later problem — because the migration itself takes a decade, and data stolen today can be decrypted when Shor arrives. The threat is real, specific, and on a calendar.

Post-quantum cryptography — what will replace RSA

Shor's algorithm breaks RSA, ECC, and Diffie-Hellman — all the public-key primitives whose security rests on factoring or discrete logarithm. It does not break cryptosystems whose security rests on other hard problems. A whole research programme, post-quantum cryptography, has been building alternatives since the early 2000s.

The main families:

  1. Lattice-based (CRYSTALS-Kyber for key encapsulation; CRYSTALS-Dilithium and FALCON for signatures). Security rests on the hardness of lattice problems such as Learning With Errors. These are the schemes NIST standardised in 2024 and the migration target for most deployments.
  2. Hash-based (SPHINCS+). Signatures whose security rests only on the properties of hash functions — very conservative assumptions, at the cost of large signatures.
  3. Code-based (Classic McEliece). Security rests on decoding random linear codes, a problem studied since the 1970s; the price is very large public keys.
  4. Isogeny-based. Compact and mathematically elegant, but the leading candidate SIKE was broken classically in 2022, so the family is being rebuilt on firmer footing.

India's National Quantum Mission cryptography working group has committed to adopting NIST's standardised lattice schemes as the migration target. The timeline laid out in the Digital India documents aims for pilot PQC deployment in Aadhaar and UPI by 2028, with full migration by 2035. That is a big engineering project — every TLS server, every signed Aadhaar payload, every UPI app needs to be re-issued with new primitives, and rollback compatibility has to be maintained for years. The earlier you start, the more manageable it is.

Common confusions

"A quantum computer has already factored a huge number, so RSA is broken." The record-setting demonstrations use compiled circuits that hard-code part of the answer; faithful runs of Shor's algorithm have only reached tiny numbers.

"Sub-exponential means almost polynomial." GNFS at 2048 bits still costs around 2^{112} operations — utterly infeasible classically. Sub-exponential describes a growth rate, not feasibility.

"The threat is a decade away, so migration can wait a decade." Traffic encrypted with RSA today can be harvested now and decrypted later; anything that must stay secret into the 2040s needs post-quantum protection now, and the migration itself takes years.

Going deeper

If you understand that RSA's security rests on factoring, that the best classical factoring algorithm is sub-exponential GNFS, that Shor's algorithm breaks this in polynomial time on a fault-tolerant quantum computer, and that the threat is real but a decade away — you have chapter 74. The next chapters build Shor itself. The material below is for readers who want the sharper version: the exact GNFS scaling, lattice-based PQC internals, harvest-now-decrypt-later, and the NIST standardisation process.

The General Number Field Sieve in one paragraph

GNFS works in three phases. Relation collection: search for pairs (a, b) such that a + b\alpha (in a number field defined by a carefully chosen polynomial) and a + b m (in the integers, for a specific m) both factor into small primes — "smooth" numbers. Linear algebra: assemble the smooth factorisations into a huge sparse matrix over \mathbb{F}_2 and find a null vector, which corresponds to a congruence X^2 \equiv Y^2 \pmod N. Square-root and gcd: compute \gcd(X - Y, N), which is a non-trivial factor of N with high probability. The bottlenecks are relation collection, which is heavily parallelisable (the relations for RSA-768 were collected on a distributed network), and the linear algebra, which needs many terabytes of RAM. No known improvement to GNFS has beaten the L_N[1/3, (64/9)^{1/3}] complexity in over thirty years of research.

Lattice-based cryptography — why Shor can't break it

Lattice cryptography is built on the Learning With Errors (LWE) problem. Given a matrix A and a vector b = As + e where s is a short secret and e is a small noise vector, recover s. The classical difficulty comes from the noise: without noise, this is a linear system you can solve in polynomial time by Gaussian elimination; with noise, the best algorithms take 2^{O(n)} time where n is the lattice dimension.
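A toy LWE instance makes the role of the noise concrete. A sketch with deliberately tiny parameters (q = 97, n = 4 — illustrative only; real schemes use far larger dimensions):

```python
import random

random.seed(0)
q, n, m = 97, 4, 8               # modulus, secret dimension, number of samples

s = [random.randrange(q) for _ in range(n)]                 # short secret vector
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]           # small noise

# Public LWE samples: b = A s + e (mod q).
b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
     for row, ei in zip(A, e)]

# Without e, (A, b) is an exact linear system: Gaussian elimination
# recovers s in polynomial time. The small noise is what makes
# recovering s hard -- and there is no known quantum algorithm that
# strips it out efficiently.
print("public sample:", A[0], b[0], " -- s stays hidden")
```

The contrast with RSA is the point: factoring hides a clean global period that the quantum Fourier transform can extract; LWE hides a secret behind noise, and no comparable quantum handle is known.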

Shor's algorithm exploits the periodic structure of modular exponentiation — specifically that a^{k+r} \equiv a^k \pmod N. Lattices have no such clean periodicity; the shortest-vector problem and learning-with-errors have rich algebraic structure but not a single global period that a quantum Fourier transform could pick out. Known quantum algorithms for lattice problems give at most polynomial speedup, not exponential — which is why lattice-based schemes are the presumed replacement for RSA.

The standardised scheme CRYSTALS-Kyber uses Module-LWE, a variant of LWE over polynomial rings; the recommended parameter set (Kyber-768) has ring dimension n = 256 and module dimension 3. At these parameters the claimed security is NIST category 3 — roughly comparable to AES-192 — against both classical and quantum attackers; known quantum attacks on lattices buy at most a modest polynomial advantage.

Harvest now, decrypt later

The most immediate reason PQC migration cannot wait is harvest-now-decrypt-later. Adversaries with network access — state-level intelligence agencies, primarily — can intercept encrypted traffic today, store it, and decrypt it when a Shor-capable quantum computer comes online. For data with long-term value — Aadhaar biometrics, medical records, state secrets, patent-pending research — the timeline that matters is not "when does Shor exist" but "when does the data become worthless." If your data is still sensitive in 2040, encrypting it with RSA today is equivalent to posting it on a bulletin board that becomes readable in 2040.

This is why the US Office of Management and Budget issued a 2022 memorandum requiring all federal systems to migrate to PQC by 2035, and why India's NQM cryptography working group is pushing similar timelines. The NSA's 2022 "Commercial National Security Algorithm Suite 2.0" already requires PQC transition for classified systems.

NIST's post-quantum standardisation — 2016 to 2024

In December 2016, NIST launched a public competition for post-quantum cryptographic standards. Over eight years and four rounds, 69 submitted candidates were winnowed down. In 2022, NIST selected CRYSTALS-Kyber (key encapsulation), CRYSTALS-Dilithium (signatures), FALCON (signatures), and SPHINCS+ (hash-based signatures) as the initial winners. The formal standards were published in August 2024 as FIPS 203 (Kyber → ML-KEM), FIPS 204 (Dilithium → ML-DSA), and FIPS 205 (SPHINCS+ → SLH-DSA).

In July 2022, during round 4, one of the alternate candidates — SIKE, an isogeny-based scheme — was broken classically in under an hour on a laptop by Wouter Castryck and Thomas Decru. This was a cautionary tale: "post-quantum" does not guarantee "secure against all classical attacks," and the standardisation process includes continuous review.

India's National Quantum Mission and PQC

The NQM was approved in April 2023 with a budget of ₹6003 crore over 2023–2031. Its four Thematic Hubs cover quantum computing, quantum communication, quantum sensing, and quantum materials. Within communication, a dedicated PQC working group is tasked with co-ordinating the migration of Aadhaar, UPI and inter-bank infrastructure to post-quantum primitives.

The leading Indian academic centres — IIT Madras, IISc Bangalore, IIT Delhi, IIT Bombay, TIFR — have active cryptography groups contributing to both attack-side (lattice reduction) and defence-side (new scheme proposals) research. The Indian standards body (BIS) is tracking NIST alignment and preparing IS:xxx:202x standards for PQC primitives in Indian systems.

Where this leads next

This chapter fixed the target: RSA's security is exactly the hardness of factoring, and the best classical attack stalls at sub-exponential cost. Chapter 75 gives the purely classical reduction from factoring to order-finding. Chapters 76 and 77 then build the quantum order-finding machinery that completes Shor's algorithm in polynomial time.

References

  1. Rivest, Shamir, and Adleman, A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (1978) — the original RSA paper. Summary at Wikipedia.
  2. Craig Gidney and Martin Ekerå, How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits (2021) — arXiv:1905.09749.
  3. Wikipedia, General number field sieve.
  4. NIST Post-Quantum Cryptography Standardization — csrc.nist.gov/projects/post-quantum-cryptography.
  5. Michele Mosca, Cybersecurity in an era with quantum computers: will we be ready? (2015) — arXiv:1512.00039.
  6. John Preskill, Lecture Notes on Quantum Computation, Chapter 6 — factoring and Shor's algorithm — theory.caltech.edu/~preskill/ph229.