In short

A quantum computer is a machine that stores information in qubits — little physical systems (single atoms, tiny superconducting loops, single photons) that follow the rules of quantum mechanics. A qubit is not "0 and 1 at the same time." It carries two complex numbers called amplitudes, one for outcome 0 and one for outcome 1, and when you measure it you get exactly one bit. The reason this is useful is not that a quantum computer "tries every answer at once" — it is that amplitudes can interfere, adding up for the right answer and cancelling for the wrong ones, so the right answer is the one most likely to appear when you measure. Only a handful of problems benefit from this: factoring large numbers, simulating molecules and materials, searching an unstructured list, and a few cousins. For almost everything else — WhatsApp, Instagram, Excel, your Class 11 physics homework — a quantum computer is either no better than a laptop, or worse.

Suppose you are handed a number. Not a small one — a 617-digit number. You are told it is the product of exactly two prime numbers, and you are asked to find them.

A laptop cannot do this. Neither can a supercomputer. The best classical algorithm known would take longer than the age of the universe. The security of almost every online transaction you have ever made — UPI, net banking, an HTTPS page, the Aadhaar portal — rests on this exact fact. Those 617 digits are what a 2048-bit RSA key looks like, and the assumption that nobody can factor it is the wall that stops a stranger from reading your bank messages.

Peter Shor showed in 1994 [1] that a quantum computer, if one could be built, would tear that wall down in hours. Not by being "faster" in the ordinary sense — not by running through the candidates one by one at a billion guesses a second. By something else. Something that does not happen in any classical machine. The goal of this first chapter is to name what that something is, honestly, without the two lies you have probably already been told by pop-science articles.

This is chapter 1 of a 211-chapter track. By the end of it you will be able to read Shor's algorithm line by line and see why it works. For now, the job is smaller: tell you what a quantum computer actually is, tell you what it actually does, and tell you — carefully — what it does not do.

What a qubit is, in one honest paragraph

A classical bit is a switch. It is one of two things: a 0 or a 1. The switch lives inside a transistor, and at any moment, if you look at it, you see either a high voltage (call it 1) or a low voltage (call it 0). That's it. A byte is eight such switches. Your laptop has about ten billion of them and spends its life flipping them.

A qubit is different. A qubit is a physical system — say, a single electron's spin, or a single atom trapped in laser light, or a tiny loop of superconducting wire cooled to near absolute zero — that is governed by quantum mechanics rather than by the voltage-threshold rules of a transistor. In quantum mechanics, before you look at the qubit, it is not in one of two definite states. It carries two complex numbers, one attached to outcome 0 and one attached to outcome 1. These two numbers are called amplitudes. When you actually measure the qubit, you get either 0 or 1 (one classical bit, no more), and the probability of each outcome is the square of the magnitude of its amplitude. The amplitudes themselves are never directly observed — they are the bookkeeping that quantum mechanics uses between your operations and your measurement.
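Since amplitudes and measurement outcomes are just arithmetic, the bookkeeping can be sketched in a few lines of Python. This is an illustrative toy (numpy standing in for the physics; the variable names are ours, not any library's):

```python
import numpy as np

# A qubit state is two complex amplitudes (a0 for outcome 0, a1 for outcome 1)
# subject to the constraint |a0|^2 + |a1|^2 = 1.
a0 = 1 / np.sqrt(2)          # amplitude attached to outcome 0
a1 = 1j / np.sqrt(2)         # amplitude attached to outcome 1 (note: complex)

p0 = abs(a0) ** 2            # probability of measuring 0
p1 = abs(a1) ** 2            # probability of measuring 1
assert np.isclose(p0 + p1, 1.0)   # normalisation: probabilities sum to 1

# Measuring yields a single classical bit, sampled according to |amplitude|^2.
# The amplitudes themselves are never read out directly.
rng = np.random.default_rng(0)
outcome = rng.choice([0, 1], p=[p0, p1])
print(p0, p1, outcome)
```

Note what the toy already shows: two different amplitudes (here 1/√2 and i/√2) can produce identical measurement statistics in this basis — the phase only becomes visible once operations are applied before measuring.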

Hype check. You have probably read that a qubit is "0 and 1 at the same time." That sentence sounds profound and says almost nothing useful. What is actually happening is that the qubit is in a superposition — a specific weighted combination — of the 0 and the 1 possibility, with two complex amplitudes attached. The two numbers matter. Their phases matter. Their ability to add and cancel is the entire story. "0 and 1 at the same time" throws that story away and leaves you with a slogan.

Two complex numbers sound small. Two complex numbers are small. The magic starts when you put qubits together. Two qubits carry four complex amplitudes — one for each of the four outcomes 00, 01, 10, 11. Three qubits carry eight. Ten qubits carry 1024. Fifty qubits carry about a quadrillion. Three hundred qubits carry more amplitudes than there are atoms in the observable universe. No classical computer can track that many numbers. A quantum computer does not "track" them — it is the physical system that carries them, by being a quantum system itself.

[Figure: a classical bit versus a qubit. Left: a two-position switch on 0 or 1. Right: a Bloch sphere with |0⟩ at the north pole, |1⟩ at the south pole, and a state vector tilted by polar angle θ with azimuth φ.]
A classical bit is a switch with two positions. A qubit's state lives on the surface of a sphere — a continuum of possibilities parametrised by two angles.

Why a sphere and not a line: a classical bit has one of two values, so its "state space" is two isolated points. A qubit has two complex amplitudes subject to one normalisation constraint and one unobservable phase, which leaves exactly two real numbers — latitude and longitude on a sphere. The sphere is called the Bloch sphere, and the whole of chapter 14 is dedicated to it.
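The two-angle parametrisation can be made concrete. A short sketch, assuming the standard convention |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩; the helper function is hypothetical, written for illustration only:

```python
import numpy as np

# Hypothetical helper (not from any library): map Bloch-sphere angles
# (theta = polar angle, phi = azimuth) to the two complex amplitudes.
def bloch_to_amplitudes(theta, phi):
    return np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)

# North pole (theta = 0) gives amplitudes (1, 0): the state is definitely |0>.
a0, a1 = bloch_to_amplitudes(0.0, 0.0)
print(a0, a1)

# A point on the equator (theta = pi/2): equal weights for 0 and 1,
# with the relative phase phi carried invisibly until later operations.
a0, a1 = bloch_to_amplitudes(np.pi / 2, np.pi / 2)
print(abs(a0) ** 2, abs(a1) ** 2)   # both close to 0.5
```

Any single-qubit state, up to the unobservable global phase, is one choice of (θ, φ) — which is exactly why the state space is a sphere and not two points.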

The three things quantum computers are genuinely good at

Quantum computers are not general-purpose speed-up machines. They are good at a small, specific set of problems. After three decades of research, almost every proven quantum speedup falls into one of three families. Knowing the names of these families lets you see the difference between honest QC reporting and marketing slides.

1. Factoring and discrete logarithms — the Shor family. Take a large integer N that is the product of two primes. Classically, the best known factoring algorithms run in sub-exponential but still super-polynomial time: double the key length and the work grows by many orders of magnitude. Shor's algorithm runs in time polynomial in the number of digits of N: double the key length and it takes roughly eight times longer, not astronomically longer. This is the exponential-to-polynomial speedup that seized the world's attention in 1994. The real-world stakes: every public-key cryptosystem deployed today (RSA, Diffie–Hellman, elliptic-curve cryptography) is built on the assumption that factoring or its cousin, the discrete logarithm, is hard. A useful Shor-running quantum computer would retire all of them. The world knows this is coming and has spent a decade standardising post-quantum cryptography to replace them.

2. Simulating quantum systems — the Feynman family. This is the application Richard Feynman pointed at in 1982, and it is the one most likely to produce the first commercially valuable quantum computer. The problem is modelling the behaviour of molecules and materials. Ask a simple-sounding question: what does the caffeine molecule do when it hits a receptor in your brain? The caffeine molecule has 24 atoms, and to simulate its electrons quantum-mechanically you would need to track on the order of 2^{160} complex amplitudes — a 49-digit number, far beyond any conceivable classical memory. Classical approximations (density functional theory, coupled cluster) get close enough for many purposes but break down for the problems that matter most: nitrogen-fixation catalysts (the chemistry of fertilizer, roughly 2% of world energy use), room-temperature superconductors, new battery electrodes, drug molecules with strong electron correlation. A quantum computer can simulate a quantum system using a quantum system — no exponential blow-up. Chapter 2 will build this argument carefully; it is the cleanest reason the field exists.

3. Unstructured search and its cousins — the Grover family. Suppose you have a list of N items and you want to find the one that satisfies some property. Classically, with no structure to exploit, you have to check each item one at a time, taking O(N) checks on average. Grover's algorithm does it in roughly O(\sqrt{N}). A list of a million items: 1,000 quantum steps instead of 500,000 classical ones. This is a quadratic speedup, not exponential — the asymptotic gap narrows as the problem grows, and the constant-factor overhead of quantum hardware eats much of it in practice. Still, Grover-type speedups show up everywhere: searching databases, brute-forcing symmetric cryptographic keys, accelerating certain optimisation and machine learning sub-routines. It is the general-purpose quantum speedup, smaller than Shor's but much more broadly applicable.
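To feel the quadratic gap, it helps to tabulate rough step counts (constants and hardware overheads ignored; the ~(π/4)√N iteration count is the standard Grover estimate):

```python
import math

# Rough step-count comparison for unstructured search.
# Classical: expected N/2 checks, one item at a time.
# Grover: about (pi/4) * sqrt(N) quantum iterations.
for n_items in (10**6, 10**9, 10**12):
    classical = n_items // 2
    grover = math.ceil((math.pi / 4) * math.sqrt(n_items))
    print(f"N = {n_items:>14,}: classical ~{classical:,}, Grover ~{grover:,}")
```

The gap widens with N, but only as a square root — which is why Grover speedups are broad yet modest, and why hardware overheads can eat them for small problem sizes.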

[Figure: three boxes, one per family. Factoring — Shor, 1994 — integer → primes, poly-time algorithm, breaks RSA and ECC, exponential speedup. Simulation — Feynman, 1982 — molecules and materials, catalysts, drugs, superconductors, exponential speedup. Search — Grover, 1996 — find-an-item, O(√N) vs O(N), broadly useful, quadratic speedup.]
The three families of quantum advantage. Shor gives the headlines, Feynman gives the likely first practical use, Grover gives the broad quadratic speedup. Other known speedups (HHL linear systems, quantum walks, certain sampling problems) are cousins of these three.

Everything else in the algorithmic zoo — quantum walks, quantum linear systems, amplitude estimation, variational quantum eigensolvers — is a refinement, a combination, or a practical engineering version of one of these three ideas. You will meet them all in Parts 7 through 16 of this track.

The secret ingredient is not speed, it is interference

Now the claim people actually get wrong. The popular explanation says a quantum computer "tries every answer in parallel" and then "reads out the right one." This is almost exactly backwards.

Here is what actually happens. At the start of a quantum algorithm, you put your qubits into a superposition — an equal mixture where every possible input string has a non-zero amplitude. Then you apply a sequence of quantum operations (called gates) that modify those amplitudes. Then you measure. The catch: you only get one classical string out of the measurement. One answer, not all of them. Whatever amplitude was attached to that string when you measured — its magnitude squared is the probability of seeing it.

So how does the right answer show up? The amplitudes behave like waves. Waves have a sign, or more generally a phase. Two waves meeting in step reinforce each other — constructive interference. Two waves meeting out of step cancel — destructive interference. A good quantum algorithm is engineered so that the amplitudes for the wrong answers cancel out and the amplitudes for the right answer add up. By the time you measure, almost all the probability is parked on the correct string, and you see it on the first try — or after a few repeats for reliability.

This is why quantum does not beat classical on arbitrary problems. If there is no way to arrange the interference pattern — no clever gate sequence that makes the wrong answers cancel — the superposition is just an expensive way to flip coins. "Trying every answer at once" is literally what a random guess does; it is not the quantum advantage. The advantage is the interference, and setting up the right interference pattern is where twenty years of algorithmic research has gone.
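The smallest possible demonstration of this cancellation uses one qubit and the Hadamard gate: applied once to |0⟩ it gives a fair coin; applied twice, the two paths into outcome 1 arrive with opposite signs and cancel, leaving |0⟩ with certainty. A minimal numpy sketch of that bookkeeping (illustrative, not tied to any framework):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard gate
ket0 = np.array([1.0, 0.0])                    # start in |0>: amplitudes (1, 0)

once = H @ ket0    # equal superposition: amplitudes (1/sqrt2, 1/sqrt2)
twice = H @ once   # the two paths into |1> carry +1/2 and -1/2: they cancel

print(np.abs(once) ** 2)    # [0.5 0.5] — a fair coin if measured now
print(np.abs(twice) ** 2)   # [1. 0.] — destructive interference restores |0>
```

A classical coin flipped twice stays random; the quantum "coin" undoes itself, because amplitudes carry signs that probabilities do not. That sign is the resource every quantum algorithm exploits.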

[Figure: constructive and destructive interference of amplitudes. Top: two amplitude paths in phase add to a larger amplitude — the right answer grows. Bottom: two paths of opposite phase sum to nearly zero — the wrong answer is cancelled.]
Quantum amplitudes are signed (more generally, complex). Two paths leading to the same outcome can add, or cancel. A good quantum algorithm makes the wrong answers cancel and the right answer grow.

Hype check. If you take one technical sentence away from this chapter, make it this: a quantum computer does not try every answer in parallel; it arranges the amplitudes so that the wrong answers cancel and the right answer is the one most likely to be measured. Every honest explanation of Shor's algorithm, Grover's algorithm, and quantum simulation comes down to this sentence.

An honest hype check — what quantum computers will not do

Quantum computing attracts more hype than any other field in contemporary computer science. Some of that hype is just excitement, which is fair — the physics is beautiful and the promise is real. A lot of it is marketing, stock-market positioning, and grant-proposal exaggeration. You need a shortlist of misconceptions to inoculate yourself against the worst of it.

Quantum computers will not break all encryption. They will break public-key cryptography that relies on factoring or discrete logarithm — RSA, Diffie–Hellman, and elliptic-curve based protocols (ECDSA, ECDH). They will not break symmetric cryptography (AES, ChaCha20) in any devastating way. Grover's algorithm gives a quadratic speedup against symmetric ciphers, which is equivalent to roughly halving the effective key length: AES-256 stays secure; AES-128 is weakened but still impractical to attack at scale. Modern hash functions (SHA-256, SHA-3) have the same story. The practical fix is already shipping: the post-quantum cryptographic standards (ML-KEM, ML-DSA, SLH-DSA) were finalised in 2024 and are rolling into browsers, operating systems, and national ID systems. India's UIDAI and NPCI are actively engaged in this migration — Aadhaar authentication and UPI signatures are, right now, among the cryptographic systems that have to be quantum-safe before useful Shor-running quantum computers arrive.

Quantum computers are not faster at everything. They are faster at the three families named above and their refinements. They are not faster at running operating systems, rendering video, sorting a shuffled deck of cards, or doing your Class 11 physics homework. A quantum computer running a program to compute Fibonacci numbers would be strictly slower than your phone. The right mental model: a quantum computer is a rare, delicate, slow, room-filling co-processor that you call on once or twice during a larger classical computation to accelerate the one step that needs it. It is not a replacement for a classical computer.

Quantum computers will not solve NP-complete problems in polynomial time. Travelling salesman, 3-SAT, vertex cover — the problems a computer-science student meets in an algorithms class as "hard" — are believed to remain hard for quantum computers too. Grover gives a quadratic speedup, which takes the running time from 2^n down to 2^{n/2}. That helps, but it is not the "polynomial-time NP solver" that some articles imply. The complexity class BQP (what quantum computers can efficiently solve) is believed to sit beside NP, not above it. You will meet this formally in Part 19.

Quantum computers will not cure cancer, solve climate change, or design conscious AI. These are vague framings that marketing teams reach for when they want a grand number on a slide. The specific contribution quantum computing could realistically make is to one part of some of these problems — for example, accurately simulating a catalyst that fixes nitrogen at room temperature, which would cut the energy cost of fertilizer globally. That is a real and major contribution. It is not "solving climate change" any more than a better spanner is "fixing civilization."

Today's quantum computers are not yet useful. The field is in what researchers call the NISQ era — Noisy Intermediate-Scale Quantum. Machines have around 100 to 1,000 physical qubits, but every operation has a small chance of producing an error, and the errors compound. To run Shor's algorithm on a 2048-bit RSA key requires roughly 20 million physical qubits with error correction — four orders of magnitude beyond what exists today. To simulate the catalyst that fixes nitrogen needs maybe a few million. Bridging this gap is an engineering problem, and it is the central question of the coming decade. Results from 2024 and 2025 — Google's Willow chip with error suppression below the surface-code threshold, IBM's quantum-utility demonstrations, the Harvard–MIT–QuEra neutral-atom machine reaching 48 logical qubits — are genuinely impressive milestones. None of them is a useful commercial quantum computer yet.

Hype check. The NISQ-to-fault-tolerant gap is the single most important fact in present-day quantum computing. An article claiming "quantum computers will break RSA in two years" has either not read a recent resource estimate or is selling something. An article claiming "quantum computers will never be built" is also wrong — the last decade has shown steady, order-of-magnitude-per-few-years progress. The honest statement is: it is going to happen, but it is going to take a decade or more, and the timeline is uncertain.

A concrete mini-example — factoring 15

Pop-science articles about quantum computing rarely show you what a quantum algorithm actually does with a specific input. You will see the full Shor's algorithm in Part 11. A flavour of it, with the tiniest possible number, works in a paragraph.

Example: factoring 15 with Shor's idea

You want the prime factors of N = 15. Classically you try divisors: 2 (no), 3 (yes) — and you are done. Fine for 15. For a 617-digit number the candidate divisors are more numerous than atoms in the universe.

Shor's algorithm does something different. It reduces factoring to finding a period. Pick a number a less than N and coprime to it — take a = 7. Compute successive powers of a modulo N:

7^1 \equiv 7, \quad 7^2 = 49 \equiv 4, \quad 7^3 \equiv 4 \cdot 7 = 28 \equiv 13, \quad 7^4 \equiv 13 \cdot 7 = 91 \equiv 1, \quad 7^5 \equiv 7, \ldots \pmod{15}

Why: the sequence 7, 4, 13, 1, 7, 4, 13, 1, … repeats with period r = 4. The period of this sequence is a purely number-theoretic fact about 15 — and once you know r, a short classical calculation turns it into the factors.

From a^r \equiv 1 \pmod{N} with r = 4, compute \gcd(a^{r/2} - 1, N) = \gcd(49 - 1, 15) = \gcd(48, 15) = 3. And \gcd(49 + 1, 15) = \gcd(50, 15) = 5. So 15 = 3 \times 5. Factored.

The quantum step is the period-finding. Finding r for a 617-digit N is the hard part classically — there is no shortcut. Shor's algorithm uses a quantum circuit that creates a superposition over all powers of a, applies a quantum Fourier transform, and produces a measurement whose distribution is sharply peaked at integer multiples of 1/r. One run gives you r with high probability. From r, a classical computer extracts the factors in a blink.
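Everything except the period-finding is classical and fits in a few lines. In this sketch the quantum step is replaced by brute force, which only works for toy numbers like 15; the function name is ours:

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N).

    This is the step a quantum computer speeds up; brute force stands in
    for it here and is hopeless for cryptographic-size N.
    """
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)              # r = 4: the sequence 7, 4, 13, 1 repeats
f1 = gcd(pow(a, r // 2) - 1, N)    # gcd(48, 15) = 3
f2 = gcd(pow(a, r // 2) + 1, N)    # gcd(50, 15) = 5
print(r, f1, f2)                   # -> 4 3 5
```

Swap `find_period` for Shor's quantum period-finding circuit and the rest of this program is, structurally, the whole algorithm: everything around the quantum step runs on an ordinary laptop.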

[Figure: the values of 7^k mod 15 at positions k = 0 through 10 — the sequence 1, 7, 4, 13 repeating every four steps, with an arc marking the period r = 4.]
The values of $7^k \bmod 15$ for $k = 0, 1, 2, \ldots$. The sequence returns to 1 after exactly four steps. That number 4 is the period, and a quantum computer can extract it in one shot.

What this shows. Shor's algorithm does not look at all the factors of 15 in parallel and pick the winner. It reformulates factoring as period-finding, and it uses the quantum Fourier transform to make the amplitudes concentrate on integer multiples of 1/r. The interference does the work. Every other piece of the algorithm is bookkeeping — classical bookkeeping, running on an ordinary laptop.

The fact that a proof-of-principle version of Shor's algorithm has factored 15 on real hardware (IBM, 2001, and many times since) is not the interesting part. The interesting part is that the same circuit structure, scaled up with enough qubits and low enough error rates, would factor 617-digit numbers.

The reader contract — what 211 chapters will give you

This is the opening of a 211-chapter track divided into 21 parts. Every chapter builds on the earlier ones and is a single idea, tightly scoped. Here is what the whole journey delivers.

By the end of Part 4 (chapter 34), you will be able to read any single-qubit operation as a rotation on the Bloch sphere, compute measurement probabilities in your head, and write down the matrix for any common single-qubit gate.

By the end of Part 6 (chapter 60), you will understand multi-qubit states, entanglement, and the CNOT gate. You will be able to follow a small quantum circuit on paper and compute what every qubit does at every time step.

By the end of Part 10 (chapter 130), you will have derived the Deutsch–Jozsa algorithm, Simon's algorithm, Grover's algorithm, and Shor's algorithm from first principles. You will know exactly where the interference comes from in each.

By the end of Part 15, you will understand the concrete architectures — superconducting, trapped-ion, neutral-atom, photonic — and know what a real quantum circuit looks like in Qiskit.

By the end of Part 19, you will understand quantum error correction (the surface code, in detail), quantum complexity theory (BQP, QMA), and the resource-estimate arguments that tell you how many physical qubits you need for a useful Shor run.

By the end of Part 21 (chapter 211), you will be equipped to read research papers in quantum algorithms, contribute to the open-source quantum software stack, or work on one of the experimental hardware efforts — and you will be able to read a marketing claim about quantum computing and decide, on the spot, whether it is honest.

None of this requires a physics degree. You will need comfort with complex numbers, matrix multiplication, and basic probability — all standard by Class 12 in India — and willingness to read carefully. Every chapter is a single evening's work.

Going deeper

If you have understood that a qubit carries amplitudes rather than a definite 0 or 1, that a quantum algorithm arranges interference rather than trying every answer, and that the current state of hardware is NISQ and not yet fault-tolerant — you have the core. What follows is context for readers who want to connect quantum computing to the broader intellectual history and the policy landscape.

The Church–Turing thesis and its quantum cousin

The Church–Turing thesis (1936) asserts that any function a reasonable physical machine can compute can also be computed by a Turing machine — the mathematical abstraction of a classical computer. For fifty years, this was informally read as "anything computable is computable by a classical algorithm, possibly slowly." The extended Church–Turing thesis strengthened this by claiming that any reasonable physical machine can be efficiently simulated by a classical Turing machine — meaning polynomial time in, polynomial time out.

Shor's 1994 algorithm is the first serious evidence that the extended Church–Turing thesis is false. A quantum computer can factor integers in polynomial time, and no classical polynomial-time factoring algorithm is known (with strong conjectural evidence that none exists). If this is correct, then the physical world admits a mode of efficient computation that classical mechanics does not capture. Quantum mechanics is not just a microscopic curiosity; it changes what "efficiently computable" means.

This is the deepest reason the field exists. It is not about gadgets or speedups. It is about the possibility that what nature can efficiently do is strictly more than what a classical computer can efficiently do.

Why 2^n amplitudes matter — a preview of Feynman's argument

Suppose you have n classical bits. The state of that system is one of 2^n strings — 00…0, 00…1, …, 11…1. Your laptop stores which string it is in using n bits of memory.

Now suppose you have n qubits. The state is a vector of 2^n complex amplitudes — one per classical string. If you want a classical computer to simulate n qubits, it has to store 2^n complex numbers and update them on every gate. For n = 40, that is a trillion complex numbers — around 18 terabytes, just within reach of a supercomputer. For n = 50, about 18 petabytes. For n = 60, around 18 exabytes — beyond the largest HPC systems. For n = 300, the storage requirement exceeds the number of atoms in the visible universe.
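The arithmetic is worth doing once. Assuming 16 bytes per complex amplitude (two 64-bit floats), the classical memory needed for an n-qubit state vector is:

```python
# Memory to store the full state vector of n qubits on a classical machine,
# at 16 bytes per complex amplitude (two 64-bit floats).
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (40, 50, 60, 300):
    b = statevector_bytes(n)
    print(f"n = {n:3d}: 2^{n} amplitudes, about {float(b):.3e} bytes")
```

The doubling per qubit is the whole point: each added qubit doubles the classical simulation cost, while the quantum hardware just adds one more physical component.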

Feynman pointed at this fact in 1982 and said: if classical computers cannot efficiently simulate quantum systems, maybe quantum systems are fundamentally more powerful information processors. Build the machine out of quantum parts and this exponential wall vanishes — the n qubits carry the 2^n amplitudes by being the quantum system, not by storing the numbers one at a time. Chapter 2 is this argument in full.

What NISQ means and where the field stands in 2025

"NISQ" — Noisy Intermediate-Scale Quantum — is the term John Preskill introduced in 2018 to name the current era. "Intermediate-scale" means hundreds to low thousands of physical qubits. "Noisy" means every gate operation has an error rate, typically between 0.1% and 1%.

As of 2025, the state of the art is roughly this.

Google's Willow chip (2024) was a landmark: it demonstrated that adding more physical qubits to a logical qubit decreases the logical error rate — proving that the surface code is on the right side of the error-correction threshold. That is the first time a real machine has crossed the threshold at which scaling up improves, rather than worsens, reliability.

What is still missing is scale. A useful fault-tolerant quantum computer likely needs millions of physical qubits, orders of magnitude beyond anything built today. The honest projection: gradual progress over the 2030s, first useful (non-decorative) quantum simulation results probably in the late 2020s to early 2030s, cryptographically relevant Shor's algorithm in the 2040s at the earliest.

India's National Quantum Mission

In 2023 the Union Cabinet approved the National Quantum Mission (NQM) with an allocation of ₹6003 crore over 2023–2031. The mission targets four verticals:

  1. Quantum computing — develop indigenous superconducting and photonic platforms, aiming for 50–1000 physical qubits by 2031.
  2. Quantum communication — satellite-based and fibre-based quantum key distribution, prototyping a secure national network.
  3. Quantum sensing and metrology — precision magnetometers, atomic clocks.
  4. Quantum materials and devices — new topological and superconducting materials, single-photon sources.

Four thematic hubs lead the work: IISc Bangalore (computing), IIT Madras (communication), IIT Bombay (sensing), IIT Delhi (materials). TIFR, the Raman Research Institute, and ISRO are also active — ISRO in particular has demonstrated satellite-to-ground quantum key distribution, a world-class capability.

India's historical position in quantum physics is central rather than peripheral. Satyendra Nath Bose's 1924 paper — sent as a letter to Einstein, who translated it into German and submitted it to Zeitschrift für Physik — gave the world Bose–Einstein statistics, which describe identical particles like photons and are half of the quantum classification of every particle in the universe. The word boson is named after him. C.V. Raman's 1928 observation of inelastic scattering of light by molecules was an early experimental signature of vibrational quantisation in chemistry — the Raman effect, India's first Nobel in the sciences. Meghnad Saha's 1920 ionisation equation applied quantum statistics to stellar atmospheres. The quantum revolution in the first half of the 20th century was not a purely European story, and Indian contributions sit at its core.

The Aadhaar–UPI post-quantum story

Indian-scale cryptography is a real policy case study. The UIDAI's Aadhaar system serves more than a billion enrolled residents, and each authentication is signed with RSA-2048 or equivalent. UPI transactions — more than 100 billion per year by 2024 — rely on ECDSA signatures. Both are vulnerable to a Shor-running quantum computer.

The fix is not a vague "someday" upgrade. The migration is already happening globally: NIST finalised the first post-quantum standards (ML-KEM for key exchange, ML-DSA and SLH-DSA for signatures) in August 2024. Browsers (Chrome, Firefox) and operating systems (Apple, Google, Microsoft) shipped hybrid classical-plus-post-quantum cryptography through 2024–2025. India's CERT-In and MeitY are tracking the same transition for national infrastructure; UIDAI has a research program on post-quantum Aadhaar. The real-world deadline is not when Shor's algorithm runs — it is earlier. An adversary can harvest encrypted traffic today and decrypt it later, once the quantum computer exists. Anything encrypted today that needs to stay secret in 2040 is already under threat.

That, more than any sci-fi demonstration, is why the world is scrambling.

References

  1. Peter W. Shor, Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer (1994) — arXiv:quant-ph/9508027.
  2. John Preskill, Quantum Computing in the NISQ Era and Beyond (2018) — arXiv:1801.00862.
  3. John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229.
  4. Wikipedia, Quantum computing.
  5. Nielsen and Chuang, Quantum Computation and Quantum Information — Cambridge University Press.
  6. Government of India, National Quantum Mission — Department of Science and Technology.