In short
A logic gate is a circuit whose inputs and outputs are restricted to two voltage levels — nominally 0 V (logic 0, LOW) and 5 V (logic 1, HIGH) in classic TTL, or 0 V and 3.3 V / 1.8 V / 1.2 V in modern CMOS — and whose output is a Boolean function of the inputs. Seven standard gates cover the whole of combinational digital logic:
| Gate | Symbol expression | When output is 1 |
|---|---|---|
| NOT (inverter) | \bar{A} | input is 0 |
| AND | A \cdot B | both inputs are 1 |
| OR | A + B | either input is 1 |
| NAND | \overline{A \cdot B} | NOT AND — 0 only when both are 1 |
| NOR | \overline{A + B} | NOT OR — 1 only when both are 0 |
| XOR | A \oplus B | the inputs differ |
| XNOR | \overline{A \oplus B} | the inputs are equal |
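The table above can be checked mechanically. Below is a minimal Python sketch (the function names `NOT`, `AND`, and so on are ours, chosen for readability) that defines all seven gates on bits and prints the combined truth table for the two-input gates:

```python
# The seven gates as Python functions over bits (0/1).
def NOT(a):      return 1 - a
def AND(a, b):   return a & b
def OR(a, b):    return a | b
def NAND(a, b):  return 1 - (a & b)
def NOR(a, b):   return 1 - (a | b)
def XOR(a, b):   return a ^ b
def XNOR(a, b):  return 1 - (a ^ b)

# Print the combined truth table for all two-input gates.
print("A B | AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), OR(a, b), NAND(a, b),
              NOR(a, b), XOR(a, b), XNOR(a, b))
```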
The universality result is the single most important theorem of digital electronics: every Boolean function — and therefore every digital computer, including every processor etched at TSMC and the Intel India design centres in Bengaluru — can be built out of NAND gates alone. Or out of NOR gates alone. You do not need a menagerie of gates; you need one, replicated a billion times. From the smartphone in your pocket, to the signal-conditioning boards in ISRO's Aditya-L1 solar observatory, to the card-reader at every metro station in Delhi — the entire digital world is recursive NAND.
Open up any electronic device built in the last fifty years — the TV remote lying on the charpai, the Airtel set-top box under your television, the Bluetooth speaker playing filmi songs through a Bajaj amplifier — and somewhere inside it there is a black rectangle with spider legs, a chip, and inside that chip there are millions of things called logic gates. Each one is, physically, a handful of transistors of the kind the previous article described. Each one takes as input one or two voltages and produces one voltage at its output. The inputs and outputs are not analog waveforms — they are only ever either a clearly-LOW voltage (a few hundred millivolts) or a clearly-HIGH voltage (a few volts). The gate's job is to apply one of a small set of Boolean rules: "output is HIGH only if both inputs are HIGH", "output is LOW only if both inputs are HIGH", "output is the opposite of the input".
That is it. That is the entire idea. From these trivially-simple behaviours, wired together in the right pattern, you can build an adder. Wire adders together and you get a multiplier. Wire multipliers and adders and some memory elements and you get a CPU. Wire a CPU to more memory and some input and output devices and you get a smartphone. Every piece of every computer ever built — from the ENIAC in 1945 to the neural-network accelerators on a 2025 NVIDIA GPU designed by engineers at the NVIDIA India office in Pune — is a tall tower of these two-voltage, one-bit-at-a-time decisions made by billions of logic gates in parallel.
This article describes the seven gates that cover all of combinational logic, derives their truth tables, shows how to build each one out of transistors (the physics link), proves that NAND alone is universal, and finishes with a worked derivation of a half-adder — the circuit that makes binary arithmetic possible. The previous article built the switching transistor. This article builds the gate out of the switch and digital arithmetic out of the gate.
Binary logic: 0 and 1 as voltages
The discipline that makes digital electronics reliable is committing to just two voltage states and throwing away everything in between. In a 5 V TTL (transistor-transistor logic) system:
- Any output voltage below about 0.4 V is a definite LOW — a logic 0; a receiving input accepts anything below 0.8 V as LOW.
- Any output voltage above about 2.4 V is a definite HIGH — a logic 1; a receiving input accepts anything above 2.0 V as HIGH.
- The region between 0.8 V and 2.0 V is forbidden at steady state — no well-behaved gate parks its output there. The gap between the output and input thresholds is the noise margin.
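The receiver-side discipline can be sketched in a few lines of Python. This is an illustration only; `interpret` is a hypothetical helper, and the 0.8 V / 2.0 V cut-offs are the TTL input thresholds quoted above:

```python
# Classify a received voltage using classic TTL input thresholds (a sketch).
def interpret(v_in):
    if v_in < 0.8:
        return 0       # definite LOW
    if v_in > 2.0:
        return 1       # definite HIGH
    return None        # forbidden region: no valid steady-state logic level

# A noisy 0.3 V "LOW" and a sagging 3.1 V "HIGH" are both recovered cleanly,
# while a voltage parked in the forbidden band has no logic value at all.
print(interpret(0.3), interpret(3.1), interpret(1.4))
```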
The two-level discipline is not arbitrary. It is what makes digital robust. An analog signal traveling down a long wire picks up noise — radiation from the Delhi metro, the hum of a 50 Hz refrigerator compressor, cosmic rays — and arrives smeared. A digital signal traveling down the same wire arrives smeared too, but the receiving gate re-interprets whatever it sees using the thresholds above and regenerates a clean 0 V or 5 V. The signal is restored, not just transported. This is why your WhatsApp messages arrive uncorrupted across continents of copper and fibre and radio: every hop is a gate that cleans up what came before.
Convention. Unless stated otherwise, this article uses positive logic: HIGH = 1, LOW = 0. (Some older systems used negative logic — HIGH = 0, LOW = 1 — which merely relabels everything and is mathematically equivalent. Indian school textbooks and the CBSE syllabus use positive logic.)
The seven gates
The NOT gate (inverter)
One input A, one output Y. The output is 1 when the input is 0 and vice versa — it inverts the input. In Boolean algebra: Y = \bar{A} (read "Y equals A-bar" or "not A").
Physically, the simplest NOT gate — a resistor–transistor (RTL) inverter, the conceptual ancestor of the TTL version — is built from a single npn transistor: base = input A, collector = output Y through a pull-up resistor to the supply rail, emitter at ground. When A is HIGH the transistor saturates and pulls Y down to near 0 V. When A is LOW the transistor is in cutoff and Y is pulled up to the supply by the resistor. That is exactly the switching-transistor behaviour of the previous article.
The AND gate
Two inputs A and B, one output Y. Output is 1 only when both inputs are 1: Y = A \cdot B (read "A AND B", written as a product because Boolean AND is multiplicative in \{0, 1\}: 0 \cdot 0 = 0, 0 \cdot 1 = 0, 1 \cdot 0 = 0, 1 \cdot 1 = 1 — same as ordinary multiplication restricted to the two values).
Everyday reading: "You pass the class only if you attend AND score above 40" — both conditions must be TRUE to conclude TRUE.
The OR gate
Two inputs A and B, one output Y. Output is 1 when either input is 1 (or both): Y = A + B (read "A OR B"). The plus sign in Boolean algebra is OR, not arithmetic addition — note 1 + 1 = 1 in Boolean, not 2.
Everyday reading: "The alarm rings if the front door OR the back door is opened" — any one being TRUE suffices.
The NAND and NOR gates
NAND = NOT AND. Y = \overline{A \cdot B}. The output is 0 only when both inputs are 1. Symbol: an AND gate followed by the inverting bubble.
NOR = NOT OR. Y = \overline{A + B}. The output is 1 only when both inputs are 0. Symbol: an OR gate followed by the inverting bubble.
NAND and NOR look like minor variations of AND and OR — just an extra inverter bubble. In fact they are the more fundamental gates, as the universality section below will show. An integrated-circuit designer at the Intel India design centre or the TI India campus in Bengaluru will almost always lay out silicon in NAND / NOR rather than AND / OR, because the former take fewer transistors (a CMOS NAND = 4 transistors; a CMOS AND = 6 transistors, because the AND is an internal NAND followed by an inverter).
The XOR and XNOR gates
XOR (exclusive OR): output is 1 when the inputs differ. Y = A \oplus B = A\bar{B} + \bar{A}B.
XNOR (equivalence, sometimes written "A \equiv B"): output is 1 when the inputs match. Y = \overline{A \oplus B} = AB + \bar{A}\bar{B}.
XOR is the difference-detector. It is the gate that tells you whether two bits disagree — and therefore the gate at the heart of addition (the sum bit of a half-adder is exactly A \oplus B, as derived in Example 2).
Explore the gates interactively
The interactive figure below lets you flip the two input bits A and B between 0 and 1 and watch how each gate's output changes live. It is the compressed truth table for all seven gates at once.
The universality of NAND
Here is the theorem that made digital electronics possible on an industrial scale: every Boolean function can be built out of NAND gates alone. The same is true for NOR. These are the universal gates — once you have NAND in silicon, you have every logic operation in silicon.
The proof is constructive — we actually build NOT, AND, and OR out of NAND, and from these three the rest of Boolean algebra follows, because {AND, OR, NOT} is a functionally complete set: any Boolean function can be written as a sum of products using only these three operations.
Step 1. Build NOT from NAND.
Tie both inputs of a NAND gate together: A = B. Then the output is \overline{A \cdot A} = \bar{A}.
Why: in Boolean algebra A \cdot A = A (idempotent law), so \overline{A \cdot A} = \bar{A}. Connecting both inputs of a NAND to the same signal gives you an inverter for free.
Step 2. Build AND from NAND.
Take a NAND gate followed by a NAND-inverter (from Step 1): Y = \overline{\overline{A \cdot B}} = A \cdot B.
Why: double negation: \bar{\bar{x}} = x. So a NAND followed by an inverter is an AND. Two NAND gates total.
Step 3. Build OR from NAND.
Use De Morgan's theorem: A + B = \overline{\bar{A} \cdot \bar{B}}. In words: A OR B is the same as NOT(NOT-A AND NOT-B). So invert each input first (using two NANDs as inverters), then NAND them together:
Why: De Morgan's theorem converts an OR into an AND of inverted signals, which is exactly what a NAND with inverted inputs does. Three NAND gates total.
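The three-step construction can be verified exhaustively. The Python sketch below (helper names are ours) implements NOT, AND, and OR using nothing but a two-input NAND, then checks every input combination against the primitive definitions:

```python
def NAND(a, b):
    return 1 - (a & b)

# Step 1: NOT from one NAND (tie the inputs together).
def NOT(a):
    return NAND(a, a)

# Step 2: AND from two NANDs (a NAND followed by a NAND-inverter).
def AND(a, b):
    return NOT(NAND(a, b))

# Step 3: OR from three NANDs (De Morgan: A + B = NOT(NOT A AND NOT B)).
def OR(a, b):
    return NAND(NOT(a), NOT(b))

# Exhaustive check against the primitive operations:
for a in (0, 1):
    assert NOT(a) == 1 - a
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
print("NOT, AND, OR all verified from NAND alone")
```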
Why this matters for a chip designer
When the 7400-series TTL chips were laid out in the 1970s, each family (AND, OR, NAND, NOR, etc.) needed its own mask set and its own fabrication run. Engineers realised that if they could build any logic function out of a single gate type, they only needed one kind of cell, replicated. That is why modern VLSI chips — the ones that come out of TSMC's fab in Taiwan or the Tata Electronics OSAT facility in Gujarat — are, under the hood, vast seas of NAND and NOR primitives, wired together at the metallisation layers to realise whatever the architect specified. Every processor instruction, every arithmetic operation, every branch decision inside the Apple M-series chips assembled at the Foxconn Chennai plant — made out of NAND.
Worked examples
Example 1: Building the majority-of-three function with basic gates
Design a circuit that outputs 1 if at least two of three inputs A, B, C are 1. This is the majority function — useful for three-vote redundancy in ISRO's fault-tolerant satellite controllers, where three copies of a computation are run and the output is taken by majority so that any single one failing does not corrupt the result.
Step 1. Write the truth table.
| A | B | C | Y |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 |
| 1 | 1 | 1 | 1 |
Why: list every possible input combination (there are 2^3 = 8 for three binary inputs) and mark the output as 1 only when at least two inputs are 1.
Step 2. Write the sum-of-products expression.
The output is 1 for the rows (0,1,1), (1,0,1), (1,1,0), (1,1,1). Write a product term for each and OR them together: Y = \bar{A}BC + A\bar{B}C + AB\bar{C} + ABC.
Why: each product term (minterm) is 1 only for its own input combination. OR-ing them together gives 1 whenever any of the specified input combinations occurs.
Step 3. Simplify using Boolean algebra.
Group and simplify. Notice that AB\bar{C} + ABC = AB(\bar{C} + C) = AB \cdot 1 = AB. Similarly \bar{A}BC + ABC = BC and A\bar{B}C + ABC = AC. (We use ABC three times — this is legal because X = X + X in Boolean.) Combining the three simplified terms: Y = AB + BC + AC.
Why: the identity X + \bar{X} = 1 is Boolean algebra's most powerful simplification — it lets you eliminate a variable whenever both its values appear in otherwise-identical product terms. The result reads as "Y is 1 when any two of the three inputs are 1" — exactly the majority condition.
Step 4. Draw the circuit.
Three AND gates compute AB, BC, AC; a single OR gate (or a tree of two-input ORs) combines them.
Result: Y = AB + BC + AC. Three AND gates and one OR gate.
What this shows: Every Boolean function has a sum-of-products form, and a sum-of-products form maps directly to a two-level circuit: an AND plane feeding an OR plane. Every more-complex digital design — adders, multipliers, entire CPUs — is a hierarchy of such two-level blocks.
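The whole derivation can be checked by brute force, exactly as Step 1 enumerated the truth table. A short Python sketch (the name `majority` is ours):

```python
from itertools import product

# Simplified majority expression from Step 3: Y = AB + BC + AC.
def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)

# Verify against the specification "at least two of the three inputs are 1"
# over all 2^3 = 8 input combinations.
for a, b, c in product((0, 1), repeat=3):
    assert majority(a, b, c) == (1 if a + b + c >= 2 else 0)
print("Y = AB + BC + AC verified on all 8 rows")
```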
Example 2: The half-adder — from truth table to gates
Design a circuit that adds two one-bit binary numbers A and B and outputs a sum bit S and a carry bit C. This is the foundation of all binary arithmetic: wire two half-adders (plus an OR) in series and you get a full adder; wire 32 full adders in a chain and you get the 32-bit adder at the heart of every ARM-based Apple-silicon CPU packaged at Foxconn Sriperumbudur.
Step 1. Binary addition table.
| A | B | Carry C | Sum S | decimal interpretation |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 + 0 = 0 |
| 0 | 1 | 0 | 1 | 0 + 1 = 1 |
| 1 | 0 | 0 | 1 | 1 + 0 = 1 |
| 1 | 1 | 1 | 0 | 1 + 1 = 10 (binary two) |
Why: in binary, 1 + 1 = 10 (a two-bit answer: sum bit 0, carry bit 1). Every other combination produces a one-bit answer with carry 0. The fourth row is the only place where addition produces a carry — and that is the characteristic signature of binary arithmetic.
Step 2. Read off the Boolean expressions for S and C from the table.
Looking at the sum column: S = 1 when exactly one input is 1. That is the XOR: S = A \oplus B.
Looking at the carry column: C = 1 only when both inputs are 1. That is the AND: C = A \cdot B.
Why: recognise the patterns. The sum bit is 1 when the inputs differ — XOR. The carry bit is 1 when both are 1 — AND. No truth-table simplification is needed because each output's pattern is already a primitive gate.
Step 3. Draw the circuit. The inputs A and B feed an XOR gate, whose output is S, and in parallel an AND gate, whose output is C.
Step 4. Verify with A = 1, B = 1.
S = 1 \oplus 1 = 0 and C = 1 \cdot 1 = 1. So the two-bit output is CS = 10, which in binary is decimal 2 — the correct sum of 1 and 1.
Why: this one verification pins down the whole table — if the carry row works and the XOR correctly outputs 0 for matching inputs, the other three rows follow from the primitive definitions.
Result: Half-adder = one XOR + one AND. Sum bit S = A \oplus B, carry bit C = AB. Two gates. Binary arithmetic in its minimum form.
What this shows: Every higher-level arithmetic — integer addition, multiplication, floating-point — is a cascade of these tiny half-adders and full-adders. The multiplication unit of a modern processor is roughly n^2 AND gates feeding a tree of adders of depth O(\log n). All of it reduces, at the silicon level, to XOR and AND — and, via NAND-universality, to NAND alone.
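To see the cascade concretely, here is a Python sketch that chains the two-gate half-adder into full adders and then into an n-bit ripple-carry adder. The helper names are ours, and plain integers stand in for bit vectors:

```python
# Half-adder: sum = XOR, carry = AND (from Example 2).
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

# Full adder from two half-adders plus an OR, as described above.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2           # (sum, carry_out)

# Ripple-carry n-bit addition: chain full adders, least significant bit first.
def ripple_add(x, y, n_bits=8):
    carry, total = 0, 0
    for i in range(n_bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

print(ripple_add(13, 29))   # gates alone reproduce 13 + 29 = 42
```

Real adders use carry-lookahead rather than a ripple chain for speed, but the gate-level content is the same.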
Common confusions
- "An OR gate outputs 2 when both inputs are 1." No. Boolean OR is not arithmetic addition in \mathbb{N}. The OR of 1 and 1 is 1, because OR answers "is at least one of them 1?" — and yes, so 1. Arithmetic sum is a separate operation, and that's why we need a half-adder to build it.
- "NAND and AND are essentially the same." Not at the circuit level. A CMOS NAND is 4 transistors; a CMOS AND is a NAND followed by an inverter (6 transistors). A chip with a billion gates built from NANDs is materially smaller than one built from ANDs. This is why silicon designers prefer NAND/NOR and why schematic diagrams of real processors are seas of inverted-output gates.
- "Logic 0 is exactly 0 V and logic 1 is exactly 5 V." In practice, neither — they are bands. A driving gate outputs "LOW" somewhere between 0 V and about 0.4 V, and a receiving gate interprets anything below 0.8 V as LOW. The band structure is what gives digital its noise immunity.
- "All seven gates are fundamental." Only NAND or NOR alone is sufficient to build every other gate. NOT + AND + OR is also sufficient (classical Boolean algebra). But NAND and NOR each individually suffice — that is the deep fact. XOR is convenient but not fundamental.
- "Modern computers use analog signals somewhere inside." Very nearly no. Inside a modern digital chip, essentially every signal is a two-valued Boolean. The only places analog creeps in are at the interfaces: the ADC that converts the microphone's voltage to bits, the DAC that converts bits back to speaker voltage, and the analog physics of the transistors themselves (which are analog devices forced into two-valued behaviour by large-signal operation). Everything between — the billions of gates of the CPU, GPU, and memory — is pure digital.
- "A universal gate can replace any chip with one chip." No. Universality is about Boolean-function expressiveness, not about chip count. A ten-billion-transistor CPU cannot be collapsed to a single gate; it still needs its ten billion gates, they just all happen to be NANDs. Universality means the set of building blocks is complete, not that the count shrinks.
If you came here to understand gates, truth tables, and the universality of NAND, you have what you need. What follows is the internal CMOS transistor construction of a NAND, the algebraic formalism of Boolean algebra (including De Morgan's theorem proven in full), and a sketch of how sequential logic (flip-flops, clocked memory) emerges from combinational gates plus feedback.
CMOS NAND: the four-transistor construction
Every NAND gate in every modern chip is built in complementary metal-oxide-semiconductor (CMOS) technology. Four transistors total: two NMOS (n-channel) and two PMOS (p-channel). The NMOS conduct when their gate is HIGH; the PMOS conduct when their gate is LOW — complementary behaviours, hence the name.
The pull-up network (PMOS in parallel) and the pull-down network (NMOS in series) are duals: whenever one is conducting the other is not. This guarantees that the output is always driven — either to V_DD or to ground, never left floating. No static current flows in steady state, because no conducting path from V_DD to ground ever exists, which is why CMOS is the low-power technology that made battery-powered devices possible. The 4-transistor CMOS NAND is the building block you will find repeated several billion times on every chip in every Indian-made smartphone.
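The duality argument can be checked case by case. The sketch below models each network as a Boolean conduction condition (PMOS in parallel conduct when either input is LOW; NMOS in series conduct only when both inputs are HIGH) and confirms that exactly one network conducts for every input pair:

```python
# Model of the CMOS NAND's two networks (a sketch of the duality argument):
#   pull-up   = two PMOS in parallel -> conducts when A is LOW or B is LOW
#   pull-down = two NMOS in series   -> conducts when A is HIGH and B is HIGH
for a in (0, 1):
    for b in (0, 1):
        pull_up = (a == 0) or (b == 0)
        pull_down = (a == 1) and (b == 1)
        # Exactly one network conducts: output always driven, never shorted.
        assert pull_up != pull_down
        output = 1 if pull_up else 0
        assert output == 1 - (a & b)   # and it is exactly NAND
print("pull-up and pull-down are exact duals; output = NAND(A, B)")
```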
Boolean algebra — the algebra of digital circuits
Boolean algebra obeys a small, complete set of laws. The five you need to know — and that a CBSE physics exam will ask you to state — are:
- Commutative. A + B = B + A, A \cdot B = B \cdot A. Order of inputs doesn't matter.
- Associative. A + (B + C) = (A + B) + C, A(BC) = (AB)C. Grouping doesn't matter.
- Distributive. A(B + C) = AB + AC. Multiplication distributes over addition. (But also, surprisingly, A + BC = (A+B)(A+C) — this "dual distributive" law does not hold in ordinary arithmetic but does hold in Boolean algebra.)
- Identity / complement. A + 0 = A, A \cdot 1 = A, A + \bar{A} = 1, A \cdot \bar{A} = 0.
- De Morgan's theorem. \overline{A + B} = \bar{A} \cdot \bar{B} and \overline{A \cdot B} = \bar{A} + \bar{B}.
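All five laws can be verified by exhausting the 0/1 assignments, writing + as Python's `|`, the product as `&`, and the bar as `1 - x`. A minimal sketch:

```python
from itertools import product

# Exhaustive check of the five laws over every 0/1 assignment of A, B, C.
for a, b, c in product((0, 1), repeat=3):
    assert a | b == b | a and a & b == b & a           # commutative
    assert a | (b | c) == (a | b) | c                  # associative (OR)
    assert a & (b & c) == (a & b) & c                  # associative (AND)
    assert a & (b | c) == (a & b) | (a & c)            # distributive
    assert a | (b & c) == (a | b) & (a | c)            # dual distributive
    assert a | 0 == a and a & 1 == a                   # identity
    assert a | (1 - a) == 1 and a & (1 - a) == 0       # complement
    assert 1 - (a | b) == (1 - a) & (1 - b)            # De Morgan (OR form)
    assert 1 - (a & b) == (1 - a) | (1 - b)            # De Morgan (AND form)
print("all five laws hold on every 0/1 assignment")
```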
De Morgan's theorem — a truth-table proof
The easiest proof is exhaustive: check all four rows of A, B and verify each side.
| A | B | A+B | NOT(A+B) | NOT A | NOT B | (NOT A)·(NOT B) |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 0 | 0 |
| 1 | 0 | 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 | 0 | 0 |
Columns 4 and 7 are identical — \overline{A+B} = \bar{A}\bar{B}. A matching table proves \overline{AB} = \bar{A} + \bar{B}.
Why: De Morgan's theorem is the bridge between OR-reasoning and AND-reasoning. It is exactly the rule you need to turn an OR into an AND with inverted inputs and outputs — which is how you build OR out of NAND (see the NAND-universality proof above). It is also what lets VLSI designers use "negative logic" interchangeably with positive logic by just re-labelling signals.
Sequential logic — adding memory
The gates on this page are combinational: their output is a pure function of their current inputs, no memory. Wire a gate's output back to its own input (with a little delay) and you get a latch — a one-bit memory. A cross-coupled pair of NOR gates is the SR latch: set inputs S and R both LOW and the output holds its previous state. Clock a pair of latches and you get a D flip-flop, the register cell that stores one bit and updates only on a clock edge. Chain 64 flip-flops and you have a 64-bit register.
Sequential logic is what turns combinational gates into the CPUs, memories, and state machines of a real computer. The details belong to a separate article; the key idea here is that feedback plus delay equals memory, and memory is what distinguishes a computer from a calculator.
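The "feedback plus delay equals memory" idea can be simulated. The sketch below iterates the cross-coupled NOR pair to a fixed point — a crude stand-in for the analog settling of a real latch; `sr_latch` is a hypothetical helper of ours:

```python
def NOR(a, b):
    return 1 - (a | b)

# Cross-coupled NOR SR latch: iterate the feedback loop to a fixed point.
# (A sketch; a real latch settles through analog transients, not iteration.)
def sr_latch(s, r, q, q_bar):
    for _ in range(4):                 # a few passes is enough to settle
        q, q_bar = NOR(r, q_bar), NOR(s, q)
    return q, q_bar

q, q_bar = 0, 1                        # start in the "0 stored" state
q, q_bar = sr_latch(s=1, r=0, q=q, q_bar=q_bar)   # SET
print("after set:  ", q, q_bar)        # 1 0
q, q_bar = sr_latch(s=0, r=0, q=q, q_bar=q_bar)   # HOLD (S = R = LOW)
print("after hold: ", q, q_bar)        # still 1 0 -- the latch remembers
q, q_bar = sr_latch(s=0, r=1, q=q, q_bar=q_bar)   # RESET
print("after reset:", q, q_bar)        # 0 1
```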
Speed limits: the gate delay
Every gate has a propagation delay — the time from an input change to the corresponding output change. For a 2025 CMOS NAND in a 5 nm process at TSMC, t_{pd} \approx 10 picoseconds. Chain 100 gates in a deep combinational path and the signal takes about 1 nanosecond to ripple through. A 3 GHz clock has a period of about 330 picoseconds — so a CPU designer has a fierce budget: 33 gate-delays between clock edges. This is why modern processors are pipelined (deeply chained subtasks run in parallel, each stage short enough to fit in one clock cycle) and why architects at AMD's Bengaluru design centre or at the Intel India VLSI team spend as much time on timing closure as they do on functionality.
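The budget arithmetic in the paragraph above can be reproduced in a few lines (the 10 ps and 3 GHz figures are the ones quoted there):

```python
# Back-of-envelope timing budget (a sketch using the figures quoted above).
t_pd = 10e-12                  # ~10 ps per NAND gate delay (5 nm class)
f_clock = 3e9                  # 3 GHz clock
period = 1 / f_clock           # ~333 ps between clock edges
budget = period / t_pd         # gate delays that fit in one cycle
print(f"period = {period * 1e12:.0f} ps, budget = {budget:.0f} gate delays")
```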
Where this leads next
- Transistors — Structure and Action — the BJT physics that the earliest TTL logic gates relied on, before CMOS took over.
- Transistor as Amplifier and Switch — the switching transistor that is the atomic building block of every logic gate.
- The p-n Junction Diode — the one-junction device that predates the transistor and the gate; diodes are still used in diode-transistor logic and in ESD protection structures at every gate's inputs.
- Semiconductors — Intrinsic and Extrinsic — the doping foundation on which every gate ultimately rests.