In short

A logic gate is a circuit whose inputs and outputs are restricted to two voltage levels — nominally 0 V (logic 0, LOW) and 5 V (logic 1, HIGH) in classic TTL, or 0 V and 3.3 V / 1.8 V / 1.2 V in modern CMOS — and whose output is a Boolean function of the inputs. Seven standard gates cover the whole of combinational digital logic:

Gate Expression When the output is 1
NOT (inverter) \bar{A} input is 0
AND A \cdot B both inputs are 1
OR A + B either input is 1
NAND \overline{A \cdot B} NOT AND — 0 only when both are 1
NOR \overline{A + B} NOT OR — 1 only when both are 0
XOR A \oplus B the inputs differ
XNOR \overline{A \oplus B} the inputs are equal
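The table above can be checked directly in code. A minimal Python sketch (the function names are ours, chosen to match the table) defines each gate as a one-line function over the bits 0 and 1 and prints the combined truth table:

```python
# The seven gates as one-line functions over bits 0/1 (names are illustrative).
NOT  = lambda a: 1 - a
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XOR  = lambda a, b: a ^ b
XNOR = lambda a, b: 1 - (a ^ b)

# Print the combined truth table for all six two-input gates.
print("A B | AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), "  ", OR(a, b), " ", NAND(a, b),
              "  ", NOR(a, b), " ", XOR(a, b), " ", XNOR(a, b))
```

Running it reproduces every row of the table: NAND is 0 only at (1, 1), NOR is 1 only at (0, 0), XOR is 1 exactly when the inputs differ.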

The universality result is the single most important theorem of digital electronics: every Boolean function — and therefore every digital computer, including every processor etched at TSMC and the Intel India design centres in Bengaluru — can be built out of NAND gates alone. Or out of NOR gates alone. You do not need a menagerie of gates; you need one, replicated a billion times. From the smartphone in your pocket, to the signal-conditioning boards in ISRO's Aditya-L1 solar observatory, to the card-reader at every metro station in Delhi — the entire digital world is recursive NAND.

Open up any electronic device built in the last fifty years — the TV remote lying on the charpai, the Airtel set-top box under your television, the Bluetooth speaker playing filmi songs through a Bajaj amplifier — and somewhere inside it there is a black rectangle with spider legs, a chip, and inside that chip there are millions of things called logic gates. Each one is, physically, a handful of transistors of the kind the previous article described. Each one takes as input one or two voltages and produces one voltage at its output. The inputs and outputs are not analog waveforms — they are only ever either a clearly-LOW voltage (a few hundred millivolts) or a clearly-HIGH voltage (a few volts). The gate's job is to apply one of a small set of Boolean rules: "output is HIGH only if both inputs are HIGH", "output is LOW only if both inputs are HIGH", "output is the opposite of the input".

That is it. That is the entire idea. From these trivially-simple behaviours, wired together in the right pattern, you can build an adder. Wire adders together and you get a multiplier. Wire multipliers and adders and some memory elements and you get a CPU. Wire a CPU to more memory and some input and output devices and you get a smartphone. Every piece of every computer ever built — from the ENIAC in 1945 to the neural-network accelerators on a 2025 NVIDIA GPU designed by engineers at the NVIDIA India office in Pune — is a tall tower of these two-voltage, one-bit-at-a-time decisions made by billions of logic gates in parallel.

This article describes the seven gates that cover all of combinational logic, derives their truth tables, shows how to build each one out of transistors (the physics link), proves that NAND alone is universal, and finishes with a worked derivation of a half-adder — the circuit that makes binary arithmetic possible. The previous article built the switching transistor. This article builds the gate out of the switch and digital arithmetic out of the gate.

Binary logic: 0 and 1 as voltages

The discipline that makes digital electronics reliable is committing to just two voltage states and throwing away everything in between. In a 5 V TTL (transistor-transistor logic) system:

[Figure: voltage bands for logic 0 and logic 1 in a 5 V TTL system — a vertical bar from 0 V to 5 V partitioned into three bands: 0–0.8 V is logic 0 (LOW), 0.8–2.0 V is the forbidden region (transient only), and 2.0–5 V is logic 1 (HIGH); the output thresholds V_OL max = 0.4 V and V_OH min = 2.4 V are marked inside the LOW and HIGH bands.]
Voltage bands in a 5 V TTL system. A driving gate must output below 0.4 V for a LOW or above 2.4 V for a HIGH. A receiving gate interprets anything below 0.8 V as a LOW and anything above 2.0 V as a HIGH. The 0.4 V gap between the output and input thresholds is called the noise margin — the amount of corruption the signal can pick up on the way from gate to gate without being misread.

The two-level discipline is not arbitrary. It is what makes digital robust. An analog signal traveling down a long wire picks up noise — radiation from the Delhi metro, the hum of a 50 Hz refrigerator compressor, cosmic rays — and arrives smeared. A digital signal traveling down the same wire arrives smeared too, but the receiving gate re-interprets whatever it sees using the thresholds above and regenerates a clean 0 V or 5 V. The signal is restored, not just transported. This is why your WhatsApp messages arrive uncorrupted across continents of copper and fibre and radio: every hop is a gate that cleans up what came before.
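The regeneration idea can be sketched in a few lines of Python. This is an illustration only — the function names are ours, the thresholds are the TTL values from the figure above, and the noise model is deliberately crude:

```python
# Signal regeneration with 5 V TTL thresholds (V_IL = 0.8 V, V_IH = 2.0 V).
def interpret(v):
    """Receiving gate: classify an input voltage as 0, 1, or indeterminate."""
    if v < 0.8:
        return 0
    if v > 2.0:
        return 1
    return None  # forbidden region: the signal is unusable

def regenerate(bit):
    """Driving gate: emit a clean output level for a logic bit."""
    return 5.0 if bit else 0.0

# A HIGH launched at 5.0 V and corrupted by -2.3 V of noise arrives at 2.7 V.
# That is still above V_IH = 2.0 V, so it reads as HIGH, and the next gate
# re-drives it at a clean 5.0 V. The signal is restored, not just transported.
noisy = 5.0 - 2.3
print(interpret(noisy), regenerate(interpret(noisy)))
```

An analog system has no `regenerate` step — the 2.3 V of corruption would simply accumulate at every hop.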

Convention. Unless stated otherwise, this article uses positive logic: HIGH = 1, LOW = 0. (Some older systems used negative logic — HIGH = 0, LOW = 1 — which merely relabels everything and is mathematically equivalent. Indian school textbooks and the CBSE syllabus use positive logic.)

The seven gates

The NOT gate (inverter)

One input A, one output Y. The output is 1 when the input is 0 and vice versa — it inverts the input. In Boolean algebra: Y = \bar{A} (read "Y equals A-bar" or "not A").

[Figure: NOT gate symbol and truth table — a triangle with a small circle at the output, single input A, output Y = Ā; truth table: A = 0 → Y = 1, A = 1 → Y = 0.]
The NOT gate. Symbol: a triangle with a small inverting bubble at the output. Truth table: two rows, because a single input has two possible values.

Physically, the simplest NOT gate is a single npn transistor used as a switch: base = input A (through a base resistor), collector = output Y through a pull-up resistor to the supply rail, emitter at ground. When A is HIGH the transistor saturates and pulls Y down to near 0 V. When A is LOW the transistor is in cutoff and Y is pulled up to the supply by the resistor. That is exactly the switching-transistor behaviour of the previous article. (A commercial TTL inverter wraps extra input and output circuitry around this core, but the inverting principle is the same.)

The AND gate

Two inputs A and B, one output Y. Output is 1 only when both inputs are 1: Y = A \cdot B (read "A AND B", written as a product because Boolean AND is multiplicative in \{0, 1\}: 0 \cdot 0 = 0, 0 \cdot 1 = 0, 1 \cdot 0 = 0, 1 \cdot 1 = 1 — same as ordinary multiplication restricted to the two values).

[Figure: AND gate symbol and truth table — D-shaped symbol with two inputs A and B and output Y = A·B; truth table: Y = 1 only when A = B = 1.]
The AND gate. Symbol: a flat-back, curved-front shape (think "D"). Truth table: output is 1 only in the last row, where both inputs are 1.

Everyday reading: "You pass the class only if you attend AND score above 40" — both conditions must be TRUE to conclude TRUE.

The OR gate

Two inputs A and B, one output Y. Output is 1 when either input is 1 (or both): Y = A + B (read "A OR B"). The plus sign in Boolean algebra is OR, not arithmetic addition — note 1 + 1 = 1 in Boolean, not 2.

[Figure: OR gate symbol and truth table — curved shield-shaped symbol with two inputs A and B and output Y = A+B; truth table: Y = 0 only when A = B = 0.]
The OR gate. Symbol: a curved-back, pointed-front shape. Truth table: output is 1 except in the first row, where both inputs are 0.

Everyday reading: "The alarm rings if the front door OR the back door is opened" — any one being TRUE suffices.

The NAND and NOR gates

NAND = NOT AND. Y = \overline{A \cdot B}. The output is 0 only when both inputs are 1. Symbol: an AND gate followed by the inverting bubble.

NOR = NOT OR. Y = \overline{A + B}. The output is 1 only when both inputs are 0. Symbol: an OR gate followed by the inverting bubble.

[Figure: NAND and NOR gate symbols with truth tables — top: NAND (AND shape with an inverting bubble on the output), outputs 1, 1, 1, 0 for inputs 00, 01, 10, 11; bottom: NOR (OR shape with an inverting bubble on the output), outputs 1, 0, 0, 0.]
NAND (top) and NOR (bottom). Both are "complemented" versions of their basic cousins, and both are universal in a sense the plain AND and OR are not: every Boolean function can be built out of NAND alone, or out of NOR alone.

NAND and NOR look like minor variations of AND and OR — just an extra inverter bubble. In fact they are the more fundamental gates, as the universality section below will show. An integrated-circuit designer at the Intel India design centre in Bengaluru or the TI India campus in Bengaluru will almost always lay silicon out in NAND / NOR rather than AND / OR, because the former take fewer transistors (a CMOS NAND = 4 transistors; a CMOS AND = 6 transistors, because the AND is an internal NAND followed by an inverter).

The XOR and XNOR gates

XOR (exclusive OR): output is 1 when the inputs differ. Y = A \oplus B = A\bar{B} + \bar{A}B.

XNOR (equivalence, sometimes written "A \equiv B"): output is 1 when the inputs match. Y = \overline{A \oplus B} = AB + \bar{A}\bar{B}.

[Figure: XOR and XNOR gate symbols with truth tables — top: XOR (OR shape with an extra curved back stroke), outputs 0, 1, 1, 0 for inputs 00, 01, 10, 11; bottom: XNOR (XOR with an inverting bubble), outputs 1, 0, 0, 1.]
XOR (top): output is 1 when exactly one input is 1 — the "different" gate. XNOR (bottom): output is 1 when the inputs are the same — the "equal" gate. Both are compound gates that can be built from AND/OR/NOT.

XOR is the difference-detector. It is the gate that tells you whether two bits disagree — and therefore the gate at the heart of addition (the sum bit of a half-adder is exactly A \oplus B, as derived in Example 2).

Explore the gates interactively

The interactive figure below lets you flip the two input bits A and B between 0 and 1 and watch how each gate's output changes live. It is the compressed truth table for all seven gates at once.

[Interactive figure: two draggable bits A and B shown as circles; below them, seven gate outputs computed live — NOT A, A·B, A+B, NAND, NOR, A⊕B (XOR), A⊙B (XNOR).]
Drag the red (A) or dark (B) circle to the left for 0 or to the right for 1. The right panel shows every gate's output live. Confirm by eye that NAND is 0 only when both inputs are 1, NOR is 1 only when both are 0, and XOR is 1 only when the two inputs differ.

The universality of NAND

Here is the theorem that made digital electronics possible on an industrial scale: every Boolean function can be built out of NAND gates alone. The same is true for NOR. These are the universal gates — once you have NAND in silicon, you have every logic operation in silicon.

The proof is constructive — we actually build NOT, AND, and OR out of NAND, and from these three the rest of Boolean algebra follows, because any Boolean function can be written in terms of AND, OR, and NOT (for instance via its sum-of-products form, as in Example 1 below).

Step 1. Build NOT from NAND.

Tie both inputs of a NAND gate together: A = B. Then the output is \overline{A \cdot A} = \bar{A}.

\text{NOT}(A) = \text{NAND}(A, A) = \overline{A \cdot A} = \bar{A}

Why: in Boolean algebra A \cdot A = A (idempotent law), so \overline{A \cdot A} = \bar{A}. Connecting both inputs of a NAND to the same signal gives you an inverter for free.

Step 2. Build AND from NAND.

Take a NAND gate followed by a NAND-inverter (from Step 1):

\text{AND}(A, B) = \text{NOT}(\text{NAND}(A, B)) = \overline{\overline{A \cdot B}} = A \cdot B

Why: double-negation: \bar{\bar{x}} = x. So a NAND followed by an inverter is an AND. Two NAND gates total.

Step 3. Build OR from NAND.

Use De Morgan's theorem: A + B = \overline{\bar{A} \cdot \bar{B}}. In words: A OR B is the same as NOT(NOT-A AND NOT-B). So invert each input first (using two NANDs as inverters), then NAND them together:

\text{OR}(A, B) = \text{NAND}(\text{NOT}(A), \text{NOT}(B)) = \text{NAND}(\bar{A}, \bar{B}) = \overline{\bar{A} \cdot \bar{B}} = A + B

Why: De Morgan's theorem converts an OR into an AND of inverted signals, which is exactly what a NAND with inverted inputs does. Three NAND gates total.

[Figure: building NOT, AND, and OR from NAND gates — left: one NAND with both inputs tied together forms a NOT; middle: two NANDs in series form an AND; right: three NANDs arranged via De Morgan's theorem form an OR.]
Building NOT, AND, and OR from NAND gates. With these three you can build every other Boolean function — so NAND alone is universal. A symmetric proof with NOR gives NOR-alone universality. Silicon designers exploit this: a 4-transistor CMOS NAND tile, replicated millions of times on a single chip, implements the entire logic of an Intel Core processor.
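The three constructions can be verified exhaustively in a few lines. In this Python sketch (function names are ours), every gate below calls only `nand()` — nothing else:

```python
# Constructive check of NAND universality: only nand() touches the inputs.
def nand(a, b):
    return 1 - (a & b)

def not_(a):            # Step 1: tie both inputs together
    return nand(a, a)

def and_(a, b):         # Step 2: NAND followed by a NAND-inverter
    return not_(nand(a, b))

def or_(a, b):          # Step 3: De Morgan — invert each input, then NAND
    return nand(not_(a), not_(b))

# Exhaustive verification against the primitive definitions.
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NOT, AND, OR all reproduced from NAND alone")
```

Four rows per gate suffice, because two binary inputs have only four combinations — the loop is the entire proof, executed.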

Why this matters for a chip designer

When the 7400-series TTL chips were laid out in the 1970s, each family (AND, OR, NAND, NOR, etc.) needed its own mask set and its own fabrication run. Engineers realised that if they could build any logic function out of a single gate type, they only needed one kind of cell, replicated. That is why modern VLSI chips — the ones that come out of TSMC's fab in Taiwan or the Tata Electronics OSAT facility in Gujarat — are, under the hood, vast seas of NAND and NOR primitives, wired together at the metallisation layers to realise whatever the architect specified. Every processor instruction, every arithmetic operation, every branch decision inside the Apple M-series chips assembled at the Foxconn Chennai plant — made out of NAND.

Worked examples

Example 1: Building the majority-of-three function with basic gates

Design a circuit that outputs 1 if at least two of three inputs A, B, C are 1. This is the majority function — useful for three-vote redundancy in ISRO's fault-tolerant satellite controllers, where three copies of a computation are run and the output is taken by majority so that any single one failing does not corrupt the result.

Step 1. Write the truth table.

A B C Y
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1

Why: list every possible input combination (there are 2^3 = 8 for three binary inputs) and mark the output as 1 only when at least two inputs are 1.

Step 2. Write the sum-of-products expression.

The output is 1 for the rows (0,1,1), (1,0,1), (1,1,0), (1,1,1). Write a product term for each and OR them together:

Y = \bar{A}BC + A\bar{B}C + AB\bar{C} + ABC

Why: each product term (minterm) is 1 only for its own input combination. OR-ing them together gives 1 whenever any of the specified input combinations occurs.

Step 3. Simplify using Boolean algebra.

Group and simplify. Notice that AB\bar{C} + ABC = AB(\bar{C} + C) = AB \cdot 1 = AB. Similarly \bar{A}BC + ABC = BC and A\bar{B}C + ABC = AC. (We use ABC three times — this is legal because X = X + X in Boolean.)

Y = AB + BC + AC

Why: the identity X + \bar{X} = 1 is Boolean algebra's most powerful simplification — it lets you eliminate a variable whenever both its values appear in otherwise-identical product terms. The result reads as "Y is 1 when any two of the three inputs are 1" — exactly the majority condition.
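The simplification can be double-checked by brute force. This Python sketch compares the four-minterm form, the simplified form, and the plain-English specification over all eight rows:

```python
# Verify Y = AB + BC + AC against the sum-of-products form and the
# "at least two of three inputs are 1" specification, over all 8 rows.
for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            sop = ((1-A) & B & C) | (A & (1-B) & C) | (A & B & (1-C)) | (A & B & C)
            simplified = (A & B) | (B & C) | (A & C)
            majority = 1 if A + B + C >= 2 else 0
            assert sop == simplified == majority
print("Y = AB + BC + AC is exactly the majority of three")
```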

Step 4. Draw the circuit.

Three AND gates compute AB, BC, AC; a single OR gate (or a tree of two-input ORs) combines them.

[Figure: majority-of-three circuit — three AND gates computing AB, BC, and AC feed a three-input OR gate whose output is Y = AB + BC + AC.]
Three AND gates compute the pairwise products $AB$, $BC$, $AC$; a three-input OR gate combines them. The output $Y$ is 1 exactly when at least two of the three inputs are 1. This is the fault-tolerant majority voter used in ISRO's triple-modular redundancy flight computers — the trick that kept Chandrayaan-3 flying even when a cosmic-ray-upset single-bit failure occurred.

Result: Y = AB + BC + AC. Three AND gates and one OR gate.

What this shows: Every Boolean function has a sum-of-products form, and a sum-of-products form maps directly to a two-level circuit: an AND plane feeding an OR plane. Every more-complex digital design — adders, multipliers, entire CPUs — is a hierarchy of such two-level blocks.
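The truth-table-to-sum-of-products recipe of Steps 1 and 2 is mechanical, so it can be automated. A minimal Python sketch (the function name and the prime-for-complement notation are our choices):

```python
def sum_of_products(truth_table, names):
    """Build a sum-of-products expression string from a truth table.

    truth_table: dict mapping input tuples to 0/1, e.g. {(0, 0, 0): 0, ...}
    names: variable names, e.g. ("A", "B", "C")
    """
    terms = []
    for inputs, out in sorted(truth_table.items()):
        if out == 1:
            # One minterm per row where the output is 1: one literal per input,
            # written with a trailing ' (complement) when that input is 0.
            term = "".join(n if bit else n + "'" for n, bit in zip(names, inputs))
            terms.append(term)
    return " + ".join(terms) if terms else "0"

# The majority-of-three table from Example 1:
majority = {(a, b, c): 1 if a + b + c >= 2 else 0
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}
print(sum_of_products(majority, ("A", "B", "C")))
# Prints the four minterms of Step 2: A'BC + AB'C + ABC' + ABC
```

The output is the unsimplified Step 2 expression; Boolean simplification (Step 3, or a Karnaugh map) is a separate pass.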

Example 2: The half-adder — from truth table to gates

Design a circuit that adds two one-bit binary numbers A and B and outputs a sum bit S and a carry bit C. This is the foundation of all binary arithmetic: wire two half-adders (plus an OR) in series and you get a full adder; wire 32 full adders in a chain and you get the 32-bit adder at the heart of every ARM-based Apple-silicon CPU packaged at Foxconn Sriperumbudur.

Step 1. Binary addition table.

A B Carry C Sum S decimal interpretation
0 0 0 0 0 + 0 = 0
0 1 0 1 0 + 1 = 1
1 0 0 1 1 + 0 = 1
1 1 1 0 1 + 1 = 10 (binary two)

Why: in binary, 1 + 1 = 10 (a two-bit answer: sum bit 0, carry bit 1). Every other combination produces a one-bit answer with carry 0. The fourth row is the only place where addition produces a carry — and that is the characteristic signature of binary arithmetic.

Step 2. Read off the Boolean expressions for S and C from the table.

Looking at the sum column: S = 1 when exactly one input is 1. That is the XOR:

S = A \oplus B

Looking at the carry column: C = 1 only when both inputs are 1. That is the AND:

C = A \cdot B

Why: recognise the patterns. The sum bit is 1 when the inputs differ — XOR. The carry bit is 1 when both are 1 — AND. No truth-table simplification is needed because each output's pattern is already a primitive gate.

Step 3. Draw the circuit.

[Figure: half-adder circuit — inputs A and B feed an XOR gate producing the sum bit S = A ⊕ B and an AND gate producing the carry bit C = A · B.]
The half-adder. One XOR gate produces the sum bit $S = A \oplus B$; one AND gate produces the carry bit $C = A \cdot B$. Two gates, two bits — this is binary arithmetic in its minimum form. Chain two half-adders plus an OR gate and you get a full adder that handles a carry-in; chain 32 or 64 full adders and you have the arithmetic unit of a complete CPU.

Step 4. Verify with A = 1, B = 1.

S = 1 \oplus 1 = 0 and C = 1 \cdot 1 = 1. So the two-bit output is CS = 10, which in binary is decimal 2 — the correct sum of 1 and 1.

Why: this one verification pins down the whole table — if the carry row works and the XOR correctly outputs 0 for matching inputs, the other three rows follow from the primitive definitions.

Result: Half-adder = one XOR + one AND. Sum bit S = A \oplus B, carry bit C = AB. Two gates. Binary arithmetic in its minimum form.
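The whole chain — half-adder, full adder, ripple-carry adder — fits in a short Python sketch (function names are ours; the wiring follows the text: a full adder is two half-adders plus an OR):

```python
def half_adder(a, b):
    """One XOR for the sum bit, one AND for the carry bit."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half-adders plus an OR gate, handling a carry-in."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def ripple_add(x, y, width=8):
    """Chain `width` full adders to add two integers bit by bit."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(1, 1))      # 2 — the S = 0, C = 1 case verified in Step 4
print(ripple_add(100, 55))   # 155
```

Set `width=32` and this is, logically, the integer adder of a 32-bit CPU — real hardware merely computes the carries faster (carry-lookahead) rather than rippling them one bit at a time.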

What this shows: Every higher-level arithmetic — integer addition, multiplication, floating-point — is a cascade of these tiny half-adders and full-adders. The multiplication unit of a modern processor is roughly n^2 AND gates feeding a tree of adders of depth O(\log n). All of it reduces, at the silicon level, to XOR and AND — and, via NAND-universality, to NAND alone.

Going deeper

If you came here to understand gates, truth tables, and the universality of NAND, you have what you need. What follows is the internal CMOS transistor construction of a NAND, the algebraic formalism of Boolean algebra (including De Morgan's theorem proven in full), and a sketch of how sequential logic (flip-flops, clocked memory) emerges from combinational gates plus feedback.

CMOS NAND: the four-transistor construction

Every NAND gate in every modern chip is built in complementary metal-oxide-semiconductor (CMOS) technology. Four transistors total: two NMOS (n-channel) and two PMOS (p-channel). The NMOS conduct when their gate is HIGH; the PMOS conduct when their gate is LOW — complementary behaviours, hence the name.

[Figure: CMOS NAND gate — two PMOS transistors in parallel connect V_DD (+5 V) to the output Y (pull-up: both OFF only if A = B = 1); two NMOS transistors in series connect Y to ground (pull-down: both ON only if A = B = 1); inputs A and B each drive the gate of one PMOS and one NMOS.]
CMOS NAND. Two PMOS transistors in parallel between V_DD and the output — either one conducting pulls the output HIGH. Two NMOS transistors in series between output and ground — both must conduct to pull the output LOW. The only input combination that turns OFF all PMOS and ON all NMOS is $A = B = 1$, which is the only row of the NAND truth table where the output goes LOW. Every other row has at least one PMOS on, pulling the output HIGH — perfect NAND behaviour.

The pull-up network (PMOS in parallel) and the pull-down network (NMOS in series) are duals: whenever one is conducting the other is not. This guarantees that the output is always driven — either to V_DD or to ground, never left floating. No static current flows in steady state (no path from V_DD to ground is ever open), which is why CMOS is the low-power technology that made battery-powered devices possible. The 4-transistor CMOS NAND is the building block you will find repeated several billion times on every chip in every Indian-made smartphone.
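A switch-level model of the four-transistor network makes the duality concrete. In this Python sketch (names are ours), each PMOS conducts when its gate is LOW and each NMOS when its gate is HIGH, exactly as in the figure:

```python
# Switch-level sketch of the 4-transistor CMOS NAND.
def cmos_nand(a, b):
    pull_up   = (a == 0) or (b == 0)   # two PMOS in parallel to V_DD
    pull_down = (a == 1) and (b == 1)  # two NMOS in series to ground
    # Duality: exactly one network conducts, so Y is always driven and
    # no static path ever exists from V_DD to ground.
    assert pull_up != pull_down
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))
```

The inner `assert` is the low-power guarantee of the paragraph above, checked on every input combination: the pull-up and pull-down networks are never both on (short circuit) and never both off (floating output).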

Boolean algebra — the algebra of digital circuits

Boolean algebra obeys a small, complete set of laws. The five you need to know — and that a CBSE physics exam will ask you to state — are:

  1. Commutative. A + B = B + A, A \cdot B = B \cdot A. Order of inputs doesn't matter.
  2. Associative. A + (B + C) = (A + B) + C, A(BC) = (AB)C. Grouping doesn't matter.
  3. Distributive. A(B + C) = AB + AC. Multiplication distributes over addition. (But also, surprisingly, A + BC = (A+B)(A+C) — this "dual distributive" law does not hold in ordinary arithmetic but does hold in Boolean algebra.)
  4. Identity / complement. A + 0 = A, A \cdot 1 = A, A + \bar{A} = 1, A \cdot \bar{A} = 0.
  5. De Morgan's theorem. \overline{A + B} = \bar{A} \cdot \bar{B} and \overline{A \cdot B} = \bar{A} + \bar{B}.
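Because every variable takes only the values 0 and 1, all five laws can be verified exhaustively — eight assignments of A, B, C settle everything. A Python sketch:

```python
# Exhaustively verify the five laws over all assignments of A, B, C in {0, 1}.
from itertools import product

NOT = lambda x: 1 - x
for A, B, C in product((0, 1), repeat=3):
    assert (A | B) == (B | A) and (A & B) == (B & A)        # 1. commutative
    assert (A | (B | C)) == ((A | B) | C)                   # 2. associative
    assert (A & (B & C)) == ((A & B) & C)
    assert (A & (B | C)) == ((A & B) | (A & C))             # 3. distributive
    assert (A | (B & C)) == ((A | B) & (A | C))             #    dual distributive
    assert (A | 0) == A and (A & 1) == A                    # 4. identity
    assert (A | NOT(A)) == 1 and (A & NOT(A)) == 0          #    complement
    assert NOT(A | B) == (NOT(A) & NOT(B))                  # 5. De Morgan
    assert NOT(A & B) == (NOT(A) | NOT(B))
print("all five laws hold on every assignment")
```

Note the dual distributive law passing — the line that would fail for ordinary integer arithmetic.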

De Morgan's theorem — a truth-table proof

The easiest proof is exhaustive: check all four rows of A, B and verify each side.

A B A+B NOT(A+B) NOT A NOT B (NOT A)·(NOT B)
0 0 0 1 1 1 1
0 1 1 0 1 0 0
1 0 1 0 0 1 0
1 1 1 0 0 0 0

Columns 4 and 7 are identical — \overline{A+B} = \bar{A}\bar{B}. A matching table proves \overline{AB} = \bar{A} + \bar{B}.

Why: De Morgan's theorem is the bridge between OR-reasoning and AND-reasoning. It is exactly the rule you need to turn an OR into an AND with inverted inputs and outputs — which is how you build OR out of NAND (see the NAND-universality proof above). It is also what lets VLSI designers use "negative logic" interchangeably with positive logic by just re-labelling signals.

Sequential logic — adding memory

The gates on this page are combinational: their output is a pure function of their current inputs, no memory. Wire a gate's output back to its own input (with a little delay) and you get a latch — a one-bit memory. A cross-coupled pair of NOR gates is the SR latch: set inputs S and R both LOW and the output holds its previous state. Clock a pair of latches and you get a D flip-flop, the register cell that stores one bit and updates only on a clock edge. Chain 64 flip-flops and you have a 64-bit register.

Sequential logic is what turns combinational gates into the CPUs, memories, and state machines of a real computer. The details belong to a separate article; the key idea here is that feedback plus delay equals memory, and memory is what distinguishes a computer from a calculator.
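The "feedback plus delay equals memory" idea can be sketched by iterating the two cross-coupled NOR equations until they settle. This Python model is ours and deliberately simple — it ignores real timing and the forbidden S = R = 1 input:

```python
# Cross-coupled NOR SR latch: Q = NOR(R, Q̄), Q̄ = NOR(S, Q), iterated to a
# fixed point. The iteration stands in for gate delay around the feedback loop.
def nor(a, b):
    return 1 - (a | b)

def sr_latch(S, R, q_prev):
    q, qbar = q_prev, 1 - q_prev
    for _ in range(4):                    # let the feedback loop settle
        q, qbar = nor(R, qbar), nor(S, q)
    return q

q = sr_latch(1, 0, 0)   # set:   S HIGH drives Q to 1
q = sr_latch(0, 0, q)   # hold:  S = R = 0, Q keeps its value — this is memory
print(q)
q = sr_latch(0, 1, q)   # reset: R HIGH drives Q back to 0
print(q)
```

The hold call is the punchline: with both inputs LOW the output depends on `q_prev`, i.e. on history — something no combinational gate on this page can do.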

Speed limits: the gate delay

Every gate has a propagation delay — the time from an input change to the corresponding output change. For a 2025 CMOS NAND in a 5 nm process at TSMC, t_{pd} \approx 10 picoseconds. Chain 100 gates in a deep combinational path and the signal takes about 1 nanosecond to ripple through. A 3 GHz clock has a period of about 330 picoseconds — so a CPU designer has a fierce budget: 33 gate-delays between clock edges. This is why modern processors are pipelined (deeply chained subtasks run in parallel, each stage short enough to fit in one clock cycle) and why architects at AMD's Bengaluru design centre or at the Intel India VLSI team spend as much time on timing closure as they do on functionality.
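The timing budget is simple arithmetic, shown here with the illustrative figures from the paragraph above (10 ps per gate, 3 GHz clock — assumed round numbers, not a datasheet):

```python
# Back-of-envelope timing budget for a clocked combinational path.
t_pd   = 10e-12           # ~10 ps per NAND (assumed, per the text)
f_clk  = 3e9              # 3 GHz clock
period = 1 / f_clk        # ~333 ps
budget = int(period / t_pd)
print(f"clock period = {period * 1e12:.0f} ps -> {budget} gate delays per cycle")
# A 100-gate path (~1 ns) would have to be split across roughly three
# pipeline stages to meet this clock — which is exactly why CPUs pipeline.
```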

Where this leads next