ECE 124: Digital Circuits and Systems
William (Bill) Bishop
Estimated study time: 52 minutes
Sources and References
Primary textbook — Stephen Brown and Zvonko Vranesic, Fundamentals of Digital Logic with Verilog Design, 2025 Release ISE (ISBN 978-1-265-07067-0), McGraw-Hill.
Online resources — MIT OpenCourseWare 6.004 “Computation Structures” (https://ocw.mit.edu/courses/6-004-computation-structures-spring-2017/); Carnegie Mellon 18-240 “Structure and Design of Digital Systems” course notes (https://course.ece.cmu.edu/~ece240/); UC Berkeley CS 61C “Great Ideas in Computer Architecture” (https://inst.eecs.berkeley.edu/~cs61c/); Nand2Tetris open materials (https://www.nand2tetris.org/).
Chapter 1: Foundations — Number Systems and Boolean Algebra
1.1 Why Digital?
Modern computing, communication, and control systems are built almost entirely from digital circuits. A digital system represents information using a finite, discrete alphabet rather than a continuous range of voltages. In practice, every signal is restricted to one of two voltage levels — conventionally called logic 0 and logic 1, LOW and HIGH, or simply false and true. This binary representation is robust because a small amount of electrical noise is tolerated: as long as a voltage that should be HIGH does not fall below a threshold, and one that should be LOW does not rise above a threshold, the circuit reads the correct logical value. The engineering discipline of digital design is therefore the art and science of building reliable, fast, and area-efficient circuits that manipulate binary information.
ECE 124 develops the full toolchain: from the mathematical laws of Boolean algebra through gate-level combinational design, sequential logic, and finite state machines, to a first encounter with hardware description languages (HDL) and timing analysis. These skills underpin every level of the computing stack, from the transistor all the way up to processors and system-on-chip designs.
1.2 Positional Number Systems
All positional number systems share the same idea: a digit in position \(k\) (counting from zero at the rightmost position) contributes the digit value multiplied by \(r^k\), where \(r\) is the radix or base.
For a number \(N\) with digits \(d_{n-1} d_{n-2} \ldots d_1 d_0\) in base \(r\),
\[ N = \sum_{k=0}^{n-1} d_k \cdot r^k \]
The four bases used in digital design are:
| Base | Radix | Digits | Suffix/prefix |
|---|---|---|---|
| Decimal | 10 | 0–9 | (none) |
| Binary | 2 | 0–1 | \(_2\) or 0b |
| Octal | 8 | 0–7 | \(_8\) or 0o |
| Hexadecimal | 16 | 0–9, A–F | \(_{16}\) or 0x |
1.2.1 Binary-to-Decimal Conversion
Evaluate the polynomial directly. For example, \(101011_2 = 1 \cdot 32 + 0 \cdot 16 + 1 \cdot 8 + 0 \cdot 4 + 1 \cdot 2 + 1 \cdot 1 = 43\).
1.2.2 Decimal-to-Binary Conversion (Repeated Division)
Divide repeatedly by 2, collecting remainders from bottom to top.
\(43 \div 2 = 21\) remainder \(1\) → LSB
\(21 \div 2 = 10\) remainder \(1\)
\(10 \div 2 = 5\) remainder \(0\)
\( 5 \div 2 = 2\) remainder \(1\)
\( 2 \div 2 = 1\) remainder \(0\)
\( 1 \div 2 = 0\) remainder \(1\) → MSB
Reading remainders from bottom to top: \(43 = 101011_2\).
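The repeated-division procedure translates directly into code. A quick Python sketch (the function name `to_binary` is illustrative):

```python
def to_binary(n):
    """Binary string of a non-negative integer via repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))     # next remainder, LSB first
        n //= 2
    return "".join(reversed(remainders))  # read bottom-to-top: MSB first

assert to_binary(43) == "101011"
```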
1.2.3 Hexadecimal and Octal as Shorthand for Binary
Hexadecimal groups bits in sets of four; octal groups bits in sets of three. For example, \(11010110_2 = \mathrm{D6}_{16} = 326_8\). This makes reading long binary strings much easier in practice.
1.2.4 Gray Code
Gray code is a binary encoding in which consecutive values differ in exactly one bit. This property eliminates glitches in mechanical position encoders and other analog-to-digital interfaces, because passing through the boundary between two numbers changes only one signal at a time.
The \(n\)-bit Gray code for integer \(i\) is obtained by XOR-ing \(i\) with \(i\) right-shifted by one:
\[ G_k = B_k \oplus B_{k+1} \]
where \(B_k\) is the \(k\)-th bit of the standard binary representation (taking \(B_n = 0\)). For example:
| Decimal | Binary | Gray |
|---|---|---|
| 0 | 000 | 000 |
| 1 | 001 | 001 |
| 2 | 010 | 011 |
| 3 | 011 | 010 |
| 4 | 100 | 110 |
| 5 | 101 | 111 |
| 6 | 110 | 101 |
| 7 | 111 | 100 |
Notice that between any two adjacent rows, exactly one bit changes. Gray code appears again when labeling Karnaugh map axes (Section 2.2).
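The \(i \oplus (i \gg 1)\) construction can be checked in a few lines of Python; the adjacency property is verified by counting differing bits:

```python
def gray(i):
    """Gray code of i: XOR i with i shifted right by one bit."""
    return i ^ (i >> 1)

codes = [gray(i) for i in range(8)]
assert codes == [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]
# consecutive codes differ in exactly one bit
assert all(bin(codes[i] ^ codes[i + 1]).count("1") == 1 for i in range(7))
```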
1.3 Boolean Algebra
Boolean algebra, developed by George Boole in 1854 and adapted for switching circuits by Claude Shannon in 1938, provides the mathematical foundation for all of digital design.
Commutativity: \(x + y = y + x\); \(x \cdot y = y \cdot x\)
Distributivity: \(x \cdot (y + z) = x \cdot y + x \cdot z\); \(x + (y \cdot z) = (x+y)(x+z)\)
Identity: \(x + 0 = x\); \(x \cdot 1 = x\)
Complement: \(x + \overline{x} = 1\); \(x \cdot \overline{x} = 0\)
From these axioms one derives all the useful identities used in circuit simplification. The most important are collected below.
| Identity | AND form | OR form |
|---|---|---|
| Idempotency | \(x \cdot x = x\) | \(x + x = x\) |
| Null element | \(x \cdot 0 = 0\) | \(x + 1 = 1\) |
| Involution | \(\overline{\overline{x}} = x\) | — |
| Absorption | \(x \cdot (x + y) = x\) | \(x + x \cdot y = x\) |
| De Morgan’s | \(\overline{x \cdot y} = \overline{x} + \overline{y}\) | \(\overline{x + y} = \overline{x} \cdot \overline{y}\) |
De Morgan’s theorem is especially critical because it tells designers how to convert between AND-based and OR-based expressions, and it underlies the duality principle: any valid Boolean identity remains valid when AND and OR are swapped and 0 and 1 are swapped simultaneously.
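Because each variable takes only two values, every identity in the table can be verified exhaustively. A short Python check (the helper `NOT` is just for readability):

```python
from itertools import product

def NOT(x):
    return 1 - x   # complement of a 0/1 value

for x, y in product((0, 1), repeat=2):
    # De Morgan's theorems
    assert NOT(x & y) == NOT(x) | NOT(y)
    assert NOT(x | y) == NOT(x) & NOT(y)
    # Absorption, in both AND and OR forms
    assert x & (x | y) == x
    assert x | (x & y) == x
```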
1.4 Logic Gates
Physical implementations of Boolean operations are called logic gates. The standard set of primitive gates is:
- NOT (inverter): one input, output is its complement.
- AND: output is 1 only when all inputs are 1.
- OR: output is 1 when at least one input is 1.
- NAND: AND followed by NOT; output is 0 only when all inputs are 1.
- NOR: OR followed by NOT; output is 1 only when all inputs are 0.
- XOR (exclusive-OR): output is 1 when an odd number of inputs are 1.
- XNOR: complement of XOR.
1.5 Minterms, Maxterms, and Canonical Forms
Every Boolean function of \(n\) variables can be expressed in two canonical forms.
A minterm \(m_i\) is a product term in which every variable appears exactly once, either complemented or uncomplemented, corresponding to a row of the truth table where the output is 1. The index \(i\) is the decimal value of the input combination.
A maxterm \(M_i\) is a sum term in which every variable appears exactly once, corresponding to a row where the output is 0.
| \(A\) | \(B\) | \(C\) | \(f\) |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |
The canonical sum of products (SOP) is \(f = \sum m(1, 2, 4, 7) = \overline{A}\,\overline{B}C + \overline{A}B\overline{C} + A\overline{B}\,\overline{C} + ABC\).
The canonical product of sums (POS) is \(f = \prod M(0, 3, 5, 6)\).
The canonical SOP is also called the sum of minterms form. While unique and systematic, it is rarely minimal — circuit simplification is needed to reduce gate count and propagation delay.
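As a sanity check, a canonical SOP can be compared against its truth table by testing minterm indices. A Python sketch for the example above (this particular table happens to be 3-input odd parity, i.e. \(A \oplus B \oplus C\)):

```python
from itertools import product

minterms = {1, 2, 4, 7}

def f(a, b, c):
    """Evaluate the function from its minterm list."""
    return 1 if a * 4 + b * 2 + c in minterms else 0

# this truth table is exactly 3-input odd parity
for a, b, c in product((0, 1), repeat=3):
    assert f(a, b, c) == a ^ b ^ c
```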
Chapter 2: Logic Simplification
2.1 Algebraic Simplification
Boolean identities can sometimes reduce an expression by hand. The key step is recognizing when the absorption law, consensus theorem, or a complementation cancellation applies.
Example: simplify \(f = \overline{A}BC + A\overline{B}C + ABC\). Group the last two terms: \(A\overline{B}C + ABC = AC(\overline{B} + B) = AC\). Then \(f = \overline{A}BC + AC\). Factor out \(C\): \(f = C(\overline{A}B + A)\). By the identity \(A + \overline{A}B = A + B\) (a consequence of the distributive and complement laws), \(f = C(A + B) = AC + BC\). Both terms are simpler than the original three-term SOP.
Algebraic manipulation is powerful but requires insight and lacks a systematic guarantee of minimality. The Karnaugh map and Quine-McCluskey method provide structured alternatives.
2.2 Karnaugh Maps
A Karnaugh map (K-map) is a graphical representation of a truth table in which adjacent cells differ in exactly one variable — a direct analogue of Gray code ordering. The map allows designers to identify and group adjacent 1-cells (for SOP) or 0-cells (for POS) by inspection.
2.2.1 Rules for K-map Grouping
- Only group 1-cells (for SOP minimization).
- Every group must be a rectangle whose size is a power of two: 1, 2, 4, 8, 16, …
- Each group must be as large as possible (a prime implicant).
- Every 1-cell must be covered by at least one group.
- The expression for a group eliminates the variables that change within the group; the remaining variables appear complemented or uncomplemented according to their fixed value in the group.
| \(A\) \ \(BC\) | 00 | 01 | 11 | 10 |
|---|---|---|---|---|
| 0 | 1 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 | 0 |
The two-cell group covering both rows of column \(BC=00\) gives term \(\overline{B}\,\overline{C}\) (minterms 0 and 4). In row \(A=0\), the group spanning columns \(BC=00\) and \(BC=01\) gives \(\overline{A}\,\overline{B}\) (minterms 0 and 1), and the group spanning columns \(BC=00\) and \(BC=10\) gives \(\overline{A}\,\overline{C}\) (minterms 0 and 2). All three are prime implicants, and each is essential: \(\overline{A}\,\overline{B}\) alone covers minterm 1, \(\overline{A}\,\overline{C}\) alone covers minterm 2, and \(\overline{B}\,\overline{C}\) alone covers minterm 4. The minimal SOP is therefore \(f = \overline{A}\,\overline{B} + \overline{A}\,\overline{C} + \overline{B}\,\overline{C}\).
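A cover read off a K-map can always be double-checked exhaustively against the minterm list. A Python sketch for this map, assuming the 1-cells \(\sum m(0, 1, 2, 4)\) and the prime implicants \(\overline{A}\,\overline{B}\), \(\overline{A}\,\overline{C}\), \(\overline{B}\,\overline{C}\):

```python
from itertools import product

minterms = {0, 1, 2, 4}          # the 1-cells of the map

def f_cover(a, b, c):
    """SOP built from the prime implicants A'B', A'C', B'C'."""
    return ((1 - a) & (1 - b)) | ((1 - a) & (1 - c)) | ((1 - b) & (1 - c))

for a, b, c in product((0, 1), repeat=3):
    assert f_cover(a, b, c) == (1 if a * 4 + b * 2 + c in minterms else 0)
```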
2.2.2 Don’t Care Conditions
Sometimes certain input combinations cannot occur (e.g., illegal BCD codes), or the output value is irrelevant for those inputs. These are marked as don’t cares (typically written as \(d\) or \(\times\) in the truth table). In a K-map, don’t cares may be included in a group if doing so creates a larger group, or left out — whichever leads to a simpler expression. They must not be the only cells in a group (at least one required 1-cell must be included).
2.2.3 Multiple-Output Minimization
When a circuit implements several functions sharing the same inputs, it may be advantageous to share logic terms between outputs even if a given term is not individually minimal for a single output. The general principle is that a product term shared across two outputs costs fewer gate inputs than two separate product terms.
2.3 Multilevel Synthesis and NAND/NOR Networks
Two-level SOP implementations map directly to AND-OR networks, which in turn can be converted to NAND-NAND networks by double-inverting each connection (inverting the AND outputs and inverting again at the OR stage). This transformation exploits De Morgan’s law and is important because NAND gates are the basic building block of many standard-cell libraries.
Similarly, POS circuits can be implemented as NOR-NOR networks. Multilevel synthesis trades off area (fewer literals, more gate stages) against speed (more propagation delay), a tension revisited throughout digital design.
2.4 Quine-McCluskey Tabular Method
The Quine-McCluskey (QM) method is an algorithmic approach to minimization that is suitable for automation and for functions with more than four or five variables, where K-maps become unwieldy.
Step 1: List all minterms in binary. Group them by the number of 1-bits (the Hamming weight).
Step 2: Combine pairs from adjacent groups that differ in exactly one bit position. Replace the differing bit with a dash (–) and check off the combined minterms. Repeat, combining terms with one dash with other terms having a dash in the same position to get terms with two dashes.
Step 3: Prime implicants are any terms that were not checked off at the end of this process (i.e., they could not be combined further).
Step 4: Prime implicant chart. Rows are prime implicants; columns are original minterms. Mark cells where a prime implicant covers a minterm. An essential prime implicant is one that is the only prime implicant covering some minterm. Every essential prime implicant must appear in the final cover. After including the essentials, select additional prime implicants to cover the remaining minterms, minimizing the total number of literals. This selection step is itself a covering problem, solved by Petrick’s method for small instances.
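Steps 1 through 3 (generating prime implicants by repeated combination) can be sketched compactly in Python. The names `combine` and `prime_implicants` are illustrative; the sketch compares all pairs rather than grouping by Hamming weight (Step 1 uses the grouping purely as a speed optimization), and the chart of Step 4 is omitted:

```python
from itertools import combinations

def combine(t1, t2):
    """Merge two implicant strings (chars 0/1/-) that differ in exactly one bit."""
    diff = [i for i in range(len(t1)) if t1[i] != t2[i]]
    if len(diff) == 1 and '-' not in (t1[diff[0]], t2[diff[0]]):
        i = diff[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for t1, t2 in combinations(sorted(terms), 2):
            merged = combine(t1, t2)
            if merged:
                nxt.add(merged)
                used.update((t1, t2))
        primes |= terms - used        # anything never combined is prime
        terms = nxt
    return primes
```

For \(\sum m(0, 1, 2, 4)\) this returns `{'00-', '0-0', '-00'}`, i.e. \(\overline{A}\,\overline{B}\), \(\overline{A}\,\overline{C}\), and \(\overline{B}\,\overline{C}\).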
Chapter 3: Combinational Circuit Building Blocks
3.1 Multiplexers and Shannon’s Expansion Theorem
A multiplexer (MUX) selects one of \(2^n\) data inputs using \(n\) select (control) lines and routes it to a single output. A 2-to-1 MUX with select \(S\), data inputs \(D_0\) and \(D_1\) has the equation:
\[ Y = \overline{S} \cdot D_0 + S \cdot D_1 \]
A 4-to-1 MUX with selects \(S_1 S_0\) generalizes to \(Y = \overline{S_1}\,\overline{S_0} D_0 + \overline{S_1} S_0 D_1 + S_1 \overline{S_0} D_2 + S_1 S_0 D_3\).
Shannon’s expansion theorem states that any Boolean function can be decomposed about one of its variables:
\[ f(x_1, x_2, \ldots, x_n) = \overline{x_1} \cdot f(0, x_2, \ldots, x_n) + x_1 \cdot f(1, x_2, \ldots, x_n) \]
The two sub-functions \(f(0, x_2, \ldots, x_n)\) and \(f(1, x_2, \ldots, x_n)\) are called the cofactors of \(f\) with respect to \(x_1\).
Shannon’s expansion shows that any function can be implemented by a MUX tree: use the variables one by one as select lines, and connect the terminal cofactors (which are functions of the remaining variables) to the data inputs. A single 4-to-1 MUX can implement any 3-variable function by using two of the variables as selects and connecting combinations of the third variable or its complement to the data inputs.
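The 4-to-1 MUX construction can be illustrated in Python for the 3-input majority function: using \(a\) and \(b\) as selects, the data inputs are the cofactors \(0\), \(c\), \(c\), and \(1\):

```python
from itertools import product

def majority(a, b, c):
    """3-input majority: 1 when at least two inputs are 1."""
    return (a & b) | (a & c) | (b & c)

def mux4(s1, s0, d0, d1, d2, d3):
    """4-to-1 multiplexer: route data input number s1 s0 to the output."""
    return (d0, d1, d2, d3)[s1 * 2 + s0]

for a, b, c in product((0, 1), repeat=3):
    # cofactors with a, b fixed: f(0,0,c)=0, f(0,1,c)=c, f(1,0,c)=c, f(1,1,c)=1
    assert majority(a, b, c) == mux4(a, b, 0, c, c, 1)
```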
3.2 Decoders and Encoders
A decoder takes an \(n\)-bit binary input and asserts exactly one of \(2^n\) output lines — the one corresponding to the binary value of the input. Decoders are the dual building blocks to MUXes: while a MUX selects one input to route to one output, a decoder maps one input to one of many outputs.
A 2-to-4 decoder with inputs \(A_1, A_0\) produces:
\[ Y_0 = \overline{A_1}\,\overline{A_0}, \quad Y_1 = \overline{A_1} A_0, \quad Y_2 = A_1 \overline{A_0}, \quad Y_3 = A_1 A_0 \]
The outputs are exactly the four minterms of two variables, which means any sum-of-minterms Boolean function can be built with a decoder plus an OR gate.
An encoder performs the inverse operation: it has \(2^n\) input lines (at most one asserted at a time) and outputs the \(n\)-bit binary code of the asserted input. A priority encoder handles the case where multiple inputs may be asserted simultaneously by encoding the highest-priority (usually highest-numbered) active input.
A demultiplexer (DEMUX) routes a single data input to one of \(2^n\) outputs selected by \(n\) control lines. It is the structural inverse of a MUX.
3.3 Binary Adders
3.3.1 Half Adder
A half adder computes the sum of two single bits:
\[ \text{Sum} = A \oplus B, \quad \text{Carry} = A \cdot B \]
It has no provision for an incoming carry, so it is used only for the least-significant bit position.
3.3.2 Full Adder
A full adder adds three bits: two data bits \(A\) and \(B\) and a carry-in \(C_{in}\):
\[ \text{Sum} = A \oplus B \oplus C_{in} \]
\[ C_{out} = A \cdot B + (A \oplus B) \cdot C_{in} \]
3.3.3 Ripple-Carry Adder
Chaining \(n\) full adders, with the carry-out of stage \(k\) feeding the carry-in of stage \(k+1\), gives an \(n\)-bit ripple-carry adder. It is simple to design but slow: the worst-case path propagates a carry from the LSB all the way to the MSB, traversing \(n\) full adder delays.
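A bit-level Python model of the ripple-carry structure (function names are illustrative) makes the stage-by-stage carry chain explicit:

```python
def full_adder(a, b, cin):
    """Single-bit full adder (Section 3.3.2 equations)."""
    s = a ^ b ^ cin
    cout = (a & b) | ((a ^ b) & cin)
    return s, cout

def ripple_add(a, b, n=4):
    """n-bit ripple-carry adder: carry-out of stage k feeds stage k+1."""
    carry, result = 0, 0
    for k in range(n):                     # LSB to MSB
        s, carry = full_adder((a >> k) & 1, (b >> k) & 1, carry)
        result |= s << k
    return result, carry                   # (n-bit sum, carry-out of the MSB)
```

For example, `ripple_add(13, 11)` returns `(8, 1)`, i.e. \(13 + 11 = 24 = 16 + 8\).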
3.3.4 Timing Considerations
For an \(n\)-bit ripple-carry adder, the worst-case propagation delay is:
\[ t_{add} = t_{HA} + (n-1) \cdot t_{FA,carry} \]
where \(t_{FA,carry}\) is the carry propagation delay through a single full adder and the least-significant stage is taken to be a half adder. For large \(n\), this becomes prohibitive. Carry-lookahead adders (CLA) break the dependency by computing carries in parallel using generate (\(G_k = A_k B_k\)) and propagate (\(P_k = A_k \oplus B_k\)) signals:
\[ C_{k+1} = G_k + P_k C_k \]
Unrolling this recurrence for a 4-bit block allows all carries to be computed in two gate delays regardless of the word length within the block.
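The generate/propagate recurrence can be modeled in Python; hardware unrolls the loop below into two-level logic, but the carries it computes are the same:

```python
def cla_carries(a, b, c0=0, n=4):
    """Carries of an n-bit block from generate/propagate signals."""
    g = [((a >> k) & (b >> k)) & 1 for k in range(n)]   # G_k = A_k B_k
    p = [((a >> k) ^ (b >> k)) & 1 for k in range(n)]   # P_k = A_k xor B_k
    c = [c0]
    for k in range(n):
        c.append(g[k] | (p[k] & c[k]))   # C_{k+1} = G_k + P_k C_k
    return c

# sum bit k is P_k xor C_k; the final carry matches ordinary addition
a, b = 0b1011, 0b0110
c = cla_carries(a, b)
s = sum(((((a >> k) ^ (b >> k) ^ c[k]) & 1) << k) for k in range(4))
assert (s, c[4]) == ((a + b) % 16, (a + b) // 16)
```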
3.4 Comparators
A binary comparator determines the relative magnitude of two \(n\)-bit numbers \(A\) and \(B\), producing outputs for \(A = B\), \(A > B\), and \(A < B\). Equality is checked bit-by-bit: \(A = B\) if and only if \(A_k \oplus B_k = 0\) for all \(k\), i.e., the XNOR of all bit pairs is 1.
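The bit-by-bit equality test is one line of Python (function name illustrative); a pair of bits is equal exactly when their XOR is 0:

```python
def equal(a, b, n=4):
    """A == B when no bit pair XORs to 1 (i.e., every pair XNORs to 1)."""
    return all((((a >> k) ^ (b >> k)) & 1) == 0 for k in range(n))

assert equal(9, 9) and not equal(9, 8)
```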
Chapter 4: Number Representation and Arithmetic
4.1 Unsigned Numbers
An \(n\)-bit unsigned number can represent integers in the range \([0, 2^n - 1]\). Binary addition wraps around modulo \(2^n\); an overflow is detected by a carry-out from the MSB.
4.2 Signed Number Representations
4.2.1 Sign-Magnitude
Reserve the MSB as a sign bit (0 = positive, 1 = negative); the remaining bits give the magnitude. This representation has two encodings for zero (\(+0\) and \(-0\)) and requires different hardware for addition and subtraction.
4.2.2 Two’s Complement
The universally preferred representation for signed integers in hardware. The range of \(n\)-bit two’s complement is \([-2^{n-1}, 2^{n-1} - 1]\).
Equivalently, the value represented by an \(n\)-bit pattern \(b_{n-1} b_{n-2} \ldots b_0\) is:
\[ V = -b_{n-1} \cdot 2^{n-1} + \sum_{k=0}^{n-2} b_k \cdot 2^k \]
The key advantage is that the same binary adder hardware correctly adds two’s complement numbers without modification. Overflow occurs when the carry into the sign bit differs from the carry out of the sign bit.
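The weighted-sum interpretation can be checked directly in Python; the same function also illustrates sign extension (Section 4.2.3), since prepending copies of the sign bit leaves the value unchanged:

```python
def twos_value(bits):
    """Value of an MSB-first bit string read as two's complement."""
    n = len(bits)
    v = sum(int(bits[n - 1 - k]) << k for k in range(n - 1))  # positive weights
    return v - (int(bits[0]) << (n - 1))                      # MSB weight is -2^(n-1)

assert twos_value("1011") == -5
assert twos_value("11111011") == -5   # sign extension preserves the value
```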
4.2.3 Sign Extension
To extend an \(n\)-bit two’s complement number to \(m > n\) bits, copy the sign bit into all new upper positions. This preserves the numerical value.
4.3 Binary Multiplication
Unsigned binary multiplication mirrors long multiplication in decimal: multiply the multiplicand by each bit of the multiplier, shift the partial product left by the bit position, and sum all partial products. An \(n \times n\) multiply produces a \(2n\)-bit result. The partial products are either 0 (multiplier bit = 0) or a shifted copy of the multiplicand (multiplier bit = 1), so the partial product terms are generated with AND gates and summed with an adder tree.
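The shift-and-add scheme can be sketched in Python (an illustrative word-level model, not a gate-level one):

```python
def shift_add_multiply(a, b, n=4):
    """Unsigned n x n multiply: sum the shifted partial products."""
    product = 0
    for k in range(n):
        if (b >> k) & 1:          # multiplier bit k selects the partial product
            product += a << k     # multiplicand shifted left by the bit position
    return product                # result is up to 2n bits wide

assert shift_add_multiply(13, 11) == 143
```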
Chapter 5: Sequential Logic — Storage Elements
5.1 The Need for Memory
Combinational circuits compute outputs that are instantaneous functions of their current inputs — they have no memory. Sequential circuits add state: their outputs depend on both current inputs and the history of past inputs stored in memory elements. This is the essential step from computing individual Boolean functions to implementing algorithms, protocols, and control systems.
5.2 Latches
A latch is a level-sensitive storage element: it samples its input and updates its state while a control signal (enable or clock) is at a particular level.
5.2.1 SR (Set-Reset) Latch
The SR latch is the fundamental bistable circuit. It has two cross-coupled NOR (or NAND) gates and two inputs, S (set) and R (reset), and two complementary outputs Q and \(\overline{Q}\).
- \(S=1, R=0\): forces \(Q=1\) (set state).
- \(S=0, R=1\): forces \(Q=0\) (reset state).
- \(S=0, R=0\): outputs hold their previous values (memory).
- \(S=1, R=1\): forbidden — both outputs driven to the same value, violating the complementary constraint and producing unpredictable behavior when inputs return to 00.
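The behavior above can be reproduced by iterating the two cross-coupled NOR equations until the outputs stop changing. A Python sketch (a logic-level idealization that ignores real gate delays; the function name is illustrative):

```python
def sr_nor_latch(s, r, q, qbar):
    """Iterate the cross-coupled NOR equations from state (q, qbar) until stable."""
    for _ in range(10):                 # more than enough iterations to settle
        q_new    = 1 - (r | qbar)       # Q    = NOR(R, Qbar)
        qbar_new = 1 - (s | q)          # Qbar = NOR(S, Q)
        if (q_new, qbar_new) == (q, qbar):
            break
        q, qbar = q_new, qbar_new
    return q, qbar

# set, hold, then reset
q, qbar = sr_nor_latch(1, 0, 0, 1)      # set   -> (1, 0)
q, qbar = sr_nor_latch(0, 0, q, qbar)   # hold  -> (1, 0)
q, qbar = sr_nor_latch(0, 1, q, qbar)   # reset -> (0, 1)
```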
5.2.2 Gated D Latch
The gated D latch adds an enable input \(E\) to the SR latch. When \(E=1\) (enabled), the output \(Q\) follows the data input \(D\). When \(E=0\), the latch holds its current value.
5.3 Flip-Flops
A flip-flop samples its data input only at a clock edge (rising or falling) and holds the sampled value until the next active edge. This edge-triggered behavior makes flip-flops the preferred storage element in synchronous digital design.
5.3.1 D Flip-Flop
The most common flip-flop type. On the active clock edge (say, rising), the output \(Q\) takes the value of the data input \(D\), provided \(D\) was stable for at least \(t_{setup}\) before the edge and remains stable for at least \(t_{hold}\) after.
5.3.2 JK Flip-Flop
The JK flip-flop eliminates the forbidden state of the SR latch. When both \(J=1\) and \(K=1\), the output toggles on each clock edge. The characteristic equation is:
\[ Q^+ = J \overline{Q} + \overline{K} Q \]
5.3.3 T Flip-Flop
A special case of the JK with \(J=K=T\). When \(T=1\), the output toggles on each active clock edge; when \(T=0\), it holds. The characteristic equation is:
\[ Q^+ = T \oplus Q \]
T flip-flops appear naturally in binary counter design.
5.3.4 Flip-Flop Excitation Tables
For synthesis (given the current state \(Q\) and desired next state \(Q^+\), what inputs are needed?), the excitation table is more useful than the characteristic equation. For the D flip-flop, trivially \(D = Q^+\). For the JK:
| \(Q\) | \(Q^+\) | \(J\) | \(K\) |
|---|---|---|---|
| 0 | 0 | 0 | d |
| 0 | 1 | 1 | d |
| 1 | 0 | d | 1 |
| 1 | 1 | d | 0 |
(d = don’t care)
5.4 Registers
A register is an array of \(n\) flip-flops sharing a common clock and (optionally) enable signal, storing an \(n\)-bit word. Registers are the basic storage unit at the microarchitectural level: program counters, accumulators, instruction registers, and address registers are all implemented as registers.
A shift register additionally connects the output of each flip-flop to the data input of the next, enabling serial-in/parallel-out (SIPO) or parallel-in/serial-out (PISO) conversions. An \(n\)-bit linear feedback shift register (LFSR) uses XOR feedback to cycle through \(2^n - 1\) pseudo-random states and is widely used in test pattern generation and CRC computation.
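A maximal-length LFSR is easy to demonstrate in Python. The sketch below assumes a 4-bit Fibonacci LFSR with taps at bit positions 4 and 3 (feedback polynomial \(x^4 + x^3 + 1\), which is primitive):

```python
def lfsr4_period(seed=0b0001):
    """Count steps until a 4-bit LFSR (taps 4 and 3) returns to its seed."""
    state, count = seed, 0
    while True:
        feedback = ((state >> 3) ^ (state >> 2)) & 1   # XOR of the tap bits
        state = ((state << 1) | feedback) & 0xF        # shift left, insert feedback
        count += 1
        if state == seed:
            return count

assert lfsr4_period() == 15   # all 2^4 - 1 nonzero states are visited
```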
5.5 Counters
5.5.1 Asynchronous (Ripple) Counters
In an asynchronous counter, the clock of each flip-flop is driven by the output of the previous flip-flop rather than by the system clock. This means state transitions ripple through the chain, producing intermediate glitch states during the propagation. Asynchronous counters are simple but produce incorrect readings if the output is sampled before all flip-flops have settled.
5.5.2 Synchronous Counters
All flip-flops are clocked simultaneously from the system clock. The next-state logic computes the new value of each bit in parallel. A standard synchronous binary up-counter has toggle logic for each bit:
- Bit 0 always toggles: \(T_0 = 1\).
- Bit 1 toggles when bit 0 is 1: \(T_1 = Q_0\).
- Bit 2 toggles when bits 0 and 1 are both 1: \(T_2 = Q_0 Q_1\).
- In general, \(T_k = Q_0 Q_1 \cdots Q_{k-1}\).
A carry-chain optimization computes these enables incrementally to reduce the fan-in of each AND gate.
Modulo-\(m\) counters can be built for any \(m\) by resetting or loading a preset value when the count reaches \(m-1\). Up-down counters add a direction control signal.
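The toggle-enable scheme can be modeled in Python, applying \(Q^+ = T \oplus Q\) to every bit on each clock edge (function name illustrative):

```python
def counter_step(q, n=4):
    """One clock edge of an n-bit synchronous up-counter built from T flip-flops."""
    enable = 1                        # T_0 = 1: bit 0 always toggles
    next_q = 0
    for k in range(n):
        qk = (q >> k) & 1
        next_q |= (qk ^ enable) << k  # T flip-flop: Q+ = T xor Q
        enable &= qk                  # T_{k+1} = Q_0 Q_1 ... Q_k (carry chain)
    return next_q

# counts 0, 1, 2, ... like a modulo-16 binary counter
q = 0
for expected in (1, 2, 3, 4):
    q = counter_step(q)
    assert q == expected
```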
Chapter 6: Synchronous Sequential Circuit Design
6.1 Finite State Machines
A finite state machine (FSM) is a sequential circuit model with a finite set of states, inputs, outputs, a transition function, and an output function. Two standard models are used: in the Moore model, the outputs depend only on the current state; in the Mealy model, the outputs depend on both the current state and the current inputs.
6.2 State Diagram and State Table
A state diagram is a directed graph in which:
- Nodes represent states (circles, labeled with the state name and, for Moore, the output).
- Arcs represent transitions, labeled with the input (and, for Mealy, the output).
The state table is the tabular equivalent, listing current state, input combinations, next state, and output.
States: A (initial, no 1 seen), B (one 1 seen).
| Current State | Input | Next State | Output |
|---|---|---|---|
| A | 0 | A | 0 |
| A | 1 | B | 0 |
| B | 0 | A | 0 |
| B | 1 | B | 1 |
The output is 1 only in state B with input 1, meaning a second (or further) consecutive 1 has been received.
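The state table can be executed directly. A Python sketch of this Mealy machine (function name illustrative):

```python
def detect_two_ones(bits):
    """Mealy FSM from the state table: states A (no 1 seen) and B (one 1 seen)."""
    state, outputs = 'A', []
    for x in bits:
        outputs.append(1 if (state == 'B' and x == 1) else 0)  # arc output
        state = 'B' if x == 1 else 'A'                          # next state
    return outputs

assert detect_two_ones([0, 1, 1, 0, 1, 1, 1]) == [0, 0, 1, 0, 0, 1, 1]
```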
6.3 State Encoding and State Minimization
Before implementing an FSM in hardware, redundant states must be removed. Two states are equivalent if, for every possible input sequence, they produce identical output sequences. Equivalent states can be merged.
The state minimization algorithm (partitioning method):
- Initially partition states into groups by output value.
- Iteratively refine: two states in the same group are split into different groups if their transitions on some input lead to states in different groups.
- Repeat until no further refinements are possible.
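The refinement loop can be sketched in Python for a Mealy machine, where the initial partition groups states by their output signature across all inputs (the dictionary-based encoding and the name `minimize` are illustrative):

```python
def minimize(states, inputs, next_state, output):
    """Partition refinement: split groups until all transitions agree."""
    part = {}
    for s in states:                          # initial partition by outputs
        key = tuple(output[s, x] for x in inputs)
        part.setdefault(key, set()).add(s)
    groups = list(part.values())
    changed = True
    while changed:
        changed = False
        def group_of(s):
            return next(i for i, g in enumerate(groups) if s in g)
        new_groups = []
        for g in groups:
            buckets = {}                      # split g by destination groups
            for s in g:
                sig = tuple(group_of(next_state[s, x]) for x in inputs)
                buckets.setdefault(sig, set()).add(s)
            new_groups.extend(buckets.values())
            if len(buckets) > 1:
                changed = True
        groups = new_groups
    return groups
```

Each final group is one state of the minimized machine.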
6.3.1 State Assignment
Assign a binary code to each state. For \(m\) states, at least \(\lceil \log_2 m \rceil\) flip-flops are needed. Common schemes include:
- Sequential binary: simple, minimizes flip-flop count.
- One-hot: one flip-flop per state, only one is 1 at any time. Simplifies next-state logic at the cost of more flip-flops. Preferred in FPGA designs.
- Gray code: adjacent states differ by one bit; can reduce transitions and power consumption.
6.3.2 Next-State Logic Derivation
Given a state encoding, construct the excitation table for each flip-flop and minimize the next-state logic using K-maps or other methods. The output logic is minimized separately.
6.4 Analysis of Synchronous Sequential Circuits
Given a circuit netlist, derive the FSM behavior:
- Write the Boolean equations for each flip-flop’s input.
- Write the output equations.
- Construct the state table by evaluating next-state and output for all combinations of current state and input.
- Draw the state diagram.
- Identify the functional behavior from the state diagram.
Chapter 7: Asynchronous Sequential Circuits
7.1 Asynchronous vs. Synchronous Design
In an asynchronous sequential circuit (ASC), there is no global clock. State changes are triggered directly by input changes. This offers lower latency and potentially lower power (since not all elements switch at every clock edge), but the design is significantly more complex: races, hazards, and critical timing depend on physical propagation delays that vary across fabrication process, voltage, and temperature.
ASCs are encountered in interfacing circuits, arbiters, and legacy designs. Understanding them is also essential for analyzing metastability and asynchronous inputs to synchronous systems.
7.2 Fundamental Mode Assumption
Most ASC analysis assumes fundamental mode: input changes one at a time, and no new input change occurs until the circuit has reached a stable state — a state from which no further transitions occur without a new input change. Under this assumption, the circuit will always settle before the next input event.
7.3 Flow Table Analysis
The flow table is the state table for an ASC, where stable states are circled. Given the current state and input, the table gives the next state (or total state). If the next state equals the current state, it is stable.
Analysis procedure:
- Identify all feedback loops (these form the state variables).
- Assign Boolean variables to the secondary state.
- Write excitation equations for each state variable and output equations.
- Construct the transition table; circle stable states.
- Derive a primitive flow table, then reduce it.
7.4 Races and Hazards
7.4.1 Races
A race occurs when multiple state variables must change simultaneously. If the final state reached is the same regardless of the order of changes, it is a non-critical race (benign). If different change orderings lead to different stable states, it is a critical race and must be eliminated by redesigning the state assignment.
A race-free state assignment ensures that any required multi-bit transition passes through intermediate states, one bit at a time, in a defined order.
7.4.2 Static Hazards
A static-1 hazard is a momentary 0 glitch in an output that should remain 1 during a transition. It occurs in SOP implementations when switching between two minterms in different product terms — the brief moment when both terms evaluate to 0 creates the glitch.
A hazard is detected by finding two adjacent 1-cells in the K-map that are covered by different prime implicants, with no prime implicant spanning both. The remedy is to add a consensus term — a redundant product term that covers the boundary — to bridge the gap.
7.4.3 Dynamic Hazards
A dynamic hazard is a triple transition (0→1→0→1 or 1→0→1→0) in an output that should make a single transition. Dynamic hazards arise in multilevel circuits and are more complex to eliminate.
7.5 ASC Synthesis
Synthesis procedure:
- Construct a primitive flow table from the word specification.
- Merge compatible rows to reduce state count (rows are compatible if they have identical outputs in all specified input columns and non-contradictory next states).
- Assign state codes using a race-free assignment.
- Derive excitation and output equations.
- Check for hazards in the excitation equations; add consensus terms if needed.
Chapter 8: Timing Analysis
8.1 Propagation Delay
Every logic gate and interconnect has a propagation delay — the time for a change at the input to produce a stable change at the output. Two parameters characterize gate delay:
- \(t_{pHL}\): propagation delay from input transition to output falling (High-to-Low).
- \(t_{pLH}\): propagation delay from input transition to output rising (Low-to-High).
The larger of the two is often used as a single worst-case delay. In multi-level networks, the critical path delay is the sum of propagation delays along the longest path from primary input to primary output.
8.2 Setup and Hold Times
For a flip-flop clocked at edge \(t_{clk}\):
- Setup time \(t_{su}\): the data input D must be stable for at least \(t_{su}\) before the active clock edge.
- Hold time \(t_h\): D must remain stable for at least \(t_h\) after the active clock edge.
Violating either constraint can cause the flip-flop to sample the wrong value or to become metastable: the flip-flop enters a meta-stable equilibrium that resolves unpredictably to 0 or 1 after an unbounded delay.
8.3 Maximum Clock Frequency
In a synchronous pipeline stage with combinational logic between two registers:
\[ T_{clk,min} = t_{clk-to-Q} + t_{comb} + t_{su} \]
where \(t_{clk-to-Q}\) is the flip-flop clock-to-output delay and \(t_{comb}\) is the worst-case combinational path delay. The maximum operating frequency is \(f_{max} = 1 / T_{clk,min}\).
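A small worked example (the timing numbers below are assumed for illustration, not taken from any real library):

```python
# Hypothetical timing numbers, assumed for illustration only
t_clk_to_q = 0.5e-9   # flip-flop clock-to-Q delay (s)
t_comb     = 4.3e-9   # worst-case combinational path delay (s)
t_su       = 0.2e-9   # flip-flop setup time (s)

t_clk_min = t_clk_to_q + t_comb + t_su   # minimum clock period: 5.0 ns
f_max = 1 / t_clk_min                    # maximum frequency: 200 MHz
print(f"f_max = {f_max / 1e6:.0f} MHz")
```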
The hold time constraint is:
\[ t_{clk-to-Q,min} + t_{comb,min} > t_h \]
where min refers to the shortest (best-case) path. If this constraint is violated, a buffer must be inserted on the short path to increase its delay.
8.4 Timing Diagrams
A timing diagram displays signal waveforms as a function of time. For synchronous circuits:
- Clock is shown first, with regular rising/falling edges.
- Input signals show transitions relative to the clock edge.
- Register outputs show changes at the clock edge plus the propagation delay.
- Combinational outputs show glitches (hazards) as narrow pulses during transitions.
8.5 Fan-out
Fan-out — the number of gate inputs driven by a single output — increases both electrical loading (and thus propagation delay) and power consumption. Exceeding a gate’s rated fan-out causes the output voltage to deviate beyond valid logic levels.
Chapter 9: Hardware Description Languages and Implementation Technologies
9.1 The Role of HDLs
Hardware description languages (HDLs) allow designers to describe digital circuits at multiple levels of abstraction — behavioral, register-transfer level (RTL), and structural — and to simulate and synthesize designs using electronic design automation (EDA) tools. The two dominant HDLs in industry are VHDL and Verilog. ECE 124 uses Verilog.
9.2 Verilog Fundamentals
A Verilog design is organized into modules. Each module declares its ports (inputs and outputs) and describes internal logic using continuous assignments (for combinational logic), always blocks (for sequential or conditional logic), and instantiation of sub-modules.
9.2.1 Combinational Logic in Verilog
Continuous assignment with assign implements combinational logic:
```verilog
module mux2to1 (
    input  wire sel, a, b,
    output wire y
);
    assign y = sel ? b : a;
endmodule
```
An always @(*) block with only blocking assignments (=) also synthesizes to combinational logic, provided all outputs are assigned in every branch:
```verilog
module decoder2to4 (
    input  wire [1:0] in,
    output reg  [3:0] out
);
    always @(*) begin
        case (in)
            2'b00: out = 4'b0001;
            2'b01: out = 4'b0010;
            2'b10: out = 4'b0100;
            2'b11: out = 4'b1000;
        endcase
    end
endmodule
```
9.2.2 Sequential Logic in Verilog
Edge-sensitive behavior (flip-flops) is described with always @(posedge clk) using non-blocking assignments (<=). Non-blocking assignments evaluate the right-hand side using the current values of all signals, then update the left-hand side simultaneously at the end of the time step — mimicking the behavior of a bank of flip-flops clocked together.
```verilog
module dff (
    input  wire clk, rst_n, d,
    output reg  q
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            q <= 1'b0;
        else
            q <= d;
    end
endmodule
```
9.2.3 FSM in Verilog
A standard Verilog FSM template uses two always blocks: one for the state register (sequential), one for the next-state and output logic (combinational).
module seq_detector (
    input  wire clk, rst_n, x,
    output reg  z
);
    // State encoding with localparam (typedef enum would require SystemVerilog)
    localparam A = 1'b0, B = 1'b1;
    reg state, next_state;

    // State register
    always @(posedge clk or negedge rst_n)
        if (!rst_n) state <= A;
        else        state <= next_state;

    // Next-state and output logic
    always @(*) begin
        case (state)
            A: begin
                next_state = x ? B : A;
                z = 1'b0;
            end
            B: begin
                next_state = B;
                z = x ? 1'b1 : 1'b0;
            end
        endcase
    end
endmodule
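As an informal sanity check (not a substitute for a Verilog testbench), the detector's behavior can be simulated in Python: state A outputs 0 and waits for the first 1; state B then echoes x on every subsequent cycle.

```python
# Informal Python model of the two-state detector: state A waits for the
# first x = 1 (outputting 0); state B then asserts z whenever x = 1.
def seq_detector(bits):
    state, out = "A", []
    for x in bits:
        if state == "A":
            out.append(0)          # Mealy output is 0 in state A
            if x:
                state = "B"        # first 1 seen: move to B
        else:
            out.append(1 if x else 0)  # in B, z follows x
    return out

print(seq_detector([0, 1, 1, 0, 1]))  # [0, 0, 1, 0, 1]
```

The first 1 produces z = 0 (the machine is still in A when it samples x); every later 1 produces z = 1.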
9.3 Implementation Technologies
9.3.1 CMOS Logic
Complementary Metal-Oxide-Semiconductor (CMOS) technology implements each logic gate with a pull-up network (PMOS transistors, connecting the output to VDD) and a pull-down network (NMOS transistors, connecting the output to GND). The two networks are duals — the PMOS transistors are connected in parallel wherever the NMOS transistors are in series, and vice versa — ensuring exactly one network conducts in any static state, giving near-zero static power dissipation.
Dynamic power is \(P = \alpha C_L V_{DD}^2 f\), where \(\alpha\) is the activity factor (fraction of clock cycles with a transition), \(C_L\) is the load capacitance, \(V_{DD}\) is the supply voltage, and \(f\) is the clock frequency. Reducing \(V_{DD}\) is the most powerful lever for power reduction, with a quadratic benefit.
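Plugging representative numbers into this formula makes the quadratic lever concrete. The values below are illustrative, not from the course.

```python
# Dynamic power P = alpha * C_L * V_DD^2 * f.
# Halving V_DD (all else equal) cuts dynamic power by 4x.
def dynamic_power(alpha, c_load, vdd, freq):
    return alpha * c_load * vdd ** 2 * freq

p_nominal = dynamic_power(0.1, 1e-9, 1.2, 1e9)  # alpha=0.1, 1 nF, 1.2 V, 1 GHz
p_scaled  = dynamic_power(0.1, 1e-9, 0.6, 1e9)  # same circuit at 0.6 V
print(p_nominal, p_scaled, p_nominal / p_scaled)  # ratio ~ 4: quadratic in V_DD
```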
9.3.2 Standard Cell Libraries
Industrial digital design uses pre-characterized cell libraries: a catalog of logic cells (inverters, NAND, NOR, flip-flops, MUXes) whose area, delay, and power have been characterized across process corners (fast/slow transistors), supply voltage, and temperature (PVT corners). Synthesis tools automatically select and connect cells from the library to implement an HDL description.
9.3.3 FPGAs
A Field-Programmable Gate Array (FPGA) is a device consisting of an array of programmable logic blocks (PLBs), interconnect, and I/O cells that can be configured after manufacture to implement any digital circuit. Each PLB contains a look-up table (LUT) — typically 4- or 6-input — that can implement any Boolean function of its inputs, plus one or more flip-flops.
ECE 124 labs use FPGAs (Quartus Prime EDA tool, Intel/Altera family) to implement and verify combinational and sequential designs. FPGAs are used in prototyping, low-volume production, and applications requiring field reconfiguration.
Key FPGA characteristics:
- LUT-based logic: any \(k\)-input Boolean function fits in a single \(k\)-input LUT.
- Block RAM (BRAM): dedicated on-chip memory blocks.
- DSP blocks: dedicated multiplier-accumulator units.
- Higher propagation delay than ASICs due to programmable routing.
- Reconfigurable: the same silicon can be reprogrammed for different designs.
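The LUT concept is easy to model: a \(k\)-input LUT is just a \(2^k\)-entry truth table addressed by the input bits. The Python sketch below is illustrative (the helper `make_lut` is a made-up name, not a course or vendor API).

```python
# A k-input LUT is a 2^k-entry truth table indexed by the input bits.
def make_lut(func, k):
    """'Program' a LUT by precomputing the truth table of a k-input function."""
    table = []
    for i in range(2 ** k):
        bits = [(i >> j) & 1 for j in range(k)]   # unpack index into input bits
        table.append(func(*bits))
    # Reading the LUT is a single table lookup, regardless of how complex func was.
    return lambda *inputs: table[sum(b << j for j, b in enumerate(inputs))]

# Program one 3-input LUT as a full-adder sum bit: S = A xor B xor Cin.
lut = make_lut(lambda a, b, cin: a ^ b ^ cin, 3)
print(lut(1, 1, 0), lut(1, 1, 1))  # 0 1
```

This is why a LUT's delay is independent of the function it implements: every function of \(k\) inputs reduces to the same lookup.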
Chapter 10: Bringing It Together — Design Methodology
10.1 Synchronous Design Discipline
Professional digital designers adhere to the synchronous design discipline: all state is stored in flip-flops clocked by a single global clock (or carefully managed clock domains). Combinational logic between registers is required to settle within one clock period. This discipline ensures predictable, analyzable behavior and makes formal timing analysis tractable.
The three commandments of synchronous design are:
- Every feedback loop passes through at least one flip-flop.
- Combinational paths from one register to the next satisfy the setup constraint.
- No path is so short as to violate the hold constraint.
Violating any of these rules leads to functional failures that are notoriously difficult to debug because they are timing-dependent and may be sensitive to temperature, supply voltage, and process variation.
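The setup rule \(T_{clk} \ge t_{clk-Q} + t_{comb} + t_{su}\) fixes the maximum clock frequency of a synchronous design. A small calculation with made-up delay numbers:

```python
# Setup constraint: T_clk >= t_clk-Q + t_comb + t_su.
# The slowest register-to-register path sets the minimum clock period.
def max_frequency(t_clk_q, t_comb, t_su):
    t_min = t_clk_q + t_comb + t_su   # shortest legal clock period (seconds)
    return 1.0 / t_min

# Illustrative delays: 0.5 ns clk-to-Q, 3.0 ns combinational, 0.5 ns setup.
f_max = max_frequency(0.5e-9, 3.0e-9, 0.5e-9)
print(f"f_max = {f_max / 1e6:.0f} MHz")  # 4 ns period -> 250 MHz
```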
10.2 Design Hierarchy and Modularity
Complex digital systems are designed hierarchically. A processor is decomposed into a datapath and a control unit; the datapath is decomposed into an ALU, registers, and buses; the ALU is decomposed into adders, comparators, and shifters; and so on down to primitive gates. This hierarchy:
- Manages complexity by limiting the designer’s cognitive load at each level.
- Enables reuse: a well-tested adder module can be instantiated many times.
- Enables parallel development by independent teams.
- Maps naturally to HDL module instantiation.
10.3 Verification
Simulation is the primary verification method at the logic level. A testbench is a non-synthesizable HDL file that instantiates the design under test (DUT), drives input stimuli, and checks output responses against expected values. Verilog's system tasks ($monitor, $display) and, in SystemVerilog, assertion statements report failures.
A timing diagram derived from simulation shows whether the design correctly implements the intended state machine. After synthesis, a gate-level netlist simulation with back-annotated delays (from place-and-route) verifies that timing constraints are met under realistic conditions.
10.4 Worked Design Example — Vending Machine Controller
A vending machine accepts nickels (5¢) and dimes (10¢) and dispenses an item costing 15¢. Inputs: N (nickel inserted), D (dime inserted). Output: Z (dispense item).
State encoding: S0 = 0¢, S1 = 5¢, S2 = 10¢, S3 = 15¢ (or more). S3 asserts the dispense output for one clock cycle and then returns to S0, so four states (two state bits) suffice.
State diagram (Moore machine):
- S0: Z=0. N→S1, D→S2.
- S1: Z=0. N→S2, D→S3.
- S2: Z=0. N→S3, D→S3.
- S3: Z=1. Returns to S0 on the next clock edge after dispensing.
State encoding with 2 bits (Q1 Q0): S0=00, S1=01, S2=10, S3=11.
Next-state logic (using D flip-flops, inputs N and D, current state Q1Q0):
From the state table:
| Q1 | Q0 | N | D | D1 | D0 |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 |
| 0 | 0 | 0 | 1 | 1 | 0 |
| 0 | 1 | 0 | 0 | 0 | 1 |
| 0 | 1 | 1 | 0 | 1 | 0 |
| 0 | 1 | 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 1 | 0 |
| 1 | 0 | 1 | 0 | 1 | 1 |
| 1 | 0 | 0 | 1 | 1 | 1 |
| 1 | 1 | x | x | 0 | 0 |
(When in S3, Z=1 and next state is S0 regardless of inputs; inputs N and D are don’t cares.)
K-map minimization, treating the impossible input combination N = D = 1 as a don't-care (the two coins cannot be inserted in the same cycle), yields:
\[ D_1 = Q_1 \overline{Q_0} + \overline{Q_1} D + \overline{Q_1} Q_0 N \]\[ D_0 = \overline{Q_0} N + Q_1 \overline{Q_0} D + \overline{Q_1} Q_0 \overline{N} \]Because S3 (\(Q_1 Q_0 = 11\)) returns to S0 regardless of the inputs, both expressions evaluate to 0 when \(Q_1 = Q_0 = 1\), and the Moore output is simply \(Z = Q_1 Q_0\).
This example integrates all the major topics of the course: Boolean algebra, K-map minimization, state machine design, state encoding, next-state logic derivation, and HDL implementation.
Chapter 11: Summary and Connections
11.1 The Digital Design Flow
The complete flow from specification to working hardware follows these steps:
- Specification: Define inputs, outputs, and desired behavior (truth table, state machine, timing diagram).
- Algorithmic / RTL design: Partition the system into datapath and control; design FSMs for control.
- Logic synthesis: Minimize Boolean functions; select gate implementations.
- Technology mapping: Map logic to a target library (standard cells) or FPGA LUTs.
- Placement and routing: Physically assign cells to locations on the chip and route interconnects.
- Timing analysis: Verify that all setup and hold constraints are met across all PVT corners.
- Simulation and verification: Functional and timing simulation to catch errors before fabrication or programming.
- Testing: Apply test vectors to the manufactured chip; use scan chains and built-in self-test (BIST) for testability.
11.2 Key Formulas and Rules
| Concept | Formula / Rule |
|---|---|
| De Morgan’s (AND) | \(\overline{x \cdot y} = \overline{x} + \overline{y}\) |
| De Morgan’s (OR) | \(\overline{x + y} = \overline{x} \cdot \overline{y}\) |
| Shannon expansion | \(f = \overline{x} f_0 + x f_1\) |
| Full adder sum | \(S = A \oplus B \oplus C_{in}\) |
| Full adder carry | \(C_{out} = AB + (A \oplus B) C_{in}\) |
| Two’s complement negation | \(-x = \overline{x} + 1\) |
| D FF setup constraint | \(T_{clk} \ge t_{clk-Q} + t_{comb} + t_{su}\) |
| Dynamic power (CMOS) | \(P = \alpha C_L V_{DD}^2 f\) |
| T FF characteristic | \(Q^+ = T \oplus Q\) |
| JK FF characteristic | \(Q^+ = J\overline{Q} + \overline{K}Q\) |
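Entries like the full-adder rows can be verified mechanically. The quick Python self-check below confirms that the sum and carry equations reproduce ordinary integer addition for all eight input combinations.

```python
# Check the full-adder equations S = A^B^Cin and Cout = A*B + (A^B)*Cin
# against integer addition: 2*Cout + S must equal A + B + Cin.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                   # sum bit
    cout = (a & b) | ((a ^ b) & cin)  # carry-out bit
    return cout, s

all_match = all(
    2 * cout + s == a + b + cin
    for a in (0, 1) for b in (0, 1) for cin in (0, 1)
    for cout, s in [full_adder(a, b, cin)]
)
print("full-adder equations verified:", all_match)
```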
11.3 Connections to Later Courses
The foundations built in ECE 124 directly support:
- ECE 222 / ECE 224 (Digital Computers): The datapath of a processor — ALUs, register files, program counters, instruction decoders — is a direct extension of the combinational and sequential building blocks developed here.
- ECE 327 (Digital Hardware Systems): RTL design, pipelining, memory systems, and HDL-based ASIC flows.
- ECE 428 / 429 (VLSI): Transistor-level implementation of logic gates in CMOS, layout, and power analysis.
- ECE 406 (Algorithm Design and Analysis applied to hardware): Arithmetic circuits, number-theoretic transforms, hardware for cryptography.
- Software Engineering courses: Even software engineers benefit from understanding how the hardware they program actually works — from clock cycles to cache timing to memory-mapped I/O.