ECE 380: Analog Control Systems

Michael Fisher



Sources and References

Primary references — G.F. Franklin, J.D. Powell & A. Emami-Naeini, Feedback Control of Dynamic Systems, 8th ed., Pearson, 2019; R.C. Dorf & R.H. Bishop, Modern Control Systems, 14th ed., Pearson. Supplementary — K.J. Åström & R.M. Murray, Feedback Systems: An Introduction for Scientists and Engineers (open access at fbsbook.org); L. Qiu & K. Zhou, Introduction to Feedback Control. Online resources — MIT OCW 6.302 Feedback Systems; Python Control Systems Library (python-control.readthedocs.io).


Chapter 1: Feedback and the Control Problem

1.1 Why Feedback Matters

A fundamental tension runs through all of engineering: we build systems whose behaviour we want to prescribe, yet the physical world continuously conspires against us through manufacturing tolerances, aging components, environmental disturbances, and the inherent limits of any mathematical model. Feedback control is the discipline that resolves this tension. By measuring what a system is actually doing and continuously correcting the difference between desired and actual behaviour, a well-designed feedback controller can make a sluggish, poorly characterised, or even open-loop unstable plant perform to specification — reliably, repeatedly, and robustly.

The importance of feedback in electrical engineering in particular can hardly be overstated. Harold Black’s 1927 invention of the negative-feedback amplifier is among the pivotal moments in the history of electrical engineering. Before feedback, telephone repeater amplifiers had gains that drifted with temperature and tube aging; feedback reduced gain sensitivity by a factor equal to one plus the loop gain, making intercontinental telephony feasible. The same principle — using a measured signal to correct an ongoing process — now governs everything from op-amp circuits and power converter duty cycles to motor drives and the attitude control of satellites.

1.1.1 Open-Loop Versus Closed-Loop Architectures

In an open-loop system the controller produces its output purely on the basis of the reference command, with no knowledge of what the plant is actually doing. A microwave oven running for a preset time regardless of the food’s internal temperature is a familiar example. Open-loop schemes are simple and inexpensive but break down whenever the plant behaviour deviates from the model used at design time: component drift, load changes, or external disturbances all produce uncompensated output errors.

In a closed-loop (feedback) system a sensor measures the plant output \( y(t) \) and the controller forms the error signal

\[ e(t) = r(t) - y(t), \]

where \( r(t) \) is the reference (setpoint). The controller \( C \) processes \( e(t) \) to produce the actuating signal \( u(t) \), which drives the plant \( G \) so as to reduce \( e \). The closed-loop transfer function (from reference \( R \) to output \( Y \) in the Laplace domain) for a unity-feedback architecture is

\[ T(s) = \frac{C(s)\,G(s)}{1 + C(s)\,G(s)}, \]

where \( G(s) \) is the plant transfer function. Setting the denominator to zero, \( 1 + C(s)G(s) = 0 \), gives the characteristic equation of the closed-loop system; its roots (the closed-loop poles) determine stability and transient performance.

1.1.2 Consequences of Feedback

Feedback buys four key engineering benefits, each of which has a price:

  1. Disturbance rejection. The closed-loop transfer function from disturbance \( D \) to output is \( G/(1 + CG) \), which is suppressed by the factor \( 1 + CG \) relative to the open-loop disturbance response. Large loop gain means better rejection, but loop gain cannot be made arbitrarily large without risking instability.

  2. Reduced sensitivity to plant uncertainty. The sensitivity function \( S(s) = 1/(1+CG) \) quantifies how a fractional change in \( G \) maps to a fractional change in \( T \). Specifically, \( \delta T / T = S \cdot \delta G / G \), so high loop gain (\( |CG| \gg 1 \)) makes \( |S| \approx 0 \) and the closed-loop gain is insensitive to plant variations.

  3. Bandwidth extension. A transistor amplifier with open-loop bandwidth \( \omega_{OL} \) and gain \( A_0 \) can be embedded in a feedback loop to produce closed-loop bandwidth \( \omega_{CL} \approx (1 + A_0 \beta)\,\omega_{OL} \) at the cost of reduced gain \( A_{CL} \approx A_0/(1 + A_0 \beta) \). This gain-bandwidth trade-off is the fundamental operating principle of every broadband op-amp circuit.

  4. Linearisation of nonlinear plants. Feedback suppresses the effect of plant nonlinearities within the bandwidth of the loop, enabling us to use linear analysis and design methods even for mildly nonlinear physical systems — provided the loop gain is sufficiently high at the frequencies where nonlinearity matters.
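The sensitivity benefit can be checked numerically. The sketch below is a minimal illustration, assuming a hypothetical first-order plant \( G(s) = 1/(s+1) \) with a proportional controller \( C(s) = K \) (neither appears in the text); it evaluates \( |S| \) and \( |T| \) on the imaginary axis:

```python
# Hypothetical first-order plant G(s) = 1/(s+1) with proportional controller C(s) = K.
# Evaluate the sensitivity S = 1/(1+CG) and complementary sensitivity T = CG/(1+CG)
# at frequency w (rad/s).
def S_and_T(K, w):
    s = complex(0, w)
    G = 1.0 / (s + 1.0)
    L = K * G                      # loop gain C(s)G(s)
    S = 1.0 / (1.0 + L)            # sensitivity to plant variations
    T = L / (1.0 + L)              # closed-loop reference-to-output gain
    return abs(S), abs(T)

# At DC, raising the gain K drives |S| toward 0 and |T| toward 1.
for K in (1, 10, 100):
    S_mag, T_mag = S_and_T(K, 0.0)
    print(f"K={K:>3}: |S|={S_mag:.4f}, |T|={T_mag:.4f}")
```

Note the algebraic identity \( S(s) + T(s) = 1 \) at every frequency: good tracking and good disturbance rejection are two faces of the same design choice.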

ECE perspective. In ECE applications, feedback control appears at every scale: a phase-locked loop (PLL) locks a voltage-controlled oscillator (VCO) to a reference clock using feedback through a phase detector and loop filter; a linear regulator maintains output voltage by comparing it to an internal reference and adjusting a pass transistor; a motor drive closes a current loop, then a speed loop, then a position loop — three nested feedback loops operating at different bandwidths. The theory developed in this course applies uniformly to all these examples.

1.2 Course Road Map

ECE 380 develops the “classical” framework for feedback control design: transfer functions, block diagrams, and frequency-domain techniques. The journey proceeds as follows. Chapters 2–3 build the modelling toolkit — Laplace transforms, transfer functions, and block diagram algebra — applied to mechanical, electrical, and electromechanical systems. Chapter 4 characterises the time-domain behaviour of first- and second-order systems and connects pole locations to transient specifications. Chapter 5 addresses stability through the Routh-Hurwitz algebraic criterion. Chapters 6–7 introduce the root locus method for visualising how closed-loop poles move as a design parameter varies, and use it for PID and lead-lag compensator design. Chapters 8–9 shift to the frequency domain, covering Bode plots, the Nyquist stability criterion, and stability margins. Chapter 10 brings the design cycle to completion with frequency-domain compensator synthesis — loop shaping using lead, lag, and lead-lag networks.


Chapter 2: Mathematical Modelling of Physical Systems

2.1 The Laplace Transform

Almost all of classical control is carried out in the Laplace domain. The Laplace transform converts an ordinary differential equation (ODE) into an algebraic equation in the complex frequency variable \( s \), enabling systematic manipulation before inversion back to the time domain.

One-Sided Laplace Transform. For a causal signal \( f(t) \), \( t \geq 0 \), the Laplace transform is \[ F(s) = \mathcal{L}\{f(t)\} = \int_0^{\infty} f(t)\,e^{-st}\,dt, \]

where \( s = \sigma + j\omega \in \mathbb{C} \) and the integral converges for \( \text{Re}(s) \) larger than the abscissa of absolute convergence.

The indispensable transform pairs are collected in the table below.

| Signal \( f(t) \) | Transform \( F(s) \) |
| --- | --- |
| \( \delta(t) \) (unit impulse) | \( 1 \) |
| \( u_s(t) \) (unit step) | \( 1/s \) |
| \( t\,u_s(t) \) (unit ramp) | \( 1/s^2 \) |
| \( e^{-at} \) | \( 1/(s+a) \) |
| \( t\,e^{-at} \) | \( 1/(s+a)^2 \) |
| \( \sin(\omega_n t) \) | \( \omega_n/(s^2+\omega_n^2) \) |
| \( \cos(\omega_n t) \) | \( s/(s^2+\omega_n^2) \) |
| \( e^{-\sigma t}\sin(\omega_d t) \) | \( \omega_d/[(s+\sigma)^2+\omega_d^2] \) |

The operational property most used in control is the differentiation rule: \( \mathcal{L}\{\dot{f}(t)\} = s\,F(s) - f(0^-) \). With zero initial conditions, differentiation in the time domain corresponds to multiplication by \( s \) in the Laplace domain, turning an \( n \)th-order ODE into an algebraic equation of degree \( n \).

2.1.1 Initial and Final Value Theorems

Initial Value Theorem. \( \lim_{t \to 0^+} f(t) = \lim_{s \to \infty} s\,F(s) \), provided the limit exists.

Final Value Theorem. \( \lim_{t \to \infty} f(t) = \lim_{s \to 0} s\,F(s) \), provided all poles of \( s\,F(s) \) lie in the open left half-plane (OLHP).

The final value theorem is the control engineer’s shortcut to steady-state error without having to invert the Laplace transform: simply evaluate \( \lim_{s \to 0} s\,E(s) \) where \( E(s) \) is the error transform.
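As a quick numerical sanity check, the shortcut can be approximated by evaluating \( s\,E(s) \) at a very small \( s \). The sketch below assumes a hypothetical type-0 unity-feedback loop (plant \( 1/(s+1) \), proportional gain \( K \), not an example from the text) with a unit-step reference:

```python
# Final value theorem check: e_ss = lim_{s->0} s E(s) for a unity-feedback loop.
# Assumed example: plant G(s) = 1/(s+1) with proportional gain K, unit-step reference.
def steady_state_error(K, s=1e-9):
    G = 1.0 / (s + 1.0)
    E = (1.0 / s) / (1.0 + K * G)   # E(s) = R(s)/(1 + K G(s)), with R(s) = 1/s
    return s * E                     # s E(s) evaluated near s = 0

# A type-0 loop leaves a finite step error 1/(1+K); larger gain shrinks it.
print(steady_state_error(9))
```

The result matches the classical step-error formula \( e_{ss} = 1/(1+K_p) \) developed later in Section 5.3.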

2.2 Transfer Functions

Transfer Function. For a linear, time-invariant (LTI) system with zero initial conditions, the transfer function \( G(s) \) is \[ G(s) = \frac{Y(s)}{U(s)} = \frac{b_m s^m + b_{m-1}s^{m-1} + \cdots + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_0}, \]

where the integer \( n \geq m \) (the system is proper) or \( n > m \) (strictly proper). The roots of the numerator are the zeros; the roots of the denominator are the poles.

A strictly proper transfer function models every physical plant: no physical system can respond instantaneously, so the output cannot have components of higher derivative order than the input, which forces \( n > m \). Controllers, by contrast, are sometimes improper in theory (pure derivative action) but must be rendered proper in practice.

2.3 Modelling Mechanical Systems

Newton’s second law in translational and rotational form yields the equations of motion for lumped mechanical systems.

Mass-spring-damper (translational). A mass \( m \) connected to ground by a spring of stiffness \( k \) and a viscous damper with coefficient \( b \), driven by an applied force \( u(t) \):

\[ m\,\ddot{x} + b\,\dot{x} + k\,x = u(t). \]

Taking the Laplace transform with zero initial conditions and defining \( X(s) \) as the displacement:

\[ G(s) = \frac{X(s)}{U(s)} = \frac{1}{ms^2 + bs + k}. \]
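To connect this transfer function to a time response, the ODE can be integrated directly. A minimal forward-Euler sketch (the parameter values here are chosen for illustration, not taken from the text):

```python
# Numerical sketch: unit-step response of m*x'' + b*x' + k*x = u via forward Euler.
# Illustrative parameters: m = 1 kg, b = 2 N*s/m, k = 4 N/m.
def msd_step(m=1.0, b=2.0, k=4.0, u=1.0, dt=1e-4, t_end=10.0):
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (u - b * v - k * x) / m   # Newton's second law solved for acceleration
        x += v * dt
        v += a * dt
    return x

# The DC gain of G(s) = 1/(ms^2 + bs + k) is G(0) = 1/k, so a unit step
# settles at 1/k = 0.25 for these values.
print(round(msd_step(), 3))
```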

Rotational systems. A rigid body of moment of inertia \( J \), acted on by a torque input \( \tau(t) \) and resisted by a viscous friction torque \( b\,\dot{\theta} \) and a torsional spring \( k\,\theta \):

\[ J\,\ddot{\theta} + b\,\dot{\theta} + k\,\theta = \tau(t), \qquad G(s) = \frac{\Theta(s)}{\mathcal{T}(s)} = \frac{1}{Js^2 + bs + k}. \]

For a free rotor (no torsional spring, \( k = 0 \)), \( G(s) = 1/(Js^2 + bs) = 1/[s(Js+b)] \), which contains a pure integrator. This integrator is physically significant: position is the integral of angular velocity, so applying a torque to a free disc eventually produces unbounded rotation unless feedback corrects it.

2.4 Modelling Electrical Systems

Kirchhoff’s voltage and current laws, combined with the constitutive relations for resistors, capacitors, and inductors, produce the plant models that are native to ECE 380.

2.4.1 Series RLC Circuit

For a series RLC circuit with input voltage \( u(t) \) and output taken as the capacitor voltage \( y(t) = v_C \), the KVL equation is

\[ L\,\frac{di}{dt} + R\,i + \frac{1}{C}\int i\,dt = u(t), \qquad i = C\,\dot{y}. \]

Substituting \( i = C\,\dot{y} \) into the KVL equation:

\[ LC\,\ddot{y} + RC\,\dot{y} + y = u(t), \qquad G(s) = \frac{Y(s)}{U(s)} = \frac{1}{LCs^2 + RCs + 1}. \]

This is structurally identical to the mechanical mass-spring-damper with \( L \leftrightarrow m \), \( R \leftrightarrow b \), and \( 1/C \leftrightarrow k \). The universality of the second-order transfer function across physical domains is one of the deepest and most useful facts in engineering science.

2.4.2 Op-Amp Integrator and Differentiator

Operational amplifiers implement the fundamental control operations of integration and differentiation in analog hardware. Because \( V_{-} \approx V_{+} \) and \( i_{in} \approx 0 \) for an ideal op-amp:

Inverting integrator (capacitor in feedback path, resistor at input):

\[ V_{out}(s) = -\frac{1}{RCs}\,V_{in}(s), \qquad G(s) = -\frac{1}{RCs}. \]

Inverting differentiator (resistor in feedback path, capacitor at input):

\[ V_{out}(s) = -RCs\,V_{in}(s), \qquad G(s) = -RCs. \]

The differentiator amplifies high-frequency noise and is avoided in practice; practical derivative action uses a pole-augmented version \( -RCs/(1 + \tau_f s) \) with \( \tau_f \ll RC \).

2.4.3 DC Motor Model

A permanent-magnet DC motor is the canonical electromechanical plant. The electrical subsystem is governed by

\[ L_a\,\frac{di_a}{dt} + R_a\,i_a = u(t) - K_b\,\dot{\theta}, \]

where \( u \) is the armature voltage, \( K_b \) is the back-EMF constant, and \( \dot{\theta} \) is the shaft speed. The mechanical subsystem satisfies

\[ J\,\ddot{\theta} + b\,\dot{\theta} = K_t\,i_a, \]

where \( K_t \) is the motor torque constant. For SI units, energy conservation requires \( K_t = K_b \). Eliminating \( i_a \) and taking the Laplace transform yields the transfer function from armature voltage to shaft angle:

\[ G(s) = \frac{\Theta(s)}{U(s)} = \frac{K_t}{s\left[(L_a s + R_a)(Js + b) + K_t K_b\right]}. \]

In most practical motors the electrical time constant \( \tau_e = L_a/R_a \) is much smaller than the mechanical time constant \( \tau_m = J/b \), so setting \( L_a \approx 0 \) gives the simplified model

\[ G(s) = \frac{K_t / (R_a\,J)}{s\!\left(s + \frac{R_a b + K_t K_b}{R_a J}\right)} = \frac{K}{s(s + a)}, \]

a second-order plant with a pole at the origin (angle is the integral of speed) and a single stable pole at \( s = -a \).

Example 2.1: DC Motor Transfer Function.

A DC motor has \( R_a = 1\,\Omega \), \( L_a \approx 0 \), \( K_t = K_b = 0.1\,\text{V·s/rad} \), \( J = 0.01\,\text{kg·m}^2 \), \( b = 0.001\,\text{N·m·s/rad} \). Find the transfer function from armature voltage to shaft speed \( \Omega(s) \).

Solution. Using the simplified model (\( L_a = 0 \)):

\[ G_\Omega(s) = \frac{\Omega(s)}{U(s)} = \frac{K_t/R_a}{Js + b + K_t K_b/R_a} = \frac{0.1}{0.01s + 0.001 + 0.01} = \frac{0.1}{0.01s + 0.011} = \frac{10}{s + 1.1}. \]

The motor time constant is \( \tau = J/(b + K_t K_b/R_a) = 0.01/0.011 \approx 0.909\,\text{s} \) and the DC gain is \( K_{DC} = 10/1.1 \approx 9.09\,\text{rad/s per volt} \).
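The arithmetic in Example 2.1 is easily scripted; this short check re-derives the transfer function coefficients, time constant, and DC gain:

```python
# Re-derive the numbers in Example 2.1 from the simplified (L_a = 0) motor model.
R_a, K_t, K_b, J, b = 1.0, 0.1, 0.1, 0.01, 0.001

num = K_t / R_a                       # numerator of G_Omega(s)
den1 = J                              # coefficient of s in the denominator
den0 = b + K_t * K_b / R_a            # constant term: b + Kt*Kb/Ra

tau = den1 / den0                     # motor time constant (s)
K_dc = num / den0                     # DC gain (rad/s per volt)
print(f"G(s) = {num}/({den1} s + {den0}),  tau = {tau:.3f} s,  K_dc = {K_dc:.2f}")
```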

2.5 Linearisation of Nonlinear Systems

Physical systems are rarely linear over their full operating range. The standard engineering approach is to linearise around a chosen operating point (equilibrium).

Let the nonlinear state equation be \( \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, u) \), with equilibrium \( (\mathbf{x}_0, u_0) \) satisfying \( \mathbf{f}(\mathbf{x}_0, u_0) = \mathbf{0} \). Defining perturbations \( \delta\mathbf{x} = \mathbf{x} - \mathbf{x}_0 \) and \( \delta u = u - u_0 \), the first-order Taylor expansion gives the linearised model:

\[ \delta\dot{\mathbf{x}} = A\,\delta\mathbf{x} + B\,\delta u, \qquad A = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\mathbf{x}_0,u_0}, \quad B = \left.\frac{\partial \mathbf{f}}{\partial u}\right|_{\mathbf{x}_0,u_0}. \]

This linearised system is exactly LTI and can be analysed using all the tools of transfer function theory — within the neighbourhood of the operating point where the linear approximation is valid.
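The Jacobians \( A \) and \( B \) can be approximated by finite differences, which is also how software tools linearise models. The sketch below uses an assumed pendulum example, \( \dot{x}_1 = x_2 \), \( \dot{x}_2 = -(g/l)\sin x_1 + u \) (not a system from the text):

```python
import math

# Assumed example plant: pendulum dynamics x1' = x2, x2' = -(g/l) sin(x1) + u.
g, l = 9.81, 1.0

def f(x, u):
    return [x[1], -(g / l) * math.sin(x[0]) + u]

def jacobians(x0, u0, eps=1e-6):
    """Finite-difference A = df/dx and B = df/du at the operating point (x0, u0)."""
    n = len(x0)
    f0 = f(x0, u0)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x0)
        xp[j] += eps
        fj = f(xp, u0)
        for i in range(n):
            A[i][j] = (fj[i] - f0[i]) / eps
    B = [(fi - f0i) / eps for fi, f0i in zip(f(list(x0), u0 + eps), f0)]
    return A, B

# Around the hanging equilibrium (x = 0, u = 0): A = [[0, 1], [-g/l, 0]], B = [0, 1].
A, B = jacobians([0.0, 0.0], 0.0)
```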


Chapter 3: Block Diagrams and Signal Flow Graphs

3.1 Block Diagram Algebra

A block diagram is a graphical language for expressing the signal relationships in a control system. Each block represents an operation (typically a transfer function), and signals flow along directed branches between blocks. Three elementary operations cover all cases:

  • Cascade (series) connection: \( Y = G_2(G_1 U) \Rightarrow G_{eq} = G_1 G_2 \).
  • Parallel connection: \( Y = (G_1 + G_2)U \Rightarrow G_{eq} = G_1 + G_2 \).
  • Feedback loop: With forward path \( G \) and feedback path \( H \), the closed-loop transfer function is \( G_{eq} = G/(1 + GH) \).

A sequence of block diagram reduction rules — moving summing junctions, moving branch points, and applying the three elementary equivalences — allows any block diagram to be reduced to a single transfer function. However, for complex multi-loop diagrams with many cross-connections, systematic reduction becomes tedious. Signal flow graphs offer a more efficient alternative.

3.2 Signal Flow Graphs and Mason’s Rule

A signal flow graph (SFG) is a directed graph in which each node represents a signal and each directed edge (branch) carries a gain equal to the transfer function between the signals at its two endpoints. Every block diagram has an equivalent SFG, constructed by mapping each summing junction to a node (with incoming branches from all inputs) and each transfer function block to a directed branch.

SFG Terminology.
  • Source node: a node with no incoming branches (input).
  • Sink node: a node with no outgoing branches (output).
  • Path: a sequence of connected branches traversed in the direction of their arrows, visiting no node more than once.
  • Forward path: a path from source to sink.
  • Loop: a closed path returning to its starting node, visiting no other node more than once.
  • Non-touching loops: two loops that share no nodes.

Mason's Gain Rule. The transfer function from source node \( r \) to sink node \( y \) is \[ T = \frac{y}{r} = \frac{1}{\Delta}\sum_k P_k\,\Delta_k, \]

where:

  • \( \Delta = 1 - \sum_i L_i + \sum_{i,j} L_i L_j - \sum_{i,j,k} L_i L_j L_k + \cdots \) is the graph determinant: the sums run over all loops \( L_i \), all pairs of non-touching loops, all triples of non-touching loops, etc.
  • \( P_k \) is the gain of the \( k \)th forward path.
  • \( \Delta_k \) is the cofactor of the \( k \)th forward path — the graph determinant computed after removing all nodes and branches that touch path \( k \).

Example 3.1: Mason's Rule for a Two-Loop System.

Consider a forward path gain \( G_1 G_2 G_3 \), a minor feedback loop of gain \( -G_2 H_1 \) around \( G_2 \), and a major feedback loop of gain \( -G_1 G_2 G_3 H_2 \) around the entire forward path.

Forward paths: One path, \( P_1 = G_1 G_2 G_3 \).

Loops: \( L_1 = -G_2 H_1 \), \( L_2 = -G_1 G_2 G_3 H_2 \). These loops share nodes, so they are touching and there are no non-touching pairs.

\[ \Delta = 1 - (L_1 + L_2) = 1 + G_2 H_1 + G_1 G_2 G_3 H_2. \]

Path \( P_1 \) touches both loops, so \( \Delta_1 = 1 \).

\[ T = \frac{G_1 G_2 G_3}{1 + G_2 H_1 + G_1 G_2 G_3 H_2}. \]

Mason’s rule shines for systems with many parallel forward paths or non-touching loops, where block diagram reduction requires many steps. For the standard unity-feedback loop it simply recovers \( T = G/(1+G) \) — identical to block diagram reduction, but in one systematic formula.
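Mason's formula can be cross-checked against stepwise block diagram reduction for the two-loop structure of Example 3.1; the numerical gains below are arbitrary test values:

```python
# Numeric cross-check of Example 3.1: Mason's formula vs. direct loop algebra.
def mason_T(G1, G2, G3, H1, H2):
    P1 = G1 * G2 * G3                        # single forward path
    L1, L2 = -G2 * H1, -G1 * G2 * G3 * H2    # the two (touching) loops
    Delta = 1 - (L1 + L2)                    # no non-touching pairs
    Delta1 = 1                               # path P1 touches both loops
    return P1 * Delta1 / Delta

def reduce_T(G1, G2, G3, H1, H2):
    inner = G2 / (1 + G2 * H1)               # close the minor loop around G2 first
    fwd = G1 * inner * G3
    return fwd / (1 + fwd * H2)              # then close the major loop

vals = (2.0, 3.0, 0.5, 0.4, 0.25)            # arbitrary block gains
print(mason_T(*vals), reduce_T(*vals))       # the two answers agree
```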


Chapter 4: Time-Domain Response

4.1 First-Order Systems

The first-order transfer function

\[ G(s) = \frac{K}{\tau s + 1} \]

describes an enormous variety of physical subsystems: a thermal mass exchanging heat with its environment, an RC low-pass filter, a tank with a drain, and — as shown in Section 2.4.3 — a DC motor speed response when armature inductance is neglected.

The step response (unit step input, zero initial conditions) is

\[ y(t) = K\left(1 - e^{-t/\tau}\right), \quad t \geq 0. \]

At \( t = \tau \) the output has reached \( 63.2\% \) of its final value \( K \); at \( t = 4\tau \) it is within \( 1.8\% \), and the system is considered to have “settled.” The time constant \( \tau \) therefore sets the response speed, while \( K \) is the DC gain.

Experimental identification. The time constant of a first-order plant can be read directly from the step response: draw a tangent to the step response at \( t = 0 \); it intersects the steady-state level at \( t = \tau \). Alternatively, the half-time \( t_{50} = \tau \ln 2 \approx 0.693\,\tau \) provides another graphical estimate. Both techniques are useful in the lab when fitting a model to measured data.
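The 63.2% rule is easy to automate when fitting lab data. The sketch below generates a synthetic first-order step response (assumed values \( K = 2 \), \( \tau = 0.5\,\text{s} \), not from the text) and recovers \( \tau \) from the crossing time:

```python
import math

# Synthetic "measured" first-order step response: y(t) = K (1 - exp(-t/tau)).
K_gain, tau_true, dt = 2.0, 0.5, 1e-4
t, y = [], []
for k in range(int(5 * tau_true / dt)):
    t.append(k * dt)
    y.append(K_gain * (1 - math.exp(-k * dt / tau_true)))

# Estimate tau as the time at which the response first reaches 63.2% of its
# final sampled value.
target = 0.632 * y[-1]
tau_est = next(ti for ti, yi in zip(t, y) if yi >= target)
print(tau_est)
```

Because the record stops at \( 5\tau \), the last sample slightly underestimates the true final value, so the estimate is biased a little low; on real data the same caveat applies.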

4.2 Second-Order Systems

The standard second-order transfer function is

\[ G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \]

where \( \omega_n > 0 \) is the natural frequency and \( \zeta \geq 0 \) is the damping ratio. The poles are

\[ s_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}. \]

Four qualitatively distinct cases arise:

  • Overdamped (\( \zeta > 1 \)): two distinct real poles on the negative real axis; the step response approaches steady state monotonically with no overshoot, dominated by the slower pole.
  • Critically damped (\( \zeta = 1 \)): a repeated real pole at \( s = -\omega_n \); the fastest monotone response for the given \( \omega_n \).
  • Underdamped (\( 0 < \zeta < 1 \)): a complex conjugate pair at \( s = -\sigma \pm j\omega_d \) where \( \sigma = \zeta\omega_n \) and \( \omega_d = \omega_n\sqrt{1-\zeta^2} \); the step response exhibits decaying oscillation.
  • Undamped (\( \zeta = 0 \)): poles on the imaginary axis; the step response oscillates indefinitely with frequency \( \omega_n \).

The underdamped step response is the most important for control design. With zero initial conditions and unit step input:

\[ y(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\,\sin\!\left(\omega_d t + \phi\right), \quad \phi = \arccos\zeta, \quad t \geq 0. \]

4.2.1 Transient Performance Specifications

The following metrics characterise the underdamped step response and are the language in which control specifications are written.

Standard Transient Specifications.
  • Rise time \( t_r \): time for the output to rise from 10% to 90% of its final value. Approximation: \( t_r \approx (1.8)/\omega_n \) for \( 0.3 \leq \zeta \leq 0.8 \).
  • Peak time \( t_p \): time to reach the first (maximum) overshoot. Exactly \( t_p = \pi/\omega_d \).
  • Percent overshoot \( \%OS \): \( \%OS = 100\,e^{-\pi\zeta/\sqrt{1-\zeta^2}} \). Depends only on \( \zeta \).
  • Settling time \( t_s \): time for the response to remain within a band of \( \pm 2\% \) (or \( \pm 5\% \)) of the final value. Approximation: \( t_s \approx 4/(\zeta\omega_n) \) for the 2% criterion.

These four specifications translate directly into constraints on the acceptable closed-loop pole locations:

  • A requirement on \( \%OS \) sets a minimum damping ratio \( \zeta_{min} \), which in the s-plane corresponds to poles lying to the left of radial lines making angle \( \pm\arccos(\zeta_{min}) \) with the negative real axis.
  • A requirement on settling time sets a minimum \( \sigma_{min} = \zeta\omega_n \), i.e., the poles must lie to the left of the vertical line \( \text{Re}(s) = -\sigma_{min} \).
  • A requirement on rise time or bandwidth sets a minimum \( \omega_n \), i.e., the poles must lie outside a circle of radius \( \omega_{n,min} \) centred at the origin.

The intersection of these regions defines the design region in the s-plane, a concept central to root locus controller design.

Example 4.1: Translating Specs to Pole Locations.

Specifications: \( \%OS \leq 16\% \), \( t_s \leq 2\,\text{s} \) (2% criterion).

From %OS: \( 16 = 100\,e^{-\pi\zeta/\sqrt{1-\zeta^2}} \Rightarrow \zeta \geq 0.504 \). Poles must lie to the left of radial lines at \( \pm 59.7°\) from the negative real axis.

From \( t_s \): \( \sigma = \zeta\omega_n \geq 4/t_s = 2\,\text{s}^{-1} \). Poles must lie to the left of \( \text{Re}(s) = -2 \).

The design region is the wedge to the left of \( \text{Re}(s) = -2 \) and inside the radial lines. A suitable pole pair might be \( s_{1,2} = -2.5 \pm j3 \), giving \( \zeta = 0.64 \), \( \omega_n = 3.9\,\text{rad/s} \), \( \%OS \approx 7\% \), and \( t_s \approx 1.6\,\text{s} \).
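The candidate pole pair can be checked directly against the formulas of Section 4.2.1:

```python
import math

# Verify the candidate pole pair from Example 4.1: s = -2.5 +/- j3.
sigma, wd = 2.5, 3.0
wn = math.hypot(sigma, wd)                  # natural frequency
zeta = sigma / wn                           # damping ratio
pos = 100 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta ** 2))  # % overshoot
ts = 4 / sigma                              # 2% settling time
print(f"zeta={zeta:.2f}, wn={wn:.2f} rad/s, %OS={pos:.1f}%, ts={ts:.2f} s")
```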

4.2.2 Effect of Additional Poles and Zeros

Real plants have higher-order dynamics beyond the dominant second-order pair. The effect of additional poles and zeros is crucial to understand.

Additional poles further to the left of the dominant pair (at least five times further) are negligible: they decay so quickly that the dominant poles govern the response. As the additional pole moves toward the dominant pair, it slows the response and reduces the effective bandwidth.

Non-minimum-phase zeros in the right half-plane cause the step response to initially move in the wrong direction (undershoot). This behaviour appears in systems such as an aircraft's elevator-to-altitude response or the tip deflection of a flexible beam, and it places fundamental limits on achievable bandwidth.

Zeros in the left half-plane near the dominant poles tend to speed up the response (decrease rise time) and increase overshoot if the zero is in the passband.


Chapter 5: Stability and the Routh-Hurwitz Criterion

5.1 Concepts of Stability

BIBO Stability. A system is bounded-input bounded-output (BIBO) stable if every bounded input produces a bounded output. For an LTI system with transfer function \( G(s) \), BIBO stability is equivalent to the requirement that all poles of \( G(s) \) lie strictly in the open left half-plane (OLHP): \( \text{Re}(p_i) < 0 \) for all poles \( p_i \).

Asymptotic (Internal) Stability. A system described by state equation \( \dot{\mathbf{x}} = A\mathbf{x} \) is asymptotically stable if and only if all eigenvalues of \( A \) have strictly negative real parts. For the LTI input-output case (with no pole-zero cancellations), asymptotic stability and BIBO stability coincide.

A pole at the origin (\( s = 0 \)) or a pole pair on the imaginary axis (\( s = \pm j\omega_0 \)) makes the system marginally stable: some bounded input — a step for the integrator, a sinusoid at the resonant frequency for the imaginary-axis pair — produces an unbounded output. A pole in the open right half-plane (ORHP) causes exponential growth. Either situation is unacceptable in a practical control system.

5.2 The Routh-Hurwitz Stability Criterion

Given the closed-loop characteristic polynomial

\[ a(s) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0, \]

a necessary condition for all roots to lie in the OLHP is that all coefficients satisfy \( a_i > 0 \). If any coefficient is zero or negative, at least one root lies on the imaginary axis or in the right half-plane. The condition is not sufficient, however: a polynomial can have all positive coefficients yet still have a complex root pair in the ORHP.

The Routh-Hurwitz criterion provides the necessary and sufficient condition, computed without finding the roots.

Routh-Hurwitz Criterion. Arrange the coefficients of \( a(s) \) into the Routh array: \[ \begin{array}{c|cccc} s^n & a_n & a_{n-2} & a_{n-4} & \cdots \\ s^{n-1} & a_{n-1} & a_{n-3} & a_{n-5} & \cdots \\ s^{n-2} & b_1 & b_2 & b_3 & \cdots \\ \vdots & \vdots & & & \\ s^0 & c_1 & & & \end{array} \]

where each element is computed from the two rows immediately above it:

\[ b_1 = \frac{a_{n-1}\,a_{n-2} - a_n\,a_{n-3}}{a_{n-1}}, \quad b_2 = \frac{a_{n-1}\,a_{n-4} - a_n\,a_{n-5}}{a_{n-1}}, \quad \ldots \]

The number of sign changes in the first column of the Routh array equals the number of roots of \( a(s) \) in the closed right half-plane. The system is stable if and only if all elements in the first column are strictly positive (no sign changes).

Example 5.1: Stability Range via Routh-Hurwitz.

A unity-feedback loop with plant \( G(s) = K/[s(s+1)(s+3)] \) has the closed-loop characteristic polynomial

\[ a(s) = s^3 + 4s^2 + 3s + K. \]

Routh array:

\[ \begin{array}{c|cc} s^3 & 1 & 3 \\ s^2 & 4 & K \\ s^1 & (12 - K)/4 & 0 \\ s^0 & K & \end{array} \]

For stability, both \( (12 - K)/4 > 0 \) and \( K > 0 \) must hold, giving \( 0 < K < 12 \).

At \( K = 12 \), the \( s^1 \) entry is zero. The auxiliary polynomial (formed from the \( s^2 \) row at the zero entry) is \( 4s^2 + 12 = 0 \), giving purely imaginary roots \( s = \pm j\sqrt{3} \): the system is marginally stable and oscillates at \( \sqrt{3}\,\text{rad/s} \). This is the frequency at which the Nyquist plot of the loop crosses the negative real axis — the phase crossover frequency.
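For a cubic, the Routh first column is short enough to compute by hand or script. This sketch reproduces the stability range of Example 5.1:

```python
# First column of the Routh array for the monic cubic s^3 + a2 s^2 + a1 s + a0,
# applied to Example 5.1 (a2 = 4, a1 = 3, a0 = K).
def routh_first_column_cubic(a2, a1, a0):
    b1 = (a2 * a1 - 1 * a0) / a2      # s^1 row entry
    return [1.0, a2, b1, a0]

def stable(K):
    col = routh_first_column_cubic(4.0, 3.0, K)
    return all(c > 0 for c in col)    # stable iff no sign changes

# 0 < K < 12 keeps every first-column entry strictly positive.
print(stable(5.0), stable(12.0), stable(20.0))
```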

5.2.1 Special Cases in the Routh Array

Two special cases require modified procedures:

  1. Zero first-column entry (non-zero row): Replace the zero with a small positive \( \epsilon > 0 \), complete the array, and count sign changes as \( \epsilon \to 0^+ \).

  2. Entire row of zeros: This occurs when the polynomial has symmetric roots (e.g., a pair on the imaginary axis, or pairs symmetric about the origin). Form the auxiliary polynomial from the row above the zero row, differentiate it, and use its coefficients to replace the zero row. The roots of the auxiliary polynomial are a subset of roots of \( a(s) \).

5.3 Steady-State Error

The steady-state error of a unity-feedback system to a test input depends on the system type — the number of poles at the origin in the open-loop transfer function \( L(s) = C(s)G(s) \).

System Type and Error Constants. Write \( L(s) = K_{dc}\,n(s)/[s^N d(s)] \) where \( n(s) \) and \( d(s) \) have no roots at the origin and \( n(0) = d(0) = 1 \). The integer \( N \) is the system type.
  • Position (step) error constant: \( K_p = \lim_{s\to 0} L(s) \). Steady-state error \( e_{ss} = 1/(1+K_p) \).
  • Velocity (ramp) error constant: \( K_v = \lim_{s\to 0} s\,L(s) \). Steady-state error \( e_{ss} = 1/K_v \).
  • Acceleration error constant: \( K_a = \lim_{s\to 0} s^2 L(s) \). Steady-state error \( e_{ss} = 1/K_a \).

For a Type \( N \) system: a step gives zero error for \( N \geq 1 \), a ramp gives zero error for \( N \geq 2 \), and a parabola gives zero error for \( N \geq 3 \).

Increasing the system type improves steady-state tracking but generally degrades stability margins, which is why the trade-off between tracking accuracy and stability is central to controller design.


Chapter 6: Root Locus Method

6.1 The Root Locus Concept

The root locus is a plot in the complex s-plane of the closed-loop poles as a real scalar parameter — typically the controller gain \( K \) — varies from zero to infinity. It provides a geometric picture of how stability and transient performance change with gain, giving the designer a direct visual handle on the design problem.

For a unity-feedback system with open-loop transfer function \( L(s) = K\,G(s)H(s) \), the closed-loop characteristic equation is

\[ 1 + K\,G(s)H(s) = 0 \implies K\,G(s)H(s) = -1. \]

Interpreting \( -1 \) in polar form: a point \( s \) lies on the root locus (for positive \( K \)) if and only if

\[ |G(s)H(s)| = \frac{1}{K} \quad \text{(magnitude condition)}, \]\[ \angle G(s)H(s) = (2k+1)\cdot 180°, \quad k \in \mathbb{Z} \quad \text{(angle condition)}. \]

The angle condition alone determines the locus geometry; the magnitude condition then tells us what gain \( K \) places a closed-loop pole at any chosen point on the locus.

6.2 Construction Rules for the Root Locus

Let the open-loop transfer function have \( m \) finite zeros and \( n \) finite poles (\( n \geq m \)).

Root Locus Construction Rules (positive \( K \)).
  1. Number of branches: The root locus has \( n \) branches, one starting at each open-loop pole (at \( K = 0 \)) and ending at an open-loop zero or at infinity (as \( K \to \infty \)).
  2. Symmetry: The root locus is symmetric about the real axis, since complex poles always appear in conjugate pairs.
  3. Real-axis segments: A point on the real axis lies on the root locus if and only if the total number of real poles and real zeros to its right is odd.
  4. Asymptotes: The \( n - m \) branches that go to infinity do so along asymptotes with angles \( \phi_k = (2k+1)\cdot 180°/(n-m) \), \( k = 0, 1, \ldots, n-m-1 \), all emanating from the centroid \( \sigma_a = (\sum\text{poles} - \sum\text{zeros})/(n-m) \).
  5. Breakaway/break-in points: At a breakaway point (where branches leave the real axis) or break-in point (where they return), the gain \( K = -1/G(s)H(s) \) evaluated on the real axis has \( dK/ds = 0 \). Equivalently, \( \sum 1/(s-p_i) = \sum 1/(s-z_j) \).
  6. Angles of departure/arrival: The angle of departure from a complex pole \( p_k \) is \( 180° - \sum_{j}\angle(p_k - z_j) + \sum_{i \neq k}\angle(p_k - p_i) \). The angle of arrival at a complex zero \( z_k \) is \( 180° + \sum_{j}\angle(z_k - p_j) - \sum_{i \neq k}\angle(z_k - z_i) \).
  7. Imaginary axis crossings: Use the Routh-Hurwitz criterion (set the \( s^1 \) row to zero) or substitute \( s = j\omega \) into the characteristic equation and solve for \( \omega \) and \( K \).

Example 6.1: Sketching the Root Locus for \( L(s) = K/[s(s+2)(s+4)] \).

Open-loop poles: \( 0,\,-2,\,-4 \). No finite zeros. \( n = 3 \), \( m = 0 \).

Real-axis segments: To the left of \( -4 \) there are 3 poles to the right (odd), so \( (-\infty, -4] \) is on the locus. Between \( -2 \) and \( 0 \) there is one pole to the right (odd), so \( [-2, 0] \) is also on the locus. The segment \( [-4, -2] \) has two poles to the right (even), so it is NOT on the locus.

Asymptotes: \( n - m = 3 \) asymptotes at angles \( 60°, 180°, 300° \) (-60°), from centroid \( \sigma_a = (0 - 2 - 4)/3 = -2 \).

Breakaway point on \( [-2, 0] \): Set \( d/ds[-1/L(s)] = 0 \): solving gives a breakaway near \( s \approx -0.85 \).

Imaginary-axis crossing: The characteristic polynomial is \( s^3 + 6s^2 + 8s + K = 0 \). Routh array \( s^1 \) element: \( (48 - K)/6 = 0 \Rightarrow K = 48 \). Substituting \( s = j\omega \): \( -j\omega^3 - 6\omega^2 + 8j\omega + 48 = 0 \Rightarrow \omega = 2\sqrt{2}\,\text{rad/s} \).
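The breakaway point and imaginary-axis crossing above can be cross-checked numerically. A minimal sketch using NumPy, with the polynomial coefficients taken from the example:

```python
import numpy as np

# Root locus checks for L(s) = K / [s(s+2)(s+4)].
# Breakaway: dK/ds = 0 with K = -s(s+2)(s+4) = -(s^3 + 6s^2 + 8s),
# so 3s^2 + 12s + 8 = 0.
cands = np.roots([3, 12, 8]).real
breakaway = cands[(cands > -2) & (cands < 0)][0]

# Imaginary-axis crossing: closed-loop poles of s^3 + 6s^2 + 8s + K at K = 48.
poles_at_K48 = np.roots([1, 6, 8, 48])
crossing_freq = max(abs(p.imag) for p in poles_at_K48)

print(round(breakaway, 3))      # -0.845
print(round(crossing_freq, 3))  # 2.828  (= 2*sqrt(2) rad/s)
```

The closed-loop polynomial at \( K = 48 \) factors as \( (s+6)(s^2+8) \), confirming the crossing at \( \pm j2\sqrt{2} \).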

6.3 PID Controller Design via Pole Placement

The proportional-integral-derivative (PID) controller is the workhorse of industrial control. Its transfer function is

\[ C_{PID}(s) = K_P + \frac{K_I}{s} + K_D s = \frac{K_D s^2 + K_P s + K_I}{s}. \]

The integral term \( K_I/s \) adds a pole at the origin, making the system type 1 (zero steady-state error to step inputs). The derivative term \( K_D s \) adds a zero, which can attract root locus branches into the desired design region.

Pole placement approach: Specify desired dominant poles \( s_d = -\zeta\omega_n \pm j\omega_d \). For a PD controller (acting like a zero at \( s = -K_P/K_D \)), choose the zero location to satisfy the angle condition at \( s_d \):

\[ \angle G(s_d) + \angle(s_d + K_P/K_D) = \pm 180°. \]

For a PI controller (adding a pole at origin and zero at \( -K_I/K_P \)), the zero is placed close to the origin to minimally disturb the dominant poles while eliminating steady-state error.

6.3.1 Ziegler-Nichols Tuning

The Ziegler-Nichols method provides heuristic starting values for PID gains without requiring a detailed model.

Step-response method (open-loop): Apply a step to the open-loop plant and fit the response to a first-order-plus-dead-time model \( G \approx K\,e^{-Ls}/(Ts+1) \). Then:

\[ K_P = \frac{1.2T}{KL}, \quad K_I = K_P/(2L), \quad K_D = 0.5\,K_P\,L. \]

Ultimate gain method (closed-loop): Increase proportional gain (with \( K_I = K_D = 0 \)) until the system oscillates at the ultimate gain \( K_u \) with ultimate period \( T_u \). Then:

\[ K_P = 0.6\,K_u, \quad K_I = K_P/(0.5\,T_u), \quad K_D = 0.125\,K_P\,T_u. \]

Ziegler-Nichols rules typically give \( \%OS \approx 25\% \), which is often too large for the final design. They serve as a starting point for iterative refinement.
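The two tuning rules can be packaged as a small calculator. A sketch; the sample \( K_u, T_u \) and \( K, L, T \) values in the usage lines are illustrative, not taken from the text:

```python
# Ziegler-Nichols PID gains from the two experiments described above.
def zn_ultimate(Ku, Tu):
    """Closed-loop (ultimate gain) rule: Kp = 0.6 Ku, Ti = 0.5 Tu, Td = 0.125 Tu."""
    Kp = 0.6 * Ku
    Ki = Kp / (0.5 * Tu)
    Kd = 0.125 * Kp * Tu
    return Kp, Ki, Kd

def zn_step(K, L, T):
    """Open-loop rule from a first-order-plus-dead-time fit K*exp(-L s)/(T s + 1)."""
    Kp = 1.2 * T / (K * L)
    Ki = Kp / (2.0 * L)
    Kd = 0.5 * Kp * L
    return Kp, Ki, Kd

print(zn_ultimate(10.0, 2.0))   # (6.0, 6.0, 1.5)
print(zn_step(1.0, 0.5, 2.0))   # (4.8, 4.8, 1.2)
```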


Chapter 7: Frequency-Domain Analysis

7.1 Frequency Response

For an LTI system with transfer function \( G(s) \), the steady-state response to a sinusoidal input \( u(t) = A\sin(\omega t) \) is a sinusoid at the same frequency with amplitude scaled by \( |G(j\omega)| \) and phase shifted by \( \angle G(j\omega) \):

\[ y_{ss}(t) = A\,|G(j\omega)|\,\sin\!\left(\omega t + \angle G(j\omega)\right). \]

The function \( G(j\omega) \) — obtained by substituting \( s = j\omega \) into the transfer function — is the frequency response function. It is a complex-valued function of the real variable \( \omega \).

The frequency response is the fundamental object for frequency-domain design: it can be measured directly on a physical plant using a spectrum analyser, it does not require knowledge of the system order or an explicit ODE, and it leads naturally to the graphical stability analysis tools (Bode plots, Nyquist plots).

7.2 Bode Plots

A Bode plot consists of two graphs plotted against \( \omega \) on a logarithmic scale: the magnitude \( 20\log_{10}|G(j\omega)| \) in decibels (dB), and the phase \( \angle G(j\omega) \) in degrees. The logarithmic frequency axis compresses a wide dynamic range and, crucially, converts the multiplicative combination of transfer function factors into additive combination of Bode plots — each factor contributes independently.

7.2.1 Asymptotic Bode Plots for Basic Factors

Every rational transfer function is a product of four types of elementary factors:

  1. Constant gain \( K \): Magnitude is \( 20\log|K| \) dB (horizontal line); phase is \( 0° \) (or \( 180° \) for \( K < 0 \)).

  2. Integrator/differentiator \( (j\omega)^{\pm N} \): Magnitude is a straight line of slope \( \pm 20N \) dB/decade passing through 0 dB at \( \omega = 1 \); phase is \( \pm 90N° \).

  3. First-order factor \( (1 + j\omega/\omega_c)^{\pm 1} \): The asymptotic magnitude is 0 dB for \( \omega \ll \omega_c \) and slopes at \( \pm 20 \) dB/decade for \( \omega \gg \omega_c \); the two asymptotes meet at the corner frequency \( \omega_c \). The actual magnitude at \( \omega_c \) is \( \pm 3 \) dB. The phase runs from \( 0° \) to \( \pm 90° \), reaching \( \pm 45° \) at \( \omega_c \), and approximates a linear ramp of \( \pm 45°/\text{decade} \) from \( 0.1\omega_c \) to \( 10\omega_c \).

  4. Second-order factor \( [(j\omega/\omega_n)^2 + 2\zeta(j\omega/\omega_n) + 1]^{\pm 1} \): Asymptotic magnitude is 0 dB for \( \omega \ll \omega_n \) and \( \pm 40 \) dB/decade for \( \omega \gg \omega_n \). For \( \zeta < 0.707 \), a resonant peak of height \( -20\log(2\zeta\sqrt{1-\zeta^2}) \) dB appears near \( \omega_n \); the asymptotic approximation ignores this peak. Phase transitions from \( 0° \) to \( \pm 180° \), passing through \( \pm 90° \) at \( \omega_n \).

Example 7.1: Bode Plot of a Type-1 Second-Order Plant.

Sketch the Bode plot for \( G(j\omega) = 10/[j\omega(0.1j\omega + 1)] \).

Factors: Gain 10 (+20 dB), integrator \( 1/j\omega \) (−20 dB/decade slope, −90° phase), first-order lag \( 1/(0.1j\omega+1) \) (corner at \( \omega_c = 10 \) rad/s).

Magnitude: The integrator makes the low-frequency magnitude grow without bound as \( \omega \to 0 \). At \( \omega = 1 \): \( 20\log(10/1) = 20 \) dB. The slope is −20 dB/decade up to \( \omega = 10 \), then −40 dB/decade beyond. The asymptotic plot crosses 0 dB at \( \omega = 10 \); the exact gain crossover is \( \omega_{gc} \approx 7.9 \) rad/s.

Phase: \( -90° \) from the integrator, transitioning to \( -90° - 45° = -135° \) at \( \omega = 10 \) and approaching \( -180° \) at high frequencies.
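The exact magnitude and crossover can be verified numerically. A sketch that evaluates \( G(j\omega) \) from Example 7.1 directly:

```python
import numpy as np

# Numeric check of Example 7.1: G(jw) = 10 / [jw (0.1 jw + 1)].
def G(w):
    jw = 1j * w
    return 10.0 / (jw * (0.1 * jw + 1))

# Magnitude at w = 1 rad/s (close to the 20 dB asymptotic value).
mag_db_at_1 = 20 * np.log10(abs(G(1.0)))

# Exact gain crossover: |G| = 1  =>  0.01 w^4 + w^2 - 100 = 0 (quadratic in w^2).
w2 = (-1 + np.sqrt(1 + 4 * 0.01 * 100)) / (2 * 0.01)
w_gc = np.sqrt(w2)

print(round(mag_db_at_1, 2))  # 19.96 dB
print(round(w_gc, 2))         # 7.86 rad/s
```

The asymptotic estimate (0 dB at \( \omega = 10 \)) overestimates the true crossover because the corner at 10 rad/s already subtracts about 3 dB there.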

7.2.2 Minimum-Phase vs. Non-Minimum-Phase Systems

A transfer function is minimum phase if it has no zeros or poles in the ORHP. For a minimum-phase system, the magnitude Bode plot uniquely determines the phase Bode plot via the Bode relations (Hilbert transform). A non-minimum-phase system (RHP zero or delay \( e^{-Ls} \)) has more phase lag than a minimum-phase system with the same magnitude response, limiting the achievable bandwidth.

A pure time delay \( e^{-Ls} \) has constant unity magnitude but phase \( -\omega L \) (in radians), which becomes increasingly negative at high frequencies. The approximation \( e^{-Ls} \approx (1 - Ls/2)/(1 + Ls/2) \) (first-order Padé) is useful for root locus analysis.

7.3 Nyquist Stability Criterion

The Nyquist criterion is a frequency-domain test for closed-loop stability based on the open-loop frequency response \( L(j\omega) = C(j\omega)G(j\omega) \). It requires no root finding and works directly from measured frequency response data.

Nyquist Stability Criterion. Let the open-loop transfer function \( L(s) \) have \( P \) poles in the closed right half-plane. Plot the Nyquist diagram of \( L(s) \): the map of the Nyquist contour (a large clockwise semicircle enclosing the entire right half-plane) under \( L(s) \). Let \( N \) be the net number of clockwise encirclements of the point \( -1 + j0 \) by the Nyquist diagram. Then the number of closed-loop poles in the CRHP is \( Z = N + P \). For stability, \( Z = 0 \), which requires \( N = -P \) (i.e., \( P \) counter-clockwise encirclements if \( P \neq 0 \), or no encirclements if \( P = 0 \)).

For the common case of an open-loop stable plant (\( P = 0 \)) with a proportional controller, the simplified Nyquist criterion states: the closed-loop system is stable if and only if the Nyquist plot of \( L(j\omega) \) does not encircle the \( -1 \) point. The Nyquist plot is the polar plot of \( L(j\omega) \) as \( \omega \) goes from \( -\infty \) to \( +\infty \) (or equivalently, from \( 0 \) to \( +\infty \) and its mirror image).

7.4 Stability Margins

Stability margins quantify how far the system is from instability — a measure of robustness. They can be read directly from the Bode plot or the Nyquist plot.

Gain Crossover Frequency \( \omega_{gc} \). The frequency at which \( |L(j\omega_{gc})| = 1 \) (0 dB). The phase margin is defined at this frequency.

Phase Margin (PM). \( PM = 180° + \angle L(j\omega_{gc}) \). It measures how much additional phase lag at \( \omega_{gc} \) would push the phase to \( -180° \), which would cause instability.

Phase Crossover Frequency \( \omega_{pc} \). The frequency at which \( \angle L(j\omega_{pc}) = -180° \). The gain margin is defined at this frequency.

Gain Margin (GM). \( GM = -20\log_{10}|L(j\omega_{pc})| \) dB. It measures how much the gain can be increased (in dB) before instability.

For a minimum-phase system with a single gain and phase crossover, the closed-loop system is stable if and only if both \( PM > 0 \) and \( GM > 0 \). Typical design targets are \( PM \geq 45° \) and \( GM \geq 6\,\text{dB} \), which provide adequate robustness against gain variations and phase degradation (from unmodelled high-frequency dynamics, delays, or nonlinearities).

The phase margin is directly related to the closed-loop damping ratio: for a second-order system, \( PM \approx 100\zeta \) degrees for \( \zeta \lesssim 0.6 \), a useful rule of thumb for preliminary design.
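As a numeric illustration of reading margins from \( L(j\omega) \), take \( L(s) = 20/[s(s+2)(s+4)] \); the gain \( K = 20 \) is an assumed value chosen below the critical \( K = 48 \) found in Example 6.1. A sketch using a dense frequency grid:

```python
import numpy as np

# Stability margins for L(s) = 20 / [s(s+2)(s+4)].
w = np.logspace(-2, 2, 200000)
L = 20.0 / (1j * w * (1j * w + 2) * (1j * w + 4))
mag = np.abs(L)
phase_deg = np.angle(L, deg=True)

# Gain crossover: |L| = 1; phase margin = 180 + phase there.
i_gc = np.argmin(np.abs(mag - 1.0))
PM = 180.0 + phase_deg[i_gc]

# Phase crossover is at w_pc = 2*sqrt(2) (from Example 6.1); gain margin in dB.
w_pc = 2 * np.sqrt(2)
L_pc = 20.0 / (1j * w_pc * (1j * w_pc + 2) * (1j * w_pc + 4))
GM_dB = -20 * np.log10(abs(L_pc))

print(round(PM, 1))     # phase margin, roughly 25 degrees
print(round(GM_dB, 1))  # 7.6 dB (= 20*log10(48/20))
```

The gain margin of \( 20\log_{10}(48/20) \approx 7.6 \) dB confirms that raising \( K \) from 20 to 48 reaches the stability boundary.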


Chapter 8: Frequency-Domain Design

8.1 Loop Shaping Philosophy

In frequency-domain controller design, the goal is to shape the open-loop transfer function \( L(j\omega) = C(j\omega)G(j\omega) \) so that:

  • Low frequencies: \( |L(j\omega)| \gg 1 \) to ensure accurate tracking and disturbance rejection.
  • Crossover region: The magnitude slope at the gain crossover frequency \( \omega_{gc} \) is approximately \( -20 \) dB/decade (a first-order rolloff), and \( \angle L(j\omega_{gc}) \geq -135° \) to achieve a phase margin of at least \( 45° \).
  • High frequencies: \( |L(j\omega)| \ll 1 \) to attenuate measurement noise and avoid exciting unmodelled high-frequency resonances.

The compensator \( C(s) \) is designed to modify the plant Bode plot so that the shaped loop \( L(j\omega) \) meets these targets. Four standard compensator types do most of the work: phase lead, phase lag, lead-lag, and integral (PI/PID).

8.2 Phase Lead Compensator

A phase lead compensator has the transfer function

\[ C_{lead}(s) = K_c \cdot \frac{s + z}{s + p}, \quad z < p, \]

where the ratio \( \alpha = z/p < 1 \) determines the amount of phase lead. The maximum phase lead is

\[ \phi_{max} = \arcsin\!\left(\frac{1 - \alpha}{1 + \alpha}\right), \]

achieved at frequency \( \omega_{max} = \sqrt{zp} \) (the geometric mean of the zero and pole). To achieve \( \phi_{max} \) at the desired new crossover frequency, place \( \omega_{max} \) at \( \omega_{gc,new} \).

The design procedure is:

  1. Compute the required additional phase lead \( \phi_{required} = PM_{desired} - PM_{uncompensated} + 5° \) (the extra \( 5° \) accounts for the phase drop from the gain increase).
  2. From \( \phi_{max} = \phi_{required} \), solve for \( \alpha = (1 - \sin\phi_{max})/(1 + \sin\phi_{max}) \).
  3. The lead compensator contributes gain \( 1/\sqrt{\alpha} \) at \( \omega_{max} \). Find the frequency where the uncompensated \( |L_0(j\omega)| = \sqrt{\alpha} \) (−10 dB if \( \alpha = 0.1 \)); this is the new crossover and equals \( \omega_{max} \).
  4. Set the compensator zero and pole: \( z = \omega_{max}\sqrt{\alpha} \), \( p = \omega_{max}/\sqrt{\alpha} \).
  5. Adjust \( K_c \) so that the compensated crossover occurs at \( \omega_{max} \).

Lead compensation increases the crossover frequency (improves bandwidth) and adds phase margin, at the expense of some increase in high-frequency gain (more noise sensitivity).

Example 8.1: Lead Compensator for a DC Motor Position Loop.

Plant: \( G(s) = 1/[s(s+1)] \). Uncompensated \( K = 1 \). Desired: \( PM \geq 50° \), crossover \( \omega_{gc} \geq 2\,\text{rad/s} \).

Uncompensated phase at \( \omega = 2 \): \( \angle G(j2) = -90° - \arctan(2) \approx -90° - 63.4° = -153.4° \), so the phase margin available at the target crossover is \( \approx 26.6° \). Required additional lead: \( \phi_{max} \approx 50° - 26.6° + 5° = 28.4° \).

\( \alpha = (1 - \sin 28.4°)/(1 + \sin 28.4°) = (1-0.476)/(1+0.476) = 0.355 \).

Zero and pole: \( z = \omega_{max}\sqrt{\alpha} = 2\sqrt{0.355} \approx 1.19 \), \( p = \omega_{max}/\sqrt{\alpha} \approx 3.35 \) (check: \( \sqrt{1.19 \times 3.35} \approx 2 \) rad/s, so \( \phi_{max} \) lands at the new crossover).

Gain: with \( |G(j2)| = 1/(2\sqrt{5}) \approx 0.224 \) and \( |C_{lead}(j2)| = K_c\sqrt{\alpha} \), the crossover condition \( |C_{lead}(j2)\,G(j2)| = 1 \) gives \( K_c = 1/(\sqrt{\alpha}\,|G(j2)|) \approx 7.5 \).

Compensator: \( C_{lead}(s) \approx 7.5(s+1.19)/(s+3.35) \). It contributes its full \( 28.4° \) of lead at \( \omega = 2 \) rad/s, yielding \( PM \approx 26.6° + 28.4° = 55° \).
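The design numbers can be cross-checked numerically. A sketch that recomputes the example's quantities from the standard lead-design formulas:

```python
import numpy as np

# Check of Example 8.1: G(s) = 1/[s(s+1)], target crossover 2 rad/s.
phi_max = np.deg2rad(28.4)
alpha = (1 - np.sin(phi_max)) / (1 + np.sin(phi_max))

w_max = 2.0
z = w_max * np.sqrt(alpha)   # lead zero
p = w_max / np.sqrt(alpha)   # lead pole

G2 = 1.0 / (2j * (2j + 1))               # G(j2)
Kc = 1.0 / (np.sqrt(alpha) * abs(G2))    # so |C(j2) G(j2)| = 1

C2 = Kc * (2j + z) / (2j + p)
PM = 180 + np.angle(C2 * G2, deg=True)

print(round(alpha, 3))            # 0.355
print(round(z, 2), round(p, 2))   # 1.19 3.35
print(round(abs(C2 * G2), 3))     # 1.0 at the new crossover
print(round(PM, 1))               # 55.0 degrees
```

The 5° cushion in the procedure shows up as the final margin (55°) exceeding the 50° specification.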

8.3 Phase Lag Compensator

A phase lag compensator has the transfer function

\[ C_{lag}(s) = K_c \cdot \frac{s + z}{s + p}, \quad p < z, \]

where now \( \beta = z/p > 1 \). The lag compensator attenuates high frequencies, allowing the gain crossover frequency to be reduced, which increases phase margin if the phase is more favourable at lower frequencies.

The design procedure exploits the gain attenuation \( 1/\beta \) at high frequencies:

  1. Find the frequency \( \omega_{gc,new} \) where the uncompensated phase \( \angle L_0(j\omega_{gc,new}) \geq -180° + PM_{desired} + 5° \) (the extra \( 5° \) compensates for the small phase lag introduced by the lag zero and pole).
  2. Set the high-frequency compensator gain \( K_c = 1/|L_0(j\omega_{gc,new})| \) to bring the magnitude to 0 dB at \( \omega_{gc,new} \); the DC compensator gain is then \( K_c\beta \), so the low-frequency loop gain rises by the factor \( \beta \).
  3. Place the lag zero and pole well below \( \omega_{gc,new} \), typically \( z = \omega_{gc,new}/10 \), \( p = z/\beta \).

Lag compensation improves low-frequency gain (steady-state accuracy) and phase margin at the cost of reduced bandwidth.

8.3.1 PI Controller as Lag Compensator

A proportional-integral controller \( C_{PI}(s) = K_P(1 + 1/(\tau_I s)) = K_P(\tau_I s + 1)/(\tau_I s) \) is equivalent to a lag compensator with the pole placed at the origin. This raises the system type by one (a type-0 loop gains zero steady-state step error; a type-1 loop gains zero ramp error), at the cost of reduced phase margin and slower response. The integrator windup problem, where the integral state keeps accumulating while the actuator is saturated during large transients, must be addressed in implementation via anti-windup schemes.

8.4 Lead-Lag Compensator

A lead-lag compensator combines a lag stage (for low-frequency gain and steady-state accuracy) with a lead stage (for bandwidth and phase margin):

\[ C_{LL}(s) = K_c \cdot \underbrace{\frac{s + z_1}{s + p_1}}_{\text{lag, } p_1 < z_1} \cdot \underbrace{\frac{s + z_2}{s + p_2}}_{\text{lead, } z_2 < p_2}. \]

Design proceeds by treating the two stages independently: first design the lead stage to achieve the desired bandwidth and phase margin, then design the lag stage to achieve the required low-frequency gain without significantly degrading the phase margin at crossover.

A PID controller can be viewed as a lead-lag network whose lag pole is moved to the origin: the zero pair \( (s + z_1)(s + z_2)/s \) adds phase lead over a band while the integrator eliminates steady-state error.

8.5 Frequency Response Specifications

Beyond gain and phase margins, several additional frequency-domain performance indicators are defined.

Bandwidth \( \omega_{BW} \). The frequency at which the closed-loop magnitude \( |T(j\omega)| \) drops 3 dB below its DC value. The bandwidth is the most direct indicator of closed-loop speed and is approximately related to the closed-loop natural frequency: \( \omega_{BW} \approx \omega_n\sqrt{1-2\zeta^2 + \sqrt{4\zeta^4 - 4\zeta^2 + 2}} \). For \( \zeta = 0.5 \), \( \omega_{BW} \approx 1.27\omega_n \).

Resonant Peak \( M_r \). The maximum value of \( |T(j\omega)| \) over all \( \omega \), achieved at the resonant frequency \( \omega_r = \omega_n\sqrt{1-2\zeta^2} \) (for \( \zeta < 1/\sqrt{2} \)). \( M_r = 1/(2\zeta\sqrt{1-\zeta^2}) \). A large \( M_r \) indicates low damping and poor transient response.

A practically useful relationship: for a minimum-phase system, the phase margin at crossover is approximately related to the resonant peak via \( M_r \approx 1/(2\sin(PM/2)) \) for \( PM \leq 60° \). This provides a quick conversion between frequency-domain and time-domain performance indicators.
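The closed-form expressions above are easy to tabulate. A sketch implementing the bandwidth and resonant-peak formulas from this section:

```python
import numpy as np

# Second-order frequency-domain indicators as functions of zeta.
def bandwidth_ratio(zeta):
    """w_BW / w_n for the standard second-order closed loop."""
    z2 = zeta**2
    return np.sqrt(1 - 2 * z2 + np.sqrt(4 * z2**2 - 4 * z2 + 2))

def resonant_peak(zeta):
    """M_r = 1/(2 zeta sqrt(1 - zeta^2)), valid for zeta < 1/sqrt(2)."""
    return 1.0 / (2 * zeta * np.sqrt(1 - zeta**2))

print(round(bandwidth_ratio(0.5), 2))  # 1.27
print(round(resonant_peak(0.5), 3))    # 1.155
```

The \( \zeta = 0.5 \) values reproduce the text's \( \omega_{BW} \approx 1.27\,\omega_n \).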


Chapter 9: Op-Amp Implementation of Controllers

9.1 Analog Controller Realisation

A distinguishing feature of ECE 380 relative to control courses in other disciplines is the emphasis on analog implementation — realising a designed compensator \( C(s) \) as a physical op-amp circuit. This is directly relevant to applications where digital processors are unavailable or where the control bandwidth far exceeds what a microcontroller can achieve (e.g., audio electronics, power electronics, PLLs).

9.2 Inverting Op-Amp Topology

The inverting op-amp configuration provides the basic building block. With input impedance \( Z_1(s) \) and feedback impedance \( Z_f(s) \):

\[ C(s) = -\frac{Z_f(s)}{Z_1(s)}. \]

The sign inversion is addressed by cascading two inverting stages or by choosing the error-summing topology appropriately.

9.2.1 Proportional Controller

\( Z_1 = R_1 \), \( Z_f = R_f \): \( C(s) = -R_f/R_1 \). Choosing \( R_f/R_1 = K_P \) sets the proportional gain.

9.2.2 PI Controller (Lag Network)

\( Z_1 = R_1 \), \( Z_f = R_f + 1/(Cs) = (R_f C s + 1)/(Cs) \):

\[ C(s) = -\frac{R_f C s + 1}{R_1 C s} = -K_P\left(1 + \frac{1}{\tau_I s}\right), \quad K_P = \frac{R_f}{R_1}, \quad \tau_I = R_f C. \]
Example 9.1: PI Compensator Design.

Design an op-amp PI controller with \( K_P = 5 \) and \( \tau_I = 0.1\,\text{s} \).

Choose \( C = 1\,\mu\text{F} \). Then \( R_f = \tau_I/C = 0.1/(10^{-6}) = 100\,\text{k}\Omega \) and \( R_1 = R_f/K_P = 100000/5 = 20\,\text{k}\Omega \). Use standard \( 1\% \) tolerance resistors: \( R_f = 100\,\text{k}\Omega \), \( R_1 = 20\,\text{k}\Omega \), \( C = 1\,\mu\text{F} \). The output sign inversion can be corrected by following with a unity-gain inverter (a second op-amp with equal input and feedback resistors).
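The component arithmetic generalises to a small helper. A sketch; picking \( C = 1\,\mu\text{F} \) first and rounding to E-series values afterwards are assumed implementation choices:

```python
# Component selection for the inverting PI stage of Section 9.2.2.
def pi_components(Kp, tau_I, C=1e-6):
    """Given Kp = Rf/R1 and tau_I = Rf*C, return (R1, Rf) in ohms."""
    Rf = tau_I / C
    R1 = Rf / Kp
    return R1, Rf

R1, Rf = pi_components(Kp=5.0, tau_I=0.1)
print(R1, Rf)  # 20000.0 100000.0  ->  20 kOhm and 100 kOhm, as in Example 9.1
```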

9.2.3 PD Controller with Practical Derivative

Pure derivative \( Z_1 = 1/(Cs) \), \( Z_f = R_f \) gives \( C(s) = -R_f C s \), which amplifies noise without bound. In practice, add a small resistor \( R_d \) in series with the input capacitor:

\[ Z_1 = R_d + 1/(Cs) = (R_d C s + 1)/(Cs), \quad C(s) = -\frac{R_f C s}{R_d C s + 1} = -K_D \frac{s}{s + 1/\tau_d}, \]

where \( K_D = R_f/R_d \) and \( \tau_d = R_d C \). The pole at \( -1/\tau_d \) limits high-frequency gain.

9.2.4 Lead Compensator as RC Network

A lead compensator \( C_{lead}(s) = K_c(s+z)/(s+p) \) with \( z < p \) can be realised with an inverting op-amp stage whose input and feedback impedances are parallel RC pairs (\( Z_1 = R_1 \| 1/(C_1 s) \), \( Z_f = R_f \| 1/(C_f s) \)), giving a zero at \( 1/(R_1 C_1) \) and a pole at \( 1/(R_f C_f) \). A lag compensator (\( p < z \)) uses the same topology with the time constants reversed, or a passive RC network. The Bode plot of the target compensator drives the component value selection.


Chapter 10: State-Space Representation

10.1 State-Space Models

While the transfer function provides a compact input-output description, it hides the internal structure of the system. The state-space representation reveals this structure and forms the foundation for modern (post-1960) control theory.

State-Space Model. A linear, time-invariant system is described by \[ \dot{\mathbf{x}}(t) = A\,\mathbf{x}(t) + B\,u(t), \]\[ y(t) = C\,\mathbf{x}(t) + D\,u(t), \]

where \( \mathbf{x}(t) \in \mathbb{R}^n \) is the state vector, \( u(t) \in \mathbb{R} \) is the scalar input, \( y(t) \in \mathbb{R} \) is the scalar output, \( A \in \mathbb{R}^{n \times n} \) is the system matrix, \( B \in \mathbb{R}^{n \times 1} \) is the input matrix, \( C \in \mathbb{R}^{1 \times n} \) is the output matrix, and \( D \in \mathbb{R} \) is the feedthrough term.

The transfer function is recovered by Laplace-transforming the state equations:

\[ G(s) = C(sI - A)^{-1}B + D. \]

The poles of \( G(s) \) are a subset of the eigenvalues of \( A \) (all of them, unless there are pole-zero cancellations).

A given transfer function can be realised in many different state-space forms (controllable canonical form, observable canonical form, modal form, etc.). The controllable canonical form places the characteristic polynomial coefficients directly in the last row of \( A \), which simplifies pole placement calculations.
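A quick numeric sanity check of \( G(s) = C(sI - A)^{-1}B + D \), using the controllable canonical form of an illustrative plant \( G(s) = 1/(s^2 + 3s + 2) \) (not a plant from the text):

```python
import numpy as np

# Controllable canonical form of G(s) = 1/(s^2 + 3s + 2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # last row holds -a0, -a1
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = 0.0

def G_ss(s):
    """Evaluate C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return (C @ np.linalg.inv(s * np.eye(n) - A) @ B)[0, 0] + D

s0 = 1.0 + 1.0j
direct = 1.0 / (s0**2 + 3 * s0 + 2)
print(abs(G_ss(s0) - direct) < 1e-12)  # True: both descriptions agree

# The eigenvalues of A are the poles, close to -1 and -2.
print(np.sort(np.linalg.eigvals(A).real))
```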

10.2 Controllability and Observability

Two structural properties — controllability and observability — determine whether state feedback and state estimation are possible.

Controllability. The system \( (A, B) \) is controllable if, for any initial state \( \mathbf{x}(0) \) and any target state \( \mathbf{x}_f \), there exists a finite-time input \( u(t) \) that drives the system from \( \mathbf{x}(0) \) to \( \mathbf{x}_f \). Equivalently, the controllability matrix \[ \mathcal{C} = \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1}B \end{bmatrix} \]

has full rank (rank \( n \)).

Observability. The system \( (A, C) \) is observable if, given the input \( u(t) \) and output \( y(t) \) over a finite time interval, the initial state \( \mathbf{x}(0) \) can be uniquely determined. Equivalently, the observability matrix \[ \mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix} \]

has full rank (rank \( n \)).

Controllability is dual to observability: the pair \( (A, B) \) is controllable if and only if \( (A^T, B^T) \) is observable. This duality means that design techniques for state feedback extend directly, by transposition, to observer design.
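The rank tests translate directly to code. A sketch with helper names mirroring the conventional `ctrb`/`obsv`; the example matrices are illustrative:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^{n-1}]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

controllable = np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]
observable = np.linalg.matrix_rank(obsv(A, C)) == A.shape[0]
print(controllable, observable)  # True True
```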

10.3 State Feedback and Pole Placement

If the system is controllable, one can design a state feedback law \( u(t) = -\mathbf{K}\mathbf{x}(t) + r(t) \) (where \( \mathbf{K} \in \mathbb{R}^{1 \times n} \) is the feedback gain vector and \( r(t) \) is the reference input) to place the closed-loop poles at any desired locations. The closed-loop system matrix becomes \( A - B\mathbf{K} \), and its eigenvalues are the closed-loop poles.

The design procedure (Ackermann’s formula or direct matching) computes \( \mathbf{K} \) such that

\[ \det(sI - A + B\mathbf{K}) = (s - p_1)(s - p_2)\cdots(s-p_n), \]

where \( p_1, \ldots, p_n \) are the desired closed-loop poles, chosen using the transient specification methodology of Chapter 4.

Ackermann's Formula. For a controllable single-input system: \[ \mathbf{K} = \mathbf{e}_n^T \mathcal{C}^{-1} \phi(A), \]

where \( \mathbf{e}_n^T = [0 \;\cdots\; 0 \; 1] \) and \( \phi(A) \) is the desired characteristic polynomial evaluated at the matrix \( A \): \( \phi(A) = A^n + \alpha_{n-1}A^{n-1} + \cdots + \alpha_0 I \), with \( \phi(s) = (s-p_1)\cdots(s-p_n) \).
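Ackermann's formula is short enough to implement directly. A sketch; the state-space realisation of \( G(s) = 10/[s(s+2)] \) and the poles \( -4 \pm j5.3 \) are borrowed from the Chapter 11 design example for illustration:

```python
import numpy as np

def acker(A, B, desired_poles):
    """Ackermann's formula K = e_n^T C^{-1} phi(A) for a single-input system."""
    n = A.shape[0]
    Cmat = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    coeffs = np.poly(desired_poles)  # [1, a_{n-1}, ..., a_0]
    phiA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(Cmat) @ phiA

# Plant x1' = x2, x2' = -2 x2 + 10 u, i.e. G(s) = 10/[s(s+2)].
A = np.array([[0.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [10.0]])
K = acker(A, B, [-4 + 5.3j, -4 - 5.3j])
print(np.round(K, 3))

# Verify: eigenvalues of A - B K land at the desired poles -4 +/- j5.3.
print(np.sort_complex(np.linalg.eigvals(A - B @ K)))
```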

State feedback requires measuring (or estimating) the entire state vector, which may not be physically accessible. The Luenberger observer solves this problem.

10.4 State Observers (Luenberger Observer)

Luenberger Observer. Given the observable system \( (A, B, C) \), the full-order state observer reconstructs the state as \( \hat{\mathbf{x}} \) using \[ \dot{\hat{\mathbf{x}}} = A\hat{\mathbf{x}} + Bu + L(y - C\hat{\mathbf{x}}) = (A - LC)\hat{\mathbf{x}} + Bu + Ly, \]

where \( L \in \mathbb{R}^{n \times 1} \) is the observer gain vector. The estimation error \( \mathbf{e} = \mathbf{x} - \hat{\mathbf{x}} \) satisfies

\[ \dot{\mathbf{e}} = (A - LC)\mathbf{e}. \]

The observer gain \( L \) is chosen so that \( A - LC \) has all eigenvalues in the OLHP, making the estimation error converge to zero exponentially. The observer pole placement problem is dual to the state feedback pole placement: compute \( L \) such that \( A - LC \) has the desired observer poles (typically placed two to five times further left than the controller poles so that estimation errors decay before significantly affecting the closed-loop response).

10.5 Separation Principle

Separation Principle. For a controllable and observable LTI system, the state feedback gain \( \mathbf{K} \) and the observer gain \( L \) can be designed independently. The combined controller-observer system has closed-loop eigenvalues equal to the union of the state feedback eigenvalues (eigenvalues of \( A - B\mathbf{K} \)) and the observer eigenvalues (eigenvalues of \( A - LC \)). That is, the two designs do not interfere with each other.

The proof follows directly from writing the closed-loop equations for the combined system in a transformed coordinate system \( (\mathbf{x}, \mathbf{e}) \): the \( A \) matrix for the combined system is block upper triangular with diagonal blocks \( A - B\mathbf{K} \) and \( A - LC \), so the eigenvalues are simply the union of the two sets.

Practical implications. The separation principle justifies the standard engineering workflow: (1) choose the desired closed-loop poles based on transient specifications; (2) compute the state feedback gain \( \mathbf{K} \) using Ackermann's formula; (3) choose observer poles (typically faster by a factor of 2–5 to ensure the observer tracks the state before it is needed by the controller); (4) compute the observer gain \( L \) dually; (5) implement the combined controller-observer as a dynamic compensator \( C(s) \) that can be realised in hardware (analog) or software (digital). The resulting compensator has the same form as the classical controllers of Chapters 8 and 9 but is derived systematically from a state-space model.
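The block-triangular structure behind the separation principle can be exhibited numerically. A sketch; the plant and the gains \( \mathbf{K} \), \( L \) below are illustrative values chosen to be stabilising:

```python
import numpy as np

# Separation principle check: eigenvalues of the combined controller-observer
# loop equal the union of eig(A - BK) and eig(A - LC).
A = np.array([[0.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[8.0, 4.0]])      # eig(A - BK) = {-2, -4}
L = np.array([[10.0], [20.0]])  # eig(A - LC) = {-6 +/- 2j}

# In (x, e) coordinates: [x'; e'] = [[A - BK, BK], [0, A - LC]] [x; e].
top = np.hstack([A - B @ K, B @ K])
bot = np.hstack([np.zeros((2, 2)), A - L @ C])
Acl = np.vstack([top, bot])

combined = np.sort_complex(np.linalg.eigvals(Acl))
union = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                        np.linalg.eigvals(A - L @ C)]))
print(np.allclose(combined, union))  # True
```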

Chapter 11: Synthesis and Design Examples

11.1 Integrating Classical and State-Space Design

Classical (transfer function / frequency-domain) design and modern (state-space) design are complementary, not competing, frameworks. Classical methods offer intuitive graphical tools — root locus and Bode plots — that expose the trade-offs between performance, stability, and robustness in terms that a practising engineer can visualise and reason about. State-space methods offer systematic synthesis of multi-variable controllers and observers, guaranteed stability margins via the algebraic Riccati equation (in the LQR setting), and a natural framework for simulation and implementation. An ECE 380 graduate is expected to be fluent in both languages.

11.2 Complete Design Example: DC Motor Position Control

Plant. A DC motor with transfer function (position output, voltage input):

\[ G(s) = \frac{10}{s(s + 2)}. \]

Specifications. Closed-loop \( \%OS \leq 10\% \), settling time \( t_s \leq 1\,\text{s} \) (2% criterion), zero steady-state position error to a step reference.

Step 1: Translate Specifications to s-Plane Requirements

\( \%OS \leq 10\% \Rightarrow \zeta \geq 0.591 \). Radial lines at \( \pm \arccos(0.591) \approx \pm 53.8° \).

\( t_s \leq 1\,\text{s} \Rightarrow \sigma = \zeta\omega_n \geq 4/1 = 4\,\text{s}^{-1} \). Vertical line at \( \text{Re}(s) = -4 \).

A suitable dominant pair: \( s_d = -4 \pm j5.3 \) (\( \zeta = 0.603 \), \( \omega_n = 6.63 \), \( \%OS \approx 9.5\% \), \( t_s \approx 1\,\text{s} \)).

The plant already has a pole at the origin, making the system Type 1 (zero steady-state step error is automatic for unity feedback).

Step 2: Root Locus Analysis

With proportional control \( C(s) = K \), the closed-loop characteristic polynomial is \( s(s+2) + 10K = s^2 + 2s + 10K \). The two root locus branches meet at the breakaway point \( s = -1 \) and then depart vertically: for \( 10K > 1 \) the poles are \( s = -1 \pm j\sqrt{10K-1} \). The real part is fixed at \( -1 \), so proportional control alone cannot place poles at \( \text{Re}(s) = -4 \). A compensator is required.

Step 3: Lead Compensator Design via Root Locus

Add a zero at \( s = -5 \) and a pole at \( s = -20 \) (well beyond the desired region): \( C_{lead}(s) = K(s+5)/(s+20) \).

Angle check at \( s_d = -4 + j5.3 \):

\[ \angle G(s_d)C_{lead}(s_d) = \angle\frac{10(s+5)}{s(s+2)(s+20)}\bigg|_{s=-4+j5.3}. \]

Computing each angle contribution: \( \angle s_d = \angle(-4+j5.3) = 180° - \arctan(5.3/4) = 180° - 52.9° = 127.1° \) (measured from the origin pole), \( \angle(s_d + 2) = \angle(-2+j5.3) = 180° - \arctan(5.3/2) \approx 180° - 69.3° = 110.7° \), \( \angle(s_d + 20) = \angle(16+j5.3) = \arctan(5.3/16) \approx 18.3° \), \( \angle(s_d + 5) = \angle(1+j5.3) \approx \arctan(5.3/1) = 79.3° \).

Total angle: \( -127.1° - 110.7° - 18.3° + 79.3° = -176.8° \approx -180° \). The angle condition is satisfied (within rounding). Therefore \( s_d \) lies on the root locus.
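The angle and magnitude conditions at \( s_d \) can be evaluated numerically. A sketch that recomputes both from the compensated loop:

```python
import numpy as np

# Step 3 check: L(s) = K * 10 (s+5) / [s (s+2) (s+20)] at s_d = -4 + 5.3j.
sd = -4 + 5.3j
GC_over_K = 10 * (sd + 5) / (sd * (sd + 2) * (sd + 20))

angle_deg = np.angle(GC_over_K, deg=True)
print(round(angle_deg, 1))  # -176.8: close to -180, so s_d is nearly on the locus

K = 1.0 / abs(GC_over_K)    # magnitude condition |L(s_d)| = 1
print(round(K, 1))          # roughly 11.8
```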

Gain: The magnitude condition \( |L(s_d)| = 1 \) gives \( K = |s_d|\,|s_d+2|\,|s_d+20| / (10\,|s_d+5|) \approx 11.8 \).

Verification: The compensated open-loop is \( L(s) = 11.8 \cdot 10(s+5)/[s(s+2)(s+20)] \). The gain margin and phase margin can be confirmed from the Bode plot of \( L(j\omega) \).

Step 4: Op-Amp Realisation

The lead compensator \( C_{lead}(s) = 11.8(s+5)/(s+20) \) is realised with an inverting op-amp stage:

\[ Z_1(s) = R_1 \| (1/C_1 s) = \frac{R_1}{R_1 C_1 s + 1}, \quad Z_f(s) = R_f, \]

giving \( C(s) = -Z_f/Z_1 = -(R_f/R_1)(R_1 C_1 s + 1) = -R_f C_1\,(s + 1/(R_1 C_1)) \). This is a PD stage: it supplies the zero but no pole. To get both, place a parallel RC pair in the feedback path as well, \( Z_1 = R_1 \| (1/(C_1 s)) \) and \( Z_f = R_f \| (1/(C_f s)) \):

\[ C(s) = -\frac{R_f/(R_f C_f s + 1)}{R_1/(R_1 C_1 s + 1)} = -\frac{R_f}{R_1}\cdot\frac{R_1 C_1 s + 1}{R_f C_f s + 1} = -\frac{C_1}{C_f}\cdot\frac{s + 1/(R_1 C_1)}{s + 1/(R_f C_f)}. \]

For the target \( C_{lead}(s) \propto (s+5)/(s+20) \): set the zero \( 1/(R_1 C_1) = 5 \) and the pole \( 1/(R_f C_f) = 20 \). Choosing \( C_1 = C_f = 0.1\,\mu\text{F} \) gives \( R_1 = 1/(5 \times 10^{-7}) = 2\,\text{M}\Omega \) (too large; scale down by choosing \( C_1 = C_f = 1\,\mu\text{F} \): \( R_1 = 200\,\text{k}\Omega \), \( R_f = 50\,\text{k}\Omega \)). This stage has high-frequency gain \( C_1/C_f = 1 \), so the overall gain found in Step 3 (and the sign inversion) is supplied by a second inverting stage.


11.3 Key Formulae Reference

The table below collects the central formulae for quick reference during problem solving.

  Closed-loop TF (unity feedback): \( T(s) = G(s)C(s)/[1+G(s)C(s)] \)
  Sensitivity function: \( S(s) = 1/[1+L(s)] \)
  Second-order \( \%OS \): \( 100\,e^{-\pi\zeta/\sqrt{1-\zeta^2}} \)
  Peak time: \( t_p = \pi/\omega_d \), \( \omega_d = \omega_n\sqrt{1-\zeta^2} \)
  Settling time (2%): \( t_s \approx 4/(\zeta\omega_n) \)
  Max phase lead: \( \phi_{max} = \arcsin[(1-\alpha)/(1+\alpha)] \) at \( \omega_{max} = \sqrt{zp} \)
  Phase margin approx.: \( PM \approx 100\zeta \) degrees (valid for \( \zeta \lesssim 0.6 \))
  Resonant peak: \( M_r = 1/(2\zeta\sqrt{1-\zeta^2}) \)
  Position error constant: \( K_p = \lim_{s\to 0} L(s) \), \( e_{ss} = 1/(1+K_p) \)
  Velocity error constant: \( K_v = \lim_{s\to 0} sL(s) \), \( e_{ss} = 1/K_v \)
  Controllability matrix: \( \mathcal{C} = [B\;AB\;\cdots\;A^{n-1}B] \)
  Observability matrix: \( \mathcal{O} = [C^T\;A^TC^T\;\cdots\;(A^T)^{n-1}C^T]^T \)
  Ackermann’s formula: \( \mathbf{K} = \mathbf{e}_n^T\mathcal{C}^{-1}\phi(A) \)
  Observer error dynamics: \( \dot{\mathbf{e}} = (A - LC)\mathbf{e} \)
