SE 380: Introduction to Feedback Control
Yash Pant
Estimated study time: 1 hr 41 min
Sources and References
Primary reference — Chris Nielsen, ECE 380 Course Notes, University of Waterloo (public course notes).
Supplementary — Richard C. Dorf & Robert H. Bishop, Modern Control Systems, 14th ed., Pearson, 2022.
Online resources — MIT OCW 6.302 (Feedback System Design); Karl J. Åström & Richard M. Murray, Feedback Systems: An Introduction for Scientists and Engineers, 2nd ed., Princeton University Press (open-access at fbsbook.org); Katsuhiko Ogata, Modern Control Engineering, 5th ed., Prentice Hall; Gene F. Franklin, J. David Powell & Abbas Emami-Naeini, Feedback Control of Dynamic Systems, 8th ed., Pearson.
Chapter 1: Introduction to Feedback Control
1.1 Why Control?
Every engineered system exists in a world that resists our intentions. A car’s engine must maintain a desired speed despite hills, wind, and varying payloads. A chemical reactor must hold a precise temperature despite fluctuating feed concentrations. A quadrotor drone must stay level despite gusts that knock it sideways. The discipline of control engineering is concerned with designing systems — usually involving sensors, actuators, and a computational element — that force physical plants to behave in ways they would not naturally behave on their own.
The word “control” in everyday speech means many things, but in engineering it refers to a specific technical activity: choosing inputs to a dynamical system so that its outputs track a desired reference signal, reject disturbances, and remain stable. Control theory provides the mathematical tools to analyse whether a given design achieves these goals and to synthesise new designs when it does not.
1.2 Open-Loop vs. Closed-Loop Control
The most fundamental architectural choice in control system design is whether to use feedback.
1.2.1 Open-Loop Control
In an open-loop (feedforward) architecture, the controller computes its output solely from the reference input, without measuring the actual plant output. A simple example is a toaster: the user sets a timer, and the heating element runs for that duration regardless of whether the bread is actually toasted. Another example is a stepper motor driven by a fixed pulse sequence in a 3D printer — there is no encoder measuring actual position.
The strength of open-loop control is its simplicity. The weakness is that it cannot correct for disturbances, model uncertainty, or parameter variations. If the plant changes (the bread is thicker, the motor skips a step), the open-loop controller has no way to know.
1.2.2 Closed-Loop (Feedback) Control
In a closed-loop architecture, the controller measures the plant output, compares it to the desired reference, forms an error signal, and computes a corrective input. This is feedback: information flows from the plant output back to the controller input.
The canonical feedback loop is:
\[ e(t) = r(t) - y(t), \qquad u(t) = C(e(t)), \qquad y(t) = P(u(t)) \]where \( r(t) \) is the reference (setpoint), \( y(t) \) is the measured output, \( e(t) \) is the tracking error, \( u(t) \) is the control input (actuator signal), \( C \) denotes the controller, and \( P \) denotes the plant.
1.2.3 Advantages of Feedback
Feedback confers four principal advantages that open-loop control cannot provide:
Disturbance rejection. If an external disturbance pushes the output away from the reference, the feedback loop detects the resulting error and applies a corrective input.
Robustness to model uncertainty. Even if the plant’s parameters are poorly known or slowly time-varying, the closed-loop system may still perform acceptably because the controller continuously corrects for deviations.
Stabilisation of unstable plants. Some plants are inherently unstable — an inverted pendulum, for example, will fall without active control. Feedback can stabilise such systems.
Improved transient response. Feedback can be designed to make the output respond more quickly or with less overshoot than the plant would exhibit naturally.
These advantages come at a cost: feedback can also destabilise a system if poorly designed, and sensors introduce noise into the loop. The art of control engineering lies in capturing the benefits while managing the costs.
1.3 Motivating Examples
1.3.1 Cruise Control
A vehicle’s cruise control system measures vehicle speed \( v \) via a wheel-speed sensor, compares it to the driver’s desired speed \( v_d \), and commands the throttle (and sometimes the brakes) accordingly. The plant is the vehicle’s longitudinal dynamics, which includes engine torque, aerodynamic drag, and road slope. A simple model is:
\[ m\dot{v} = F_{\text{engine}}(u) - F_{\text{drag}}(v) - mg\sin\theta \]where \( m \) is vehicle mass, \( u \) is the throttle command, and \( \theta \) is the road angle. Cruise control must reject disturbances (hills, headwinds) and handle model uncertainty (passenger load, tyre pressure). A purely open-loop system would require knowing \( \theta \) in advance and modelling the engine perfectly — impractical. A feedback controller solves both problems naturally.
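This disturbance-rejection behaviour is easy to see numerically. The sketch below forward-Euler integrates the longitudinal model under a simple proportional feedback law; all numeric values (mass, drag coefficient, gain, the 3° hill appearing at t = 30 s) are illustrative choices, not taken from the text.

```python
import math

# Forward-Euler simulation of m*dv/dt = u - b*v - m*g*sin(theta)
# under proportional feedback u = Kp*(v_d - v).
# All parameter values below are illustrative.
m, b, g = 1200.0, 50.0, 9.81       # mass [kg], linear drag [N*s/m], gravity
Kp = 2000.0                        # proportional gain [N per m/s of error]
v_d = 25.0                         # desired speed [m/s]
dt, T = 0.01, 60.0

v = 20.0                           # initial speed [m/s]
for k in range(int(T / dt)):
    t = k * dt
    theta = math.radians(3.0) if t > 30.0 else 0.0   # hill starts at t = 30 s
    u = Kp * (v_d - v)                               # feedback law
    v += dt * (u - b * v - m * g * math.sin(theta)) / m

# Proportional-only control leaves a small steady-state error ("droop"),
# which grows slightly on the hill because Kp*e must hold the disturbance.
print(round(v, 2))
```

Note that the controller never needs to know the road angle: the error signal carries all the information required to push back against the hill, at the cost of a small residual error (removed later by integral action, Chapter 5).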
1.3.2 Thermostat
A home thermostat is perhaps the oldest and most ubiquitous feedback controller. It measures room temperature \( T \), compares it to the setpoint \( T_{\text{set}} \), and switches the furnace or air conditioner on or off. This is an example of bang-bang (on/off) control — a nonlinear control law — rather than proportional control. A small hysteresis band around the setpoint prevents rapid on-off chattering; the result is that room temperature oscillates slowly within that band, but the average tracking error is small.
1.3.3 Autonomous Driving
A self-driving vehicle must simultaneously control lateral position (steering), longitudinal velocity (throttle/brakes), and heading. Each of these is a closed-loop control problem. For lane keeping, the controller measures the vehicle’s lateral deviation from the lane centre using cameras or lidar, and steers to reduce that deviation. The difficulties are that the plant dynamics (tyre–road interaction) are nonlinear, the environment changes rapidly, and the system must operate safely across a vast range of conditions. Modern autonomous driving systems use a hierarchy of control loops operating at different timescales.
Chapter 2: Mathematical Modelling of Dynamical Systems
2.1 Nonlinear State-Space Models
Physical systems are typically described by ordinary differential equations (ODEs) derived from first principles — Newton’s laws, Kirchhoff’s laws, conservation of energy, and so forth. The standard form for an \( n \)-dimensional dynamical system is:
\[ \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\, u(t)) \]\[ y(t) = \mathbf{h}(\mathbf{x}(t),\, u(t)) \]where \( \mathbf{x}(t) \in \mathbb{R}^n \) is the state vector, \( u(t) \in \mathbb{R} \) is the scalar control input, \( y(t) \in \mathbb{R} \) is the scalar output, \( \mathbf{f}: \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n \) is the state dynamics function, and \( \mathbf{h}: \mathbb{R}^n \times \mathbb{R} \to \mathbb{R} \) is the output map.
Example 2.1 — Cruise Control State-Space Model.
Model the vehicle as a point mass \( m \) subject to a propulsive force \( u \) (the throttle input, treated as a force for simplicity) and aerodynamic drag \( b v \) where \( b > 0 \) is a drag coefficient. Newton’s second law gives:
\[ m\dot{v} = u - bv \]Taking the state \( x = v \) (scalar), we have \( \dot{x} = -\frac{b}{m}x + \frac{1}{m}u \) and output \( y = x = v \). This is already linear — a happy coincidence that makes cruise control analysis straightforward.
Example 2.2 — Inverted Pendulum on a Cart.
Let \( \theta \) be the angle of the pendulum from the upright vertical (so \( \theta = 0 \) is the desired unstable equilibrium), and let \( x_c \) be the cart position. With appropriate simplifications (massless rod, cart driven by a force input \( u \)), the pendulum angle satisfies:
\[ \ddot{\theta} = \frac{g}{L}\sin\theta - \frac{u}{mL}\cos\theta \]Taking state \( \mathbf{x} = [\theta,\, \dot{\theta}]^T \), the nonlinear state-space form is:
\[ \dot{x}_1 = x_2, \qquad \dot{x}_2 = \frac{g}{L}\sin x_1 - \frac{u}{mL}\cos x_1 \]This is nonlinear in \( x_1 \) and must be linearised before applying classical control techniques.
2.2 Equilibrium Points and Linearisation
Most practical control systems operate near a desired operating condition. Classical control theory is built on the assumption of linearity, so the standard approach is to linearise the nonlinear model around an equilibrium point.
Definition — Equilibrium Point. A state \( \mathbf{x}^* \) is an equilibrium point corresponding to constant input \( u^* \) if \( \mathbf{f}(\mathbf{x}^*, u^*) = \mathbf{0} \). At an equilibrium, \( \dot{\mathbf{x}} = \mathbf{0} \), so the state does not change.
2.2.1 Linearisation via Taylor Expansion
Derivation — Linearisation via First-Order Taylor Expansion.
Let \( \delta\mathbf{x}(t) = \mathbf{x}(t) - \mathbf{x}^* \) and \( \delta u(t) = u(t) - u^* \) be small perturbations from equilibrium. Expanding \( \mathbf{f}(\mathbf{x}, u) \) in a Taylor series about \( (\mathbf{x}^*, u^*) \):
\[ \mathbf{f}(\mathbf{x}, u) = \mathbf{f}(\mathbf{x}^*, u^*) + \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{*} \delta\mathbf{x} + \left.\frac{\partial \mathbf{f}}{\partial u}\right|_{*} \delta u + O(\|\delta\mathbf{x}\|^2, \delta u^2) \]Since \( \mathbf{f}(\mathbf{x}^*, u^*) = \mathbf{0} \), and dropping higher-order terms:
\[ \dot{\delta\mathbf{x}} = A\,\delta\mathbf{x} + B\,\delta u \]where the Jacobian matrices are:
\[ A = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{(\mathbf{x}^*, u^*)}, \qquad B = \left.\frac{\partial \mathbf{f}}{\partial u}\right|_{(\mathbf{x}^*, u^*)} \]Similarly, the output linearises to \( \delta y = C\,\delta\mathbf{x} + D\,\delta u \) where \( C = \frac{\partial \mathbf{h}}{\partial \mathbf{x}}\big|_* \) and \( D = \frac{\partial \mathbf{h}}{\partial u}\big|_* \). This is valid only for small \( \delta\mathbf{x} \) and \( \delta u \).
Example 2.3 — Linearising the Inverted Pendulum.
At the upright equilibrium \( \mathbf{x}^* = [0, 0]^T \), \( u^* = 0 \). Computing the Jacobian:
\[ A = \begin{bmatrix} 0 & 1 \\ g/L & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ -1/(mL) \end{bmatrix} \]using \( \sin\theta \approx \theta \) and \( \cos\theta \approx 1 \) near \( \theta = 0 \). The linearised system is:
\[ \dot{\delta\mathbf{x}} = A\,\delta\mathbf{x} + B\,\delta u \]The eigenvalues of \( A \) are \( \pm\sqrt{g/L} \), confirming that the upright equilibrium is unstable (one positive eigenvalue).
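The Jacobians can be verified numerically: a central finite-difference linearisation of the pendulum dynamics at the upright equilibrium should reproduce \( A \) and \( B \) above. The parameter values \( g = 9.81 \), \( L = 1 \), \( m = 1 \) are illustrative.

```python
import math

# Numerically linearise the pendulum dynamics f(x, u) at the upright
# equilibrium and compare with the analytic A = [[0,1],[g/L,0]],
# B = [0, -1/(m*L)].  Parameter values are illustrative.
g, L, m = 9.81, 1.0, 1.0

def f(x, u):
    th, om = x
    return [om, (g / L) * math.sin(th) - (u / (m * L)) * math.cos(th)]

eps = 1e-6
x_star, u_star = [0.0, 0.0], 0.0

# Central differences, one state column at a time.
A = [[0.0, 0.0], [0.0, 0.0]]
for j in range(2):
    xp, xm = list(x_star), list(x_star)
    xp[j] += eps
    xm[j] -= eps
    fp, fm = f(xp, u_star), f(xm, u_star)
    for i in range(2):
        A[i][j] = (fp[i] - fm[i]) / (2 * eps)

# Input column: perturb u.
B = [(fp_i - fm_i) / (2 * eps)
     for fp_i, fm_i in zip(f(x_star, u_star + eps), f(x_star, u_star - eps))]

print(A)   # approximately [[0, 1], [g/L, 0]]
print(B)   # approximately [0, -1/(m*L)]
```

The same finite-difference recipe applies to any \( \mathbf{f} \), which is how linearisation is usually done in software when an analytic Jacobian is inconvenient.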
2.3 Linear Time-Invariant (LTI) Systems
A linear time-invariant system in state-space form is:
\[ \dot{\mathbf{x}}(t) = A\mathbf{x}(t) + Bu(t) \]\[ y(t) = C\mathbf{x}(t) + Du(t) \]where \( A \in \mathbb{R}^{n\times n} \), \( B \in \mathbb{R}^{n\times 1} \), \( C \in \mathbb{R}^{1\times n} \), \( D \in \mathbb{R} \). The solution to this ODE is given by the variation-of-parameters formula:
\[ \mathbf{x}(t) = e^{At}\mathbf{x}(0) + \int_0^t e^{A(t-\tau)} B u(\tau)\,d\tau \]where \( e^{At} = \sum_{k=0}^\infty \frac{(At)^k}{k!} \) is the matrix exponential.
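The series definition of the matrix exponential translates directly into code. The sketch below sums the truncated series for a 2×2 matrix; the double-integrator matrix \( A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \) is a convenient test because \( A^2 = 0 \), so \( e^{At} = I + At \) exactly.

```python
# Truncated power-series evaluation of e^{At} = sum_k (At)^k / k!
# for 2x2 matrices stored as plain nested lists.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    term = [[1.0, 0.0], [0.0, 1.0]]            # (At)^0 / 0! = I
    total = [row[:] for row in term]
    for k in range(1, terms):
        term = mat_mul(term, At)               # multiply by At ...
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]  # ... / k
        total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return total

# Double integrator: A^2 = 0, so the series terminates and
# e^{At} = [[1, t], [0, 1]] exactly.
E = expm([[0.0, 1.0], [0.0, 0.0]], 2.5)
print(E)   # -> [[1.0, 2.5], [0.0, 1.0]]
```

Production code would use a scaling-and-squaring routine rather than the raw series, but for well-scaled matrices the series above is adequate for checking hand calculations.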
2.4 Laplace Transform Review
The Laplace transform converts differential equations into algebraic equations, which are far easier to manipulate. For a signal \( f(t) \) defined for \( t \geq 0 \):
\[ F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt \]Key transform pairs used throughout this course:
| Time-domain signal | Laplace transform |
|---|---|
| \( \delta(t) \) (impulse) | \( 1 \) |
| \( 1(t) \) (unit step) | \( 1/s \) |
| \( e^{at}1(t) \) | \( 1/(s-a) \) |
| \( \sin(\omega t)1(t) \) | \( \omega/(s^2+\omega^2) \) |
| \( \cos(\omega t)1(t) \) | \( s/(s^2+\omega^2) \) |
| \( t^n 1(t) \) | \( n!/s^{n+1} \) |
Key properties: linearity, differentiation (\( \mathcal{L}\{\dot{f}\} = sF(s) - f(0^-) \)), integration (\( \mathcal{L}\{\int_0^t f\} = F(s)/s \)), and convolution (\( \mathcal{L}\{f * g\} = F(s)G(s) \)).
Theorem — Final Value Theorem (FVT). If \( f(t) \) has a Laplace transform \( F(s) \) and if \( \lim_{t\to\infty} f(t) \) exists (equivalently, if \( sF(s) \) has all poles in the open left half-plane), then:
\[ \lim_{t\to\infty} f(t) = \lim_{s\to 0} s F(s) \]Proof of the Final Value Theorem.
Start from the Laplace transform of the derivative:
\[ \int_0^\infty \dot{f}(t)\,e^{-st}\,dt = sF(s) - f(0) \]Take the limit \( s \to 0^+ \) (approaching from the right half-plane to ensure convergence):
\[ \lim_{s\to 0^+} \left[sF(s) - f(0)\right] = \int_0^\infty \dot{f}(t)\,dt = \lim_{t\to\infty} f(t) - f(0) \]Rearranging gives \( \lim_{t\to\infty} f(t) = \lim_{s\to 0^+} sF(s) \). The condition that \( sF(s) \) has no poles on or to the right of the imaginary axis ensures the integral of \( \dot{f} \) converges.
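The FVT is easy to sanity-check on a concrete signal. For \( f(t) = 1 - e^{-t} \) (whose transform is \( F(s) = 1/s - 1/(s+1) \), with \( sF(s) \) having its only pole at \( s = -1 \)), both limits should equal 1:

```python
import math

# Check the Final Value Theorem on f(t) = 1 - e^{-t},
# F(s) = 1/s - 1/(s+1).  Both limits should equal 1.

def sF(s):
    return s * (1.0 / s - 1.0 / (s + 1.0))

time_limit = 1.0 - math.exp(-20.0)     # f(t) at a large t
laplace_limit = sF(1e-9)               # s -> 0+ from the right

print(round(time_limit, 6), round(laplace_limit, 6))
```

Trying the same check on \( f(t) = e^{t} \) would fail — \( sF(s) = s/(s-1) \to 0 \) while \( f(t) \to \infty \) — which is exactly why the pole condition in the theorem statement matters.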
2.5 Transfer Functions
Definition — Transfer Function. For an LTI system with input \( u \) and output \( y \), the transfer function is the ratio of the Laplace transform of the output to the Laplace transform of the input, assuming zero initial conditions:
\[ G(s) = \frac{Y(s)}{U(s)}\bigg|_{\text{zero I.C.}} \]For the state-space system \( (\dot{\mathbf{x}} = A\mathbf{x} + Bu,\ y = C\mathbf{x} + Du) \), taking the Laplace transform with zero initial conditions gives \( sX(s) = AX(s) + BU(s) \), so \( X(s) = (sI - A)^{-1}BU(s) \) and therefore:
\[ G(s) = C(sI - A)^{-1}B + D \]The transfer function is a rational function of \( s \): \( G(s) = \frac{N(s)}{D(s)} \) where \( N(s) \) and \( D(s) \) are polynomials. The roots of \( D(s) \) are the poles and the roots of \( N(s) \) are the zeros.
The transfer function entirely characterises the input-output behaviour of an LTI system. Different state-space realisations can produce the same transfer function.
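The formula \( G(s) = C(sI - A)^{-1}B + D \) can be evaluated pointwise with complex arithmetic. The sketch below uses the illustrative realisation \( A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix} \), \( B = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \), \( C = \begin{bmatrix} 1 & 0 \end{bmatrix} \), \( D = 0 \), whose transfer function works out by hand to \( 1/(s^2 + 3s + 2) \).

```python
# Evaluate G(s) = C (sI - A)^{-1} B + D for a 2x2 example and compare
# against the hand-computed transfer function 1/(s^2 + 3s + 2).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

def G(s):
    # Build sI - A and invert it with the 2x2 adjugate formula.
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21
    # x = (sI - A)^{-1} B
    x1 = ( m22 * B[0] - m12 * B[1]) / det
    x2 = (-m21 * B[0] + m11 * B[1]) / det
    return C[0] * x1 + C[1] * x2 + D

s = 0.5 + 2.0j
print(abs(G(s) - 1.0 / (s * s + 3.0 * s + 2.0)))   # -> essentially 0
```

Note that \( \det(sI - A) = s^2 + 3s + 2 \) appears as the denominator, illustrating that the poles of \( G(s) \) are the eigenvalues of \( A \) (absent cancellations).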
2.6 Block-Diagram Algebra
Block diagrams are graphical representations of LTI systems where blocks represent transfer functions and arrows represent signal flow. Three fundamental interconnections are:
Series (cascade): \( G_{\text{total}}(s) = G_1(s)\,G_2(s) \)
Parallel: \( G_{\text{total}}(s) = G_1(s) + G_2(s) \)
Negative feedback loop: with the controller \( C(s) \) and plant \( P(s) \) in cascade in the forward path and unity negative feedback, the closed-loop transfer function from reference \( R \) to output \( Y \) is:
\[ T(s) = \frac{C(s)P(s)}{1 + C(s)P(s)} \]This is the most important formula in classical control. The product \( L(s) = C(s)P(s) \) is called the loop transfer function or loop gain.
2.7 Signal Flow Graphs and Mason’s Gain Rule
A signal flow graph (SFG) is an alternative to block diagrams. Nodes represent signals and directed branches represent gains. Mason’s gain formula provides a systematic method to compute the transfer function of an SFG directly.
Mason’s Gain Rule. The transfer function from source node \( s \) to output node \( t \) is:
\[ G = \frac{1}{\Delta}\sum_k P_k \Delta_k \]where the sum is over all forward paths \( k \) from \( s \) to \( t \); \( P_k \) is the gain of the \( k \)th forward path; \( \Delta = 1 - \sum L_i + \sum L_i L_j - \cdots \) is the graph determinant (sum of loop gains, minus sum of products of non-touching loop gains, plus sum of products of three mutually non-touching loop gains, etc.); and \( \Delta_k \) is the cofactor of path \( k \) (the graph determinant with all loops touching path \( k \) removed).
Mason’s rule is particularly useful for multi-loop systems where repeated block-diagram reduction would be tedious.
Chapter 3: Linear System Theory and Frequency Response
3.1 Response of LTI Systems
Given an LTI system with transfer function \( G(s) \) and input \( u(t) \), the output in the Laplace domain is \( Y(s) = G(s)U(s) \). In the time domain, this corresponds to the convolution:
\[ y(t) = (g * u)(t) = \int_0^t g(t - \tau)\,u(\tau)\,d\tau \]where \( g(t) = \mathcal{L}^{-1}\{G(s)\} \) is the impulse response of the system.
The impulse response \( g(t) \) uniquely characterises the LTI system’s behaviour: knowledge of \( g(t) \) for all \( t \geq 0 \) is equivalent to knowledge of \( G(s) \).
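The convolution integral can be approximated by a Riemann sum. As an illustrative check, take \( G(s) = 1/(s+1) \), whose impulse response is \( g(t) = e^{-t} \); convolving it with a unit step should reproduce the known step response \( 1 - e^{-t} \).

```python
import math

# Approximate y(T) = integral_0^T g(T - tau) u(tau) dtau by a Riemann sum,
# for G(s) = 1/(s+1) (impulse response g(t) = e^{-t}) driven by a unit
# step.  The exact step response is 1 - e^{-t}.
dt, T = 0.001, 5.0
n = int(T / dt)
g = [math.exp(-k * dt) for k in range(n)]   # sampled impulse response
u = [1.0] * n                               # sampled unit step

# (g * u)(T): flip g relative to u and sum.
y_T = sum(g[n - 1 - k] * u[k] for k in range(n)) * dt
print(round(y_T, 3), round(1.0 - math.exp(-T), 3))
```

The agreement improves as \( dt \to 0 \); the O(dt) discrepancy here is just the Riemann-sum discretisation error.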
3.2 Stability
Definition — Asymptotic Stability. The LTI system \( \dot{\mathbf{x}} = A\mathbf{x} \) is asymptotically stable if every trajectory starting from a finite initial condition satisfies \( \mathbf{x}(t) \to \mathbf{0} \) as \( t \to \infty \). For LTI systems, this is equivalent to requiring that all eigenvalues of \( A \) (equivalently, all poles of the transfer function) have strictly negative real parts — i.e., lie in the open left half-plane (LHP).
Definition — BIBO Stability. A system is bounded-input bounded-output (BIBO) stable if every bounded input \( u(t) \) (i.e., \( |u(t)| \leq M < \infty \) for all \( t \)) produces a bounded output \( y(t) \). For an LTI system, BIBO stability is equivalent to the impulse response being absolutely integrable: \( \int_0^\infty |g(t)|\,dt < \infty \). For a rational transfer function with no pole-zero cancellations, BIBO stability is equivalent to all poles being in the open LHP.
Remark — Relationship Between Asymptotic and BIBO Stability. For systems described by their state-space representation, asymptotic stability implies BIBO stability. The converse holds if the system is minimal (observable and controllable). Hidden unstable modes — unstable poles cancelled by zeros — can make a system BIBO stable but not asymptotically stable; such systems are dangerous in practice because the internal state may grow unboundedly.
3.3 Frequency Response
A remarkable property of stable LTI systems is their response to sinusoidal inputs. If the input is \( u(t) = A\sin(\omega t) \) and the system is asymptotically stable, then the steady-state output is:
\[ y_{\text{ss}}(t) = A\,|G(j\omega)|\,\sin\!\left(\omega t + \angle G(j\omega)\right) \]That is, the output is also sinusoidal at the same frequency \( \omega \), with amplitude scaled by \( |G(j\omega)| \) and phase shifted by \( \angle G(j\omega) \). The function \( G(j\omega) \) obtained by substituting \( s = j\omega \) into the transfer function is the frequency response.
Definition — Frequency Response. The frequency response of an LTI system with transfer function \( G(s) \) is the complex-valued function \( G(j\omega) \) of real frequency \( \omega \). Its magnitude \( |G(j\omega)| \) is the gain at frequency \( \omega \), and \( \angle G(j\omega) \) is the phase shift.
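The sinusoidal steady-state property can be observed directly in simulation. Below, the first-order system \( G(s) = 1/(s+1) \) (state form \( \dot{y} = -y + u \)) is driven with \( u = \sin(2t) \); after transients die out, the output amplitude should match \( |G(j2)| = 1/\sqrt{5} \).

```python
import math

# Drive G(s) = 1/(s+1), i.e. dy/dt = -y + u, with u = sin(2t) and compare
# the simulated steady-state amplitude with |G(j2)| = 1/sqrt(1 + 2^2).
dt, T, w = 1e-4, 30.0, 2.0
y, peak = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    y += dt * (-y + math.sin(w * t))        # forward-Euler step
    if t > T - 2 * math.pi / w:             # scan the last full period
        peak = max(peak, abs(y))

print(round(peak, 4), round(1.0 / math.sqrt(5.0), 4))
```

Repeating the experiment at other frequencies traces out \( |G(j\omega)| \) point by point, which is exactly how frequency responses are measured experimentally.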
3.4 Bode Plots
The Bode plot is a two-panel graphical representation of the frequency response:
- Magnitude plot: \( 20\log_{10}|G(j\omega)| \) in decibels (dB) vs. \( \log_{10}\omega \) (log-log scale)
- Phase plot: \( \angle G(j\omega) \) in degrees vs. \( \log_{10}\omega \) (log-linear scale)
Using logarithms turns multiplication into addition, making it easy to construct the Bode plot of a product of transfer functions by summing the individual Bode plots.
3.4.1 Bode Plot Rules for Standard Factors
Every rational transfer function can be factored into a product of four types of elementary terms. Their asymptotic Bode plots:
Constant gain \( K \): Magnitude is \( 20\log|K| \) dB (flat line); phase is \( 0° \) if \( K > 0 \) or \( -180° \) if \( K < 0 \).
Integrator \( 1/s \) (or differentiator \( s \)): Magnitude slope is \( -20 \) dB/decade (or \( +20 \) dB/decade); phase is \( -90° \) (or \( +90° \)), constant.
Real pole at \( s = -a \), i.e., \( 1/(1 + s/a) \): For \( \omega \ll a \): magnitude ≈ 0 dB, phase ≈ 0°. For \( \omega \gg a \): magnitude slope \( -20 \) dB/decade, phase ≈ \( -90° \). Transition at \( \omega = a \) (break frequency).
Complex conjugate poles at \( s = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} \): The standard second-order factor is \( \omega_n^2/(s^2 + 2\zeta\omega_n s + \omega_n^2) \). For \( \omega \ll \omega_n \): 0 dB, 0°. For \( \omega \gg \omega_n \): slope \( -40 \) dB/decade, phase \( -180° \). For small \( \zeta \), there is a resonance peak near \( \omega = \omega_n \) of height approximately \( 1/(2\zeta) \) (or \( -20\log(2\zeta) \) dB).
Example 3.1 — Bode Plot Sketch for a First-Order System.
Consider \( G(s) = \frac{10}{s + 2} \). Rewrite as \( G(s) = \frac{5}{1 + s/2} \). DC gain: \( G(0) = 5 \), or \( 20\log_{10}(5) \approx 14 \) dB. Break frequency: \( \omega_b = 2 \) rad/s.
Magnitude: Flat at 14 dB for \( \omega \ll 2 \); falls at \( -20 \) dB/decade for \( \omega \gg 2 \); at \( \omega = 2 \) the true magnitude is \( 14 - 3 = 11 \) dB.
Phase: \( 0° \) for \( \omega \ll 2 \); transitions to \( -90° \) for \( \omega \gg 2 \); exactly \( -45° \) at \( \omega = 2 \).
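These numbers are quick to confirm by evaluating \( G(j\omega) \) directly:

```python
import math, cmath

# Spot-check the Bode values for G(s) = 10/(s+2): DC gain ~ 14 dB,
# magnitude at the break frequency w = 2 is 3 dB below DC, phase -45 deg.
def G(s):
    return 10.0 / (s + 2.0)

def db(x):
    return 20.0 * math.log10(abs(x))

print(round(db(G(0)), 2))                           # 20*log10(5) ~ 13.98 dB
print(round(db(G(2j)), 2))                          # ~ 13.98 - 3.01 dB
print(round(math.degrees(cmath.phase(G(2j))), 1))   # -> -45.0
```

The 3 dB drop and the −45° phase at the break frequency are worth memorising; they pin down the true curve against the straight-line asymptotes.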
3.5 Nyquist Plots
The Nyquist plot (also called the polar plot) traces the curve \( G(j\omega) \) in the complex plane as \( \omega \) varies from \( -\infty \) to \( +\infty \). Since \( G(j(-\omega)) = \overline{G(j\omega)} \) for real-coefficient transfer functions, the plot for negative frequencies is the mirror image of the plot for positive frequencies about the real axis. In practice, we plot \( G(j\omega) \) for \( \omega \geq 0 \) and note the reflection.
Key features to identify on a Nyquist plot:
- The point \( (-1, 0) \) is special: if the loop gain \( L(j\omega) \) passes through \( -1 \), the denominator \( 1 + L(j\omega) = 0 \), indicating marginal stability of the closed-loop system.
- The distance from the curve to \( (-1, 0) \) provides a geometric measure of stability margins.
Chapter 4: First- and Second-Order System Dynamics
4.1 First-Order Systems
A first-order LTI system has a single state and transfer function:
\[ G(s) = \frac{K}{\tau s + 1} = \frac{K/\tau}{s + 1/\tau} \]where \( K \) is the DC (static) gain and \( \tau > 0 \) is the time constant. The step response (response to input \( u(t) = 1(t) \)) is:
\[ y(t) = K\left(1 - e^{-t/\tau}\right), \quad t \geq 0 \]Remark — Physical Meaning of the Time Constant. The time constant \( \tau \) sets the timescale of the system’s response. After one time constant (\( t = \tau \)), the step response has reached approximately 63.2% of its final value. After \( 5\tau \), it is within 1% of the final value, which is typically taken as the settling criterion for first-order systems. The slope of the step response at \( t = 0 \) is \( K/\tau \), so a smaller \( \tau \) means a faster initial response.
4.2 Second-Order Systems
The standard second-order transfer function is:
\[ G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \]where \( \omega_n > 0 \) is the natural frequency and \( \zeta \geq 0 \) is the damping ratio. The poles are:
\[ s_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1} \]The behaviour depends critically on \( \zeta \):
- \( \zeta > 1 \) (overdamped): Two distinct real negative poles; exponential decay, no oscillation.
- \( \zeta = 1 \) (critically damped): Repeated real pole at \( s = -\omega_n \); fastest non-oscillatory response.
- \( 0 < \zeta < 1 \) (underdamped): Complex conjugate poles \( s_{1,2} = -\sigma \pm j\omega_d \) where \( \sigma = \zeta\omega_n \) and \( \omega_d = \omega_n\sqrt{1-\zeta^2} \) is the damped natural frequency; oscillatory response.
- \( \zeta = 0 \) (undamped): Poles on the imaginary axis; sustained oscillation; marginally stable.
4.3 Step Response Characteristics
For an underdamped second-order system (\( 0 < \zeta < 1 \)), the unit step response is:
\[ y(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\sin\!\left(\omega_d t + \phi\right) \]where \( \phi = \arccos\zeta \). The key performance specifications are:
Definition — Step Response Specifications.
- Rise time \( T_r \): time to rise from 10% to 90% of the final value (approximately \( T_r \approx 1.8/\omega_n \) for moderate \( \zeta \)).
- Peak time \( T_p \): time at which the first overshoot peak occurs: \( T_p = \pi/\omega_d \).
- Percent overshoot \( \%OS \): fractional excess over the final value at the first peak: \( \%OS = 100\exp\!\left(-\pi\zeta/\sqrt{1-\zeta^2}\right) \).
- Settling time \( T_s \): time after which \( |y(t) - y_\infty| \leq 0.02\, y_\infty \) (2% criterion). A useful approximation is \( T_s \approx 4/(\zeta\omega_n) \).
Example 4.1 — Second-Order Step Response Parameters.
A second-order system has \( \omega_n = 5 \) rad/s and \( \zeta = 0.5 \). Compute the step response specifications.
\( \sigma = \zeta\omega_n = 2.5 \), \( \omega_d = \omega_n\sqrt{1-\zeta^2} = 5\sqrt{0.75} \approx 4.33 \) rad/s.
Peak time: \( T_p = \pi/\omega_d = \pi/4.33 \approx 0.725 \) s.
Percent overshoot: \( \%OS = 100\exp(-\pi \cdot 0.5/\sqrt{0.75}) \approx 100\exp(-1.814) \approx 16.3\% \).
Settling time (2%): \( T_s \approx 4/(0.5 \cdot 5) = 1.6 \) s.
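The arithmetic of Example 4.1 can be reproduced directly from the formulas:

```python
import math

# Recompute the specifications of Example 4.1 (wn = 5 rad/s, zeta = 0.5)
# from Tp = pi/wd, %OS = 100*exp(-pi*zeta/sqrt(1-zeta^2)), Ts ~ 4/(zeta*wn).
wn, zeta = 5.0, 0.5
wd = wn * math.sqrt(1.0 - zeta**2)
Tp = math.pi / wd
OS = 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))
Ts = 4.0 / (zeta * wn)

# Matches Example 4.1: Tp ~ 0.73 s, %OS ~ 16.3, Ts = 1.6 s.
print(round(Tp, 3), round(OS, 1), round(Ts, 2))
```

Wrapping these three formulas in a helper function is a convenient way to explore the trade-off between speed (\( \omega_n \)) and overshoot (\( \zeta \)) when choosing pole locations.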
4.4 Poles in the Complex Plane
The location of the poles in the \( s \)-plane directly encodes the step response characteristics:
- Real part \( -\sigma = -\zeta\omega_n \): determines decay rate. The further left, the faster the decay. The settling time \( T_s \approx 4/\sigma \) depends only on the real part.
- Imaginary part \( \omega_d \): determines oscillation frequency. Peak time \( T_p = \pi/\omega_d \) depends only on the imaginary part.
- Distance from origin \( \omega_n = |s_{1,2}| \): natural frequency.
- Angle from negative real axis \( \theta = \arccos\zeta \): the overshoot depends only on \( \zeta \), which corresponds to this angle. Lines of constant \( \zeta \) are radial lines; lines of constant \( \sigma \) are vertical lines; lines of constant \( \omega_d \) are horizontal lines.
4.5 Effects of Adding Poles and Zeros
4.5.1 Dominant Poles
If one pair of complex poles is much closer to the origin than the others, the distant poles decay rapidly and the long-term response is dominated by the near poles. In this case, the system can be approximated by a second-order model. This is the dominant pole approximation: valid when the ratio of real parts is at least 5:1.
4.5.2 Adding a Zero
Adding a zero \( s = -z \) to the numerator of a second-order transfer function changes the step response even if the poles are unchanged. A zero in the LHP (minimum-phase zero, \( z > 0 \)) increases overshoot and decreases rise time. A zero in the RHP (non-minimum phase zero, \( z < 0 \)) causes an initial undershoot — the response first moves in the wrong direction before recovering. This is a fundamental limitation that cannot be overcome with feedback alone.
Remark — Non-Minimum Phase Systems. Non-minimum phase zeros arise physically in systems with internal delays or cross-coupling. For example, an aircraft with a very forward centre of gravity may initially pitch down when up-elevator is applied before the aerodynamic forces correct it. Non-minimum phase behaviour limits achievable closed-loop bandwidth — attempting to control too aggressively causes instability.
Chapter 5: Feedback Stability and Steady-State Performance
5.1 Closed-Loop Transfer Function
Consider the standard unity-feedback loop with plant \( P(s) \) and controller \( C(s) \). The loop transfer function is \( L(s) = C(s)P(s) \). The closed-loop transfer function from reference \( R(s) \) to output \( Y(s) \) is:
\[ T(s) = \frac{L(s)}{1 + L(s)} = \frac{C(s)P(s)}{1 + C(s)P(s)} \]The transfer function from disturbance \( D(s) \) (entering at the plant input) to output is:
\[ G_{d}(s) = \frac{P(s)}{1 + C(s)P(s)} = \frac{P(s)}{1 + L(s)} \]The transfer function from reference to tracking error \( E(s) = R(s) - Y(s) \) is:
\[ S(s) = \frac{1}{1 + L(s)} \]This function \( S(s) \) is called the sensitivity function and is fundamental to understanding performance and robustness.
5.2 Characteristic Equation
Definition — Characteristic Equation. The characteristic equation of the closed-loop system is:
\[ 1 + L(s) = 0 \quad \Longleftrightarrow \quad 1 + C(s)P(s) = 0 \]The roots of the characteristic equation are the closed-loop poles. For stability, all closed-loop poles must lie in the open left half-plane. Equivalently, if \( C(s)P(s) = N(s)/D(s) \) in lowest terms, then:
\[ D(s) + N(s) = 0 \]is the characteristic polynomial whose roots must all have negative real parts.
5.3 Routh-Hurwitz Stability Criterion
Directly computing the roots of the characteristic polynomial is cumbersome for high-order systems. The Routh-Hurwitz criterion allows us to determine stability algebraically from the polynomial coefficients alone, without finding the roots.
Theorem — Routh-Hurwitz Criterion. Consider the characteristic polynomial:
\[ p(s) = a_n s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0, \quad a_n > 0 \]Construct the Routh array:
\[ \begin{array}{c|cccc} s^n & a_n & a_{n-2} & a_{n-4} & \cdots \\ s^{n-1} & a_{n-1} & a_{n-3} & a_{n-5} & \cdots \\ s^{n-2} & b_1 & b_2 & b_3 & \cdots \\ s^{n-3} & c_1 & c_2 & c_3 & \cdots \\ \vdots & & & & \\ s^0 & d_1 & & & \end{array} \]where the entries in each subsequent row are computed from the two rows above:
\[ b_1 = \frac{a_{n-1}a_{n-2} - a_n a_{n-3}}{a_{n-1}}, \quad b_2 = \frac{a_{n-1}a_{n-4} - a_n a_{n-5}}{a_{n-1}}, \ldots \]and similarly for subsequent rows. The number of roots of \( p(s) \) with positive real parts equals the number of sign changes in the first column of the Routh array. The system is stable (all roots in the open LHP) if and only if all entries in the first column are positive (and nonzero).
Sketch of the Routh-Hurwitz First-Column Sign Rule.
The Routh-Hurwitz criterion is a consequence of Sturm’s theorem on sign changes in a Sturm sequence associated with a polynomial and its derivative. The key insight is that the first column of the Routh array forms a sequence of pivot elements in the LU factorisation of the Hurwitz matrix:
\[ H_n = \begin{bmatrix} a_{n-1} & a_{n-3} & a_{n-5} & \cdots \\ a_n & a_{n-2} & a_{n-4} & \cdots \\ 0 & a_{n-1} & a_{n-3} & \cdots \\ \vdots & & & \ddots \end{bmatrix} \]By the Hurwitz stability criterion, all leading principal minors of \( H_n \) are positive if and only if all eigenvalues of the companion matrix — equivalently, all roots of \( p(s) \) — have negative real parts. The leading minor of order \( k \) relates to the product of the first \( k \) elements in the first column, so sign changes in the first column correspond exactly to sign changes in the minor sequence, each indicating a root in the RHP.
Example 5.1 — Routh-Hurwitz for a Third-Order System.
Determine stability of the closed-loop system with characteristic polynomial:
\[ p(s) = s^3 + 6s^2 + 11s + 6 \]Routh array:
\[ \begin{array}{c|cc} s^3 & 1 & 11 \\ s^2 & 6 & 6 \\ s^1 & \frac{6\cdot 11 - 1\cdot 6}{6} = \frac{60}{6} = 10 & 0 \\ s^0 & 6 & \\ \end{array} \]First column: \( 1, 6, 10, 6 \). All positive — no sign changes, so all three roots are in the LHP. The system is stable. (Roots are \( s = -1, -2, -3 \).)
Now change the polynomial to \( s^3 + 2s^2 + s + 10 \):
\[ \begin{array}{c|cc} s^3 & 1 & 1 \\ s^2 & 2 & 10 \\ s^1 & \frac{2\cdot 1 - 1\cdot 10}{2} = -4 & 0 \\ s^0 & 10 & \\ \end{array} \]First column: \( 1, 2, -4, 10 \). Two sign changes (\( 2 \to -4 \) and \( -4 \to 10 \)), so two roots are in the RHP. The closed-loop system is unstable.
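The table construction mechanises nicely. Below is a minimal sketch that builds the first column of the Routh array and counts sign changes; it assumes no zero pivot occurs (true for both examples above), so it omits the special-case rules needed when a first-column entry vanishes.

```python
# Build the first column of the Routh array and count sign changes
# (= number of RHP roots).  Assumes no zero pivots arise, as in the
# two worked examples; the epsilon/auxiliary-polynomial special cases
# are not handled.

def routh_first_column(coeffs):
    # coeffs = [a_n, a_{n-1}, ..., a_0], a_n > 0
    n = len(coeffs) - 1
    row0 = coeffs[0::2] + [0.0] * len(coeffs)   # s^n row, zero-padded
    row1 = coeffs[1::2] + [0.0] * len(coeffs)   # s^{n-1} row
    first = [row0[0], row1[0]]
    for _ in range(n - 1):
        pivot = row1[0]
        new = [(pivot * row0[j + 1] - row0[0] * row1[j + 1]) / pivot
               for j in range(len(row1) - 1)] + [0.0]
        row0, row1 = row1, new
        first.append(row1[0])
    return first

def sign_changes(col):
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

print(sign_changes(routh_first_column([1, 6, 11, 6])))   # -> 0 (stable)
print(sign_changes(routh_first_column([1, 2, 1, 10])))   # -> 2 (two RHP roots)
```

This reproduces the first columns \( 1, 6, 10, 6 \) and \( 1, 2, -4, 10 \) computed by hand above.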
5.4 Steady-State Error Analysis
5.4.1 Tracking Error
For a unity-feedback system with loop gain \( L(s) \), the steady-state error to a reference input \( R(s) \) is found by applying the Final Value Theorem to the error signal \( E(s) = S(s)R(s) = R(s)/(1+L(s)) \):
\[ e_{\text{ss}} = \lim_{t\to\infty} e(t) = \lim_{s\to 0} s \cdot \frac{R(s)}{1 + L(s)} \]Definition — System Type. The type of a feedback system is the number of pure integrators in the loop transfer function \( L(s) \). If \( L(s) = \frac{K(s+z_1)\cdots}{s^N(s+p_1)\cdots} \), then the system is type \( N \).
Derivation of Steady-State Error Constants.
Position constant (step input \( R(s) = 1/s \)):
\[ e_{\text{ss}} = \lim_{s\to 0} \frac{s \cdot (1/s)}{1 + L(s)} = \frac{1}{1 + \lim_{s\to 0} L(s)} = \frac{1}{1 + K_p} \]where \( K_p = \lim_{s\to 0} L(s) \) is the position constant. For type 0: \( K_p = K \) (finite), so \( e_{\text{ss}} = 1/(1+K) \neq 0 \). For type \( \geq 1 \): \( K_p = \infty \), so \( e_{\text{ss}} = 0 \).
Velocity constant (ramp input \( R(s) = 1/s^2 \)):
\[ e_{\text{ss}} = \lim_{s\to 0} \frac{s \cdot (1/s^2)}{1 + L(s)} = \lim_{s\to 0} \frac{1/s}{1 + L(s)} = \frac{1}{\lim_{s\to 0} sL(s)} = \frac{1}{K_v} \]where \( K_v = \lim_{s\to 0} sL(s) \) is the velocity constant. For type 0: \( K_v = 0 \), so \( e_{\text{ss}} = \infty \) (cannot track a ramp). For type 1: \( K_v = K \) (finite), so \( e_{\text{ss}} = 1/K \). For type \( \geq 2 \): \( K_v = \infty \), so \( e_{\text{ss}} = 0 \).
Acceleration constant (parabolic input \( R(s) = 1/s^3 \)):
\[ e_{\text{ss}} = \frac{1}{K_a}, \quad K_a = \lim_{s\to 0} s^2 L(s) \]Type 0 or 1 systems have \( K_a = 0 \) (infinite error). Type 2 systems have finite \( K_a \). Type \( \geq 3 \) systems have zero acceleration error.
The results are summarised in the following table:
| System Type | Step error | Ramp error | Parabolic error |
|---|---|---|---|
| 0 | \( 1/(1+K_p) \) | \( \infty \) | \( \infty \) |
| 1 | 0 | \( 1/K_v \) | \( \infty \) |
| 2 | 0 | 0 | \( 1/K_a \) |
Remark — Why Integral Action Eliminates Steady-State Error. A proportional controller \( C(s) = K_P \) provides type-0 loop gain, so there will always be a nonzero position error to a step input (unless \( K_P \to \infty \)). Adding an integrator to the controller — as in PI or PID control — raises the system type by one. The loop gain \( L(s) = C(s)P(s) \) now contains \( 1/s \) from the integrator, making \( K_p = \infty \) and driving the steady-state step error to zero. The integrator acts as an internal model of a constant (step) disturbance, continuously accumulating error until it drives the steady-state error to zero. This is an instance of the Internal Model Principle.
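The error constants can be evaluated numerically by approaching the \( s \to 0 \) limit. A small sketch, using a hypothetical type-1 loop \( L(s) = 4(s+2)/(s(s+3)) \) of our own choosing, for which \( K_v = 8/3 \):

```python
# Numerical check of the error-constant table for a hypothetical type-1
# loop L(s) = 4(s+2)/(s(s+3)); here K_v = lim s*L(s) = 8/3 as s -> 0.
def L(s):
    return 4 * (s + 2) / (s * (s + 3))

eps = 1e-9                       # approximate the s -> 0 limit numerically
Kp = L(eps)                      # diverges (type 1): zero steady-state step error
Kv = eps * L(eps)                # -> 8/3: steady-state ramp error 1/Kv = 3/8
print(Kp > 1e8, round(Kv, 6))    # True 2.666667
```

This matches the table: a type-1 loop has zero step error and finite ramp error \( 1/K_v \).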
Chapter 6: Root-Locus Analysis
6.1 The Root-Locus Concept
The root-locus method, introduced by W.R. Evans in 1948, provides a graphical technique for visualising how the closed-loop poles move in the \( s \)-plane as a scalar gain \( K \) is varied from 0 to \( \infty \). This is enormously useful for understanding how gain affects stability and transient response.
The characteristic equation for a unity-feedback system with loop gain \( KG(s) \) is:
\[ 1 + KG(s) = 0 \quad \Longleftrightarrow \quad G(s) = -\frac{1}{K} \]In polar form, this requires simultaneously:
\[ |G(s)| = \frac{1}{K} \quad \text{(magnitude condition)} \]\[ \angle G(s) = (2k+1)\cdot 180°,\quad k \in \mathbb{Z} \quad \text{(angle condition)} \]The root locus is the set of all \( s \) satisfying the angle condition for some \( K \geq 0 \).
6.2 Rules for Constructing the Root Locus
Root-Locus Construction Rules. Let \( G(s) = \frac{N(s)}{D(s)} \) with \( m \) finite zeros and \( n \) finite poles (\( m \leq n \)).
Rule 1 — Number of branches. The root locus has \( n \) branches (one per closed-loop pole). Each branch starts at a pole of \( G(s) \) (as \( K \to 0 \)) and ends at a zero of \( G(s) \) or infinity (as \( K \to \infty \)).
Rule 2 — Real-axis segments. A point on the real axis lies on the root locus if and only if the total number of real poles and zeros of \( G(s) \) to its right is odd.
Rule 3 — Asymptotes. The \( n - m \) branches that go to infinity do so along asymptotes at angles:
\[ \phi_k = \frac{(2k+1) \cdot 180°}{n - m}, \quad k = 0, 1, \ldots, n-m-1 \]All asymptotes emanate from the centroid:
\[ \sigma_a = \frac{\sum \text{poles} - \sum \text{zeros}}{n - m} \]Rule 4 — Departure angles. The angle of departure from a complex pole \( p_i \) is:
\[ \phi_{\text{dep}} = 180° - \sum_{j\neq i}\angle(p_i - p_j) + \sum_k \angle(p_i - z_k) \]Rule 5 — Breakaway and break-in points. Real-axis breakaway/break-in points satisfy \( dK/ds = 0 \), equivalently \( \frac{d}{ds}[D(s)/N(s)] = 0 \) (or equivalently \( N'D = ND' \)).
Rule 6 — Imaginary axis crossings. Find where the root locus crosses the imaginary axis by substituting \( s = j\omega \) into the characteristic equation and solving for real \( \omega \) and the corresponding \( K \). Alternatively, use the Routh criterion.
Rule 7 — Gain selection. Once the locus is drawn, the gain \( K \) for a desired closed-loop pole at \( s^* \) is found from the magnitude condition:
\[ K = \frac{1}{|G(s^*)|} = \frac{\prod_i |s^* - p_i|}{\prod_j |s^* - z_j|} \]Example 6.1 — Root Locus for a Standard Plant.
Let \( G(s) = \frac{1}{s(s+2)(s+4)} \). There are \( n = 3 \) poles (\( 0, -2, -4 \)) and \( m = 0 \) zeros.
Real-axis segments: From the origin to \( -2 \) (one pole to the right of any point in this interval), and from \( -4 \) to \( -\infty \) (three poles to the right).
Centroid: \( \sigma_a = (0 - 2 - 4)/3 = -2 \).
Asymptote angles: \( \phi_k = (2k+1) \cdot 60° \) for \( k = 0, 1, 2 \): so \( 60°, 180°, 300° \) (equivalently \( 60°, 180°, -60° \)).
Breakaway point: On the segment \( [-2, 0] \). Setting the derivative to zero: for \( G = 1/(s(s+2)(s+4)) \), we need \( \frac{d}{ds}[s(s+2)(s+4)] = 0 \). Expanding: \( 3s^2 + 12s + 8 = 0 \), so \( s = (-12 \pm \sqrt{144-96})/6 = (-12 \pm \sqrt{48})/6 \), i.e. \( s \approx -0.845 \) or \( s \approx -3.155 \). The point \( s \approx -0.845 \) lies in \( [-2, 0] \) and is the breakaway point (\( s \approx -3.155 \) is not on the locus).
Imaginary axis crossing: Characteristic polynomial is \( s^3 + 6s^2 + 8s + K = 0 \). Routh array:
\[ \begin{array}{c|cc} s^3 & 1 & 8 \\ s^2 & 6 & K \\ s^1 & (48-K)/6 & 0 \\ s^0 & K & \\ \end{array} \]First column sign changes when \( K = 48 \). At \( K = 48 \), the auxiliary equation \( 6s^2 + 48 = 0 \) gives \( s = \pm 2j\sqrt{2} \approx \pm 2.83j \).
So the root locus crosses the imaginary axis at \( \pm 2.83j \) when \( K = 48 \). For \( K < 48 \) the system is stable; for \( K > 48 \) it is unstable.
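The crossing can be verified by computing the closed-loop poles directly at the critical gain — a quick numerical check of the Routh result:

```python
import numpy as np

# Verify the imaginary-axis crossing of Example 6.1: at K = 48 the
# characteristic polynomial s^3 + 6s^2 + 8s + K factors as (s+6)(s^2+8).
poles = np.roots([1.0, 6.0, 8.0, 48.0])
on_axis = sorted(p.imag for p in poles if abs(p.real) < 1e-8)
print([round(v, 3) for v in on_axis])           # [-2.828, 2.828] = +/- 2j*sqrt(2)
print(any(abs(p + 6) < 1e-8 for p in poles))    # True: third pole at s = -6
```

Sweeping `np.roots([1, 6, 8, K])` over a grid of \( K \) values traces the entire locus numerically.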
Remark — Practical Use of Root Locus. Root locus gives the designer intuition about the trade-off between speed of response (real part of dominant poles) and stability margin (distance to imaginary axis). For the example above, increasing \( K \) from 0 moves the dominant poles away from the origin (faster response) but eventually drives them into the RHP (instability) at \( K = 48 \). The designer must choose \( K \) to balance these competing objectives.
Chapter 7: PID and Classical Controller Design
7.1 PID Control
The Proportional-Integral-Derivative (PID) controller is by far the most widely used controller in industrial practice. Surveys consistently find that over 90% of industrial control loops use some form of PID. Its popularity stems from its simplicity, intuitive tuning, and effectiveness across a wide range of plants.
The PID control law in continuous time is:
\[ u(t) = K_P\,e(t) + K_I\int_0^t e(\tau)\,d\tau + K_D\,\dot{e}(t) \]In transfer function form:
\[ C(s) = K_P + \frac{K_I}{s} + K_D s = K_P\left(1 + \frac{1}{T_I s} + T_D s\right) \]where \( T_I = K_P/K_I \) is the integral time and \( T_D = K_D/K_P \) is the derivative time.
7.2 Effects of P, I, and D Terms
7.2.1 Proportional Action
The proportional term \( K_P e(t) \) produces a control signal proportional to the current error. Increasing \( K_P \) generally:
- Reduces steady-state error (though cannot eliminate it for a step reference in a type-0 plant)
- Speeds up the response (larger gains push poles further left or faster)
- Can destabilise the system if made too large
7.2.2 Integral Action
The integral term \( K_I \int e\,dt \) accumulates past errors and continues applying control effort as long as any nonzero error persists. This guarantees zero steady-state error to step references for any stable plant (raises system type by 1). However, integral action introduces phase lag and can destabilise or slow the response. Integral windup occurs when the actuator saturates: the integrator continues accumulating a large value that is difficult to unwind, causing overshoot when the setpoint is reached. Anti-windup mechanisms (conditional integration, back-calculation) address this.
7.2.3 Derivative Action
The derivative term \( K_D \dot{e}(t) \) anticipates future error by reacting to the rate of change. It provides phase lead, which can significantly improve stability margin and reduce overshoot. The major drawback is amplification of high-frequency noise: the derivative of a sensor noise signal is large even if the noise amplitude is small. In practice, the derivative is always filtered:
\[ C_D(s) = \frac{K_D s}{1 + sT_D/N} \]where \( N \) is the derivative filter coefficient (typically \( 5 \leq N \leq 20 \)); the filter pole at \( s = -N/T_D \) limits the high-frequency gain of the derivative term to \( N K_P \).
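A minimal discrete-time PID sketch with a filtered derivative, assuming a fixed sample period; the class, gains, and test plant below are illustrative choices of ours, not a standard implementation:

```python
# A minimal discrete-time PID with filtered derivative, assuming a fixed
# sample period dt; the class, gains, and plant below are illustrative only.
class PID:
    def __init__(self, Kp, Ki, Kd, dt, N=10.0):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.Tf = (Kd / Kp) / N          # filter time constant T_D / N
        self.integral = 0.0
        self.d_state = 0.0               # low-pass-filtered derivative
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        raw_d = (error - self.prev_error) / self.dt
        alpha = self.dt / (self.Tf + self.dt)        # first-order filter step
        self.d_state += alpha * (raw_d - self.d_state)
        self.prev_error = error
        return self.Kp * error + self.Ki * self.integral + self.Kd * self.d_state

# Regulate the first-order plant x' = -x + u to a unit step reference.
pid = PID(Kp=4.0, Ki=2.0, Kd=0.5, dt=0.01)
x = 0.0
for _ in range(2000):                    # 20 s of simulated time
    u = pid.update(1.0 - x)
    x += 0.01 * (-x + u)                 # forward-Euler plant step
print(abs(x - 1.0) < 0.01)               # True: integral action removes the offset
```

Note that a production controller would also need anti-windup (Section 7.2.2) whenever the actuator can saturate.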
7.3 Ziegler-Nichols Tuning
The Ziegler-Nichols methods provide systematic rules for setting PID parameters from simple open-loop or closed-loop experiments.
7.3.1 Open-Loop (Step Response) Method
Apply a step input to the open-loop plant and record the step response. Identify the S-curve shape and draw a tangent line at the inflection point. Read off:
- \( L \): apparent dead time (time before the step response begins to rise)
- \( \tau \): time constant (slope parameter)
Then:
| Controller | \( K_P \) | \( T_I \) | \( T_D \) |
|---|---|---|---|
| P | \( \tau/(KL) \) | \( \infty \) | 0 |
| PI | \( 0.9\tau/(KL) \) | \( 3.3L \) | 0 |
| PID | \( 1.2\tau/(KL) \) | \( 2L \) | \( 0.5L \) |
7.3.2 Closed-Loop (Ultimate Gain) Method
With the controller set to proportional only, increase \( K_P \) until the closed-loop system is at the verge of instability (sustained oscillations). The gain at this point is the ultimate gain \( K_u \) and the period of oscillation is the ultimate period \( T_u \). Then:
| Controller | \( K_P \) | \( T_I \) | \( T_D \) |
|---|---|---|---|
| P | \( 0.5 K_u \) | \( \infty \) | 0 |
| PI | \( 0.45 K_u \) | \( T_u/1.2 \) | 0 |
| PID | \( 0.6 K_u \) | \( T_u/2 \) | \( T_u/8 \) |
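The ultimate-gain table translates directly into code. A small sketch (our own helper), which also converts the \( (K_P, T_I, T_D) \) form into parallel gains via \( K_I = K_P/T_I \) and \( K_D = K_P T_D \):

```python
# Ziegler-Nichols ultimate-gain rules as a lookup table, assuming Ku and Tu
# have been measured from the sustained-oscillation experiment.
def zn_ultimate(Ku, Tu, kind="PID"):
    rules = {
        "P":   (0.5 * Ku,  float("inf"), 0.0),
        "PI":  (0.45 * Ku, Tu / 1.2,     0.0),
        "PID": (0.6 * Ku,  Tu / 2,       Tu / 8),
    }
    Kp, Ti, Td = rules[kind]
    # convert to parallel-form gains: Ki = Kp/Ti, Kd = Kp*Td
    Ki = Kp / Ti if Ti != float("inf") else 0.0
    return {"Kp": Kp, "Ki": Ki, "Kd": Kp * Td}

print(zn_ultimate(Ku=10.0, Tu=2.0))   # {'Kp': 6.0, 'Ki': 6.0, 'Kd': 1.5}
```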
Remark — Limitations of Ziegler-Nichols. Ziegler-Nichols tuning was designed to provide approximately 25% overshoot (a quarter-decay ratio). This is often too much for modern applications. Additionally, the method assumes an S-shaped step response, which is characteristic of overdamped plants with delay but not appropriate for all plant types (e.g., integrating plants or plants with resonances). Modern tuning methods — lambda tuning, IMC-based tuning, AMIGO — generally outperform Ziegler-Nichols for demanding applications.
7.4 Pole-Placement Design
Pole placement (also called direct design) specifies the desired closed-loop pole locations and solves for the controller that achieves them. For a PID controller acting on a second-order plant \( P(s) = \omega_n^2/(s^2 + 2\zeta\omega_n s + \omega_n^2) \), the loop transfer function is:
\[ L(s) = C_{\text{PID}}(s) \cdot P(s) = \frac{K_D s^2 + K_P s + K_I}{s} \cdot \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \]The closed-loop characteristic polynomial (of degree 3 or 4 depending on the plant order) is matched to a desired polynomial with specified poles. Equating coefficients yields equations for \( K_P \), \( K_I \), \( K_D \).
Example 7.1 — PID Tuning by Pole Placement.
Plant: \( P(s) = 1/s(s+1) \) (integrating plant). Desired closed-loop poles: \( s = -2 \pm 2j \) and \( s = -10 \) (dominant pair plus fast pole).
Desired characteristic polynomial:
\[ p_d(s) = (s+2-2j)(s+2+2j)(s+10) = (s^2+4s+8)(s+10) = s^3 + 14s^2 + 48s + 80 \]With a proportional-derivative controller \( C(s) = K_P + K_D s \), the loop is \( C(s)/s(s+1) \) and the closed-loop characteristic polynomial is:
\[ s(s+1) + (K_P + K_D s) = s^2 + (1 + K_D)s + K_P \]For a third-order desired polynomial, we need a PID controller. Setting \( C(s) = K_P + K_I/s + K_D s \), the characteristic polynomial becomes:
\[ s^3 + (1+K_D)s^2 + K_P s + K_I \]Matching coefficients: \( 1 + K_D = 14 \Rightarrow K_D = 13 \); \( K_P = 48 \); \( K_I = 80 \).
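The coefficient matching can be verified by computing the roots of the resulting characteristic polynomial:

```python
import numpy as np

# Verify Example 7.1: with K_D = 13, K_P = 48, K_I = 80 the closed-loop
# characteristic polynomial s^3 + (1+K_D)s^2 + K_P s + K_I has the
# desired roots -10 and -2 +/- 2j.
KD, KP, KI = 13.0, 48.0, 80.0
poles = np.roots([1.0, 1.0 + KD, KP, KI])
print(np.sort_complex(poles))   # roots: -10, -2-2j, -2+2j as designed
```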
Chapter 8: Frequency-Domain Stability — Nyquist and Bode
8.1 The Nyquist Stability Criterion
The Nyquist stability criterion provides a graphical method for assessing closed-loop stability from the open-loop frequency response, without needing to find closed-loop poles explicitly. It is based on the argument principle from complex analysis.
Theorem — Cauchy Argument Principle. Let \( F(s) \) be a meromorphic function (analytic except at isolated poles). If \( \Gamma_s \) is a closed contour in the \( s \)-plane that does not pass through any pole or zero of \( F(s) \), then the image contour \( \Gamma_F = F(\Gamma_s) \) encircles the origin \( N = Z - P \) times in the same direction in which \( \Gamma_s \) is traversed (counterclockwise traversal gives counterclockwise encirclements; the clockwise Nyquist contour gives clockwise ones), where \( Z \) is the number of zeros and \( P \) is the number of poles of \( F(s) \) enclosed by \( \Gamma_s \) (counted with multiplicity).
8.1.1 Application to Feedback Systems
Sketch of Nyquist Criterion Derivation from Cauchy’s Principle.
For the standard unity-feedback loop, the closed-loop characteristic function is \( F(s) = 1 + L(s) \). The closed-loop poles are the zeros of \( F(s) \), and the open-loop poles of \( L(s) \) are the poles of \( F(s) \).
Choose \( \Gamma_s \) to be the Nyquist contour: the imaginary axis from \( -jR \) to \( +jR \) plus a large semicircle of radius \( R \to \infty \) in the right half-plane, traversed clockwise. This contour encloses the entire open right half-plane (ORHP), so:
- \( Z_{\text{RHP}} \) = number of zeros of \( F(s) \) in ORHP = number of unstable closed-loop poles
- \( P_{\text{RHP}} \) = number of poles of \( F(s) \) in ORHP = number of unstable open-loop poles
The Nyquist contour maps through \( F(s) = 1 + L(s) \) to produce a closed curve in the \( F \)-plane. The number of clockwise encirclements of the origin by this curve equals \( Z_{\text{RHP}} - P_{\text{RHP}} \).
It is more convenient to plot the image of \( \Gamma_s \) under \( L(s) \) (the Nyquist plot of \( L \)) and count encirclements of the point \( -1 \) instead, since \( F(s) = 1 + L(s) \) and \( F(s) = 0 \Leftrightarrow L(s) = -1 \).
Theorem — Nyquist Stability Criterion. Let \( L(s) = C(s)P(s) \) be the loop transfer function with \( P \) poles in the open right half-plane. Let \( N \) denote the number of counterclockwise (positive) encirclements of the point \( (-1, 0) \) by the Nyquist plot of \( L(s) \) as \( s \) traverses the Nyquist contour. Then the number of unstable closed-loop poles is:
\[ Z_{\text{RHP}} = P_{\text{RHP}} - N \]The closed-loop system is stable if and only if \( Z_{\text{RHP}} = 0 \), i.e., \( N = P_{\text{RHP}} \).
For open-loop stable plants (\( P_{\text{RHP}} = 0 \)): The closed-loop system is stable if and only if the Nyquist plot does not encircle the \( (-1, 0) \) point.
Remark — Systems with Imaginary Axis Poles. If \( L(s) \) has poles on the imaginary axis (e.g., integrators), the Nyquist contour must be indented to the right around these poles using small semicircles of radius \( \epsilon \to 0 \). The contribution of these indentations must be included in the Nyquist plot (they typically produce large arcs). For a pole at \( s = 0 \), the small semicircle maps to a large arc of radius \( \to \infty \) in the \( L(s) \)-plane.
Example 8.1 — Nyquist Criterion for a Simple Loop.
Let \( L(s) = \frac{K}{s(s+1)(s+2)} \). Apart from the integrator pole at \( s = 0 \) — which lies on the imaginary axis, not in the ORHP, and is handled by indenting the contour — the loop has no unstable poles, so \( P_{\text{RHP}} = 0 \). Draw the Nyquist plot and determine the range of \( K > 0 \) for closed-loop stability.
For \( s = j\omega \), \( \omega > 0 \):
\[ L(j\omega) = \frac{K}{j\omega(j\omega+1)(j\omega+2)} \]At \( \omega = 0^+ \): \( |L| \to \infty \), \( \angle L = -90° \) (due to integrator). At \( \omega \to \infty \): \( |L| \to 0 \), \( \angle L \to -270° \).
The Nyquist plot crosses the negative real axis when \( \angle L(j\omega) = -180° \). Compute: \( \angle L = -90° - \arctan(\omega) - \arctan(\omega/2) = -180° \Rightarrow \arctan(\omega) + \arctan(\omega/2) = 90° \). Since \( \arctan a + \arctan b = 90° \) exactly when \( ab = 1 \), this gives \( \omega \cdot (\omega/2) = 1 \Rightarrow \omega = \sqrt{2} \) rad/s.
At \( \omega = \sqrt{2} \): \( L(j\sqrt{2}) = K/[j\sqrt{2}(1+j\sqrt{2})(2+j\sqrt{2})] \). The magnitude is:
\[ |L(j\sqrt{2})| = \frac{K}{\sqrt{2}\cdot\sqrt{1+2}\cdot\sqrt{4+2}} = \frac{K}{\sqrt{2}\cdot\sqrt{3}\cdot\sqrt{6}} = \frac{K}{6} \]The plot crosses the negative real axis at \( -K/6 \). For no encirclement of \( -1 \): require \( K/6 < 1 \Rightarrow K < 6 \). This agrees with a Routh-Hurwitz check on the characteristic polynomial \( s^3 + 3s^2 + 2s + K \), which gives stability for \( 0 < K < 6 \).
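The crossing value can be confirmed by evaluating \( L(j\omega) \) at \( \omega = \sqrt{2} \) (taking \( K = 1 \); the crossing scales linearly with \( K \)):

```python
import math

# Verify Example 8.1: the Nyquist plot of L(s) = K/(s(s+1)(s+2)) crosses
# the negative real axis at -K/6 (computed here for K = 1).
w = math.sqrt(2.0)
Ljw = 1.0 / (1j * w * (1j * w + 1) * (1j * w + 2))
print(round(Ljw.real, 6), abs(Ljw.imag) < 1e-12)   # -0.166667 True
```

The real part is exactly \( -1/6 \) and the imaginary part vanishes, so for \( K = 6 \) the plot passes exactly through \( -1 \).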
8.2 Stability Margins
Definition — Gain Margin and Phase Margin.
Let \( \omega_{pc} \) be the phase crossover frequency: the frequency where \( \angle L(j\omega_{pc}) = -180° \). The gain margin is:
\[ GM = \frac{1}{|L(j\omega_{pc})|} \]equivalently, \( GM_{\text{dB}} = -20\log_{10}|L(j\omega_{pc})| \) dB. The gain margin is the factor by which the loop gain can be increased before the system becomes unstable.
Let \( \omega_{gc} \) be the gain crossover frequency: the frequency where \( |L(j\omega_{gc})| = 1 \) (0 dB). The phase margin is:
\[ PM = 180° + \angle L(j\omega_{gc}) \]The phase margin is the amount of additional phase lag that can be tolerated before instability.
Typical design targets: \( GM \geq 6 \) dB, \( PM \geq 45° \).
Why Phase Margin Measures Robustness.
Suppose the true loop transfer function is \( L'(s) = L(s) e^{-j\phi} \) due to some unmodelled phase lag \( \phi \) (e.g., from actuator delay or unmodelled dynamics). The Nyquist plot of \( L' \) is the Nyquist plot of \( L \) rotated clockwise by \( \phi ° \). For stability, the rotated plot must still not encircle \( -1 \). The critical phase lag that causes the gain-crossover-frequency point to land exactly on the \( -1 \) point is the phase margin. Thus, \( PM \) is the maximum unmodelled phase lag tolerable before the closed-loop becomes unstable, directly measuring robustness to model uncertainty.
Theorem — Bode Stability Criterion (Informal). For minimum-phase systems (no open-loop poles or zeros in the RHP), the closed-loop system with unity feedback is stable if and only if the open-loop Bode magnitude plot crosses 0 dB (gain crossover) at a frequency where the phase is greater than \( -180° \). Equivalently: at the gain crossover frequency, the phase margin must be positive.
Example 8.2 — Computing Gain and Phase Margin from Bode Plot.
For \( L(s) = \frac{10}{s(1+0.1s)(1+0.05s)} \), compute the gain and phase margins.
Gain crossover: \( |L(j\omega_{gc})| = 1 \). From the Bode asymptotes, \( |L(j\omega)| \approx 10/\omega \) for moderate \( \omega \), giving \( \omega_{gc} \approx 10 \) rad/s.
At \( \omega_{gc} = 10 \): \( \angle L = -90° - \arctan(0.1 \cdot 10) - \arctan(0.05 \cdot 10) = -90° - \arctan(1) - \arctan(0.5) = -90° - 45° - 26.6° = -161.6° \).
Phase margin (asymptotic estimate): \( PM = 180° - 161.6° = 18.4° \). (Solving \( |L(j\omega)| = 1 \) exactly gives \( \omega_{gc} \approx 7.5 \) rad/s and \( PM \approx 33° \); the corner at 10 rad/s makes the asymptotic estimate pessimistic.) Either way, the phase margin falls short of the 45° design target — a poorly designed loop.
Phase crossover: \( \angle L(j\omega_{pc}) = -180° \Rightarrow \arctan(0.1\omega_{pc}) + \arctan(0.05\omega_{pc}) = 90° \). Using \( \arctan a + \arctan b = 90° \Leftrightarrow ab = 1 \): \( (0.1\omega_{pc})(0.05\omega_{pc}) = 1 \Rightarrow \omega_{pc} = \sqrt{200} \approx 14.1 \) rad/s.
\( |L(j14.1)| = 10/[14.1\cdot\sqrt{1+2}\cdot\sqrt{1+0.5}] = 10/30 \approx 0.33 \).
Gain margin: \( GM = 1/0.33 \approx 3 \) (about \( 9.5 \) dB). This meets the 6 dB target.
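Both margins can be obtained numerically by sweeping a frequency grid. The sketch below finds the exact gain crossover near 7.5 rad/s (the asymptotic hand estimate above uses 10 rad/s) and the phase crossover at \( \sqrt{200} \approx 14.1 \) rad/s:

```python
import numpy as np

# Numerically compute the margins of L(s) = 10/(s(1+0.1s)(1+0.05s)) by
# sweeping a frequency grid (a rough sketch; control toolboxes do this directly).
def L(w):
    s = 1j * w
    return 10.0 / (s * (1 + 0.1 * s) * (1 + 0.05 * s))

w = np.logspace(-1, 3, 400000)
mag = np.abs(L(w))
phase_deg = np.degrees(np.unwrap(np.angle(L(w))))

i_gc = np.argmin(np.abs(mag - 1.0))          # gain crossover: |L| = 1
PM = 180.0 + phase_deg[i_gc]
i_pc = np.argmin(np.abs(phase_deg + 180.0))  # phase crossover: angle = -180 deg
GM = 1.0 / mag[i_pc]

print(round(w[i_gc], 1), round(PM, 1))   # exact crossover ~7.5 rad/s, PM ~ 33 deg
print(round(w[i_pc], 1), round(GM, 2))   # ~14.1 rad/s, GM ~ 3.0 (about 9.5 dB)
```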
Chapter 9: Lead-Lag Compensation and Loop Shaping
9.1 The Loop-Shaping Philosophy
Loop shaping is a systematic design methodology that works directly on the open-loop frequency response \( L(j\omega) \) to achieve desired closed-loop properties. The key insight is that closed-loop performance is encoded in the open-loop:
- Tracking bandwidth is related to the gain crossover frequency \( \omega_{gc} \): a higher \( \omega_{gc} \) means the loop maintains high gain over a wider frequency band, enabling tracking of faster references and rejection of disturbances over a wider range of frequencies.
- Stability margins are determined by the phase and gain of \( L \) near \( \omega_{gc} \).
- High-frequency noise rejection requires \( |L(j\omega)| \) to be small for large \( \omega \) (low gain at high frequencies).
- Low-frequency disturbance rejection and steady-state accuracy require \( |L(j\omega)| \) to be large for small \( \omega \).
These requirements pull in opposing directions: we want the loop gain to be large at low frequencies and small at high frequencies, with a clean transition near the bandwidth. The ideal loop shape is approximately:
\[ |L(j\omega)| \approx \frac{\omega_{gc}}{\omega} \](a single integrator slope of \( -20 \) dB/decade) over a range around \( \omega_{gc} \).
9.2 Performance Specifications in the Frequency Domain
Definition — Bandwidth. The closed-loop bandwidth \( \omega_B \) is the frequency at which the closed-loop magnitude response \( |T(j\omega)| \) falls to \( -3 \) dB (approximately \( 1/\sqrt{2} \approx 0.707 \)) of its DC value. Higher bandwidth means faster step response: approximately \( T_r \approx 1.8/\omega_B \).
The bandwidth is closely related to (though not identical to) the gain crossover frequency \( \omega_{gc} \). For a well-designed loop with adequate phase margin, \( \omega_B \approx \omega_{gc} \).
9.3 Lead Compensator Design
A lead compensator adds positive phase (phase advance) near the gain crossover frequency, improving phase margin and allowing for higher gain crossover frequency (faster response).
Definition — Lead Compensator. A lead compensator has the form:
\[ C_{\text{lead}}(s) = K_c \frac{s + z}{s + p}, \quad 0 < z < p \]equivalently written as \( C_{\text{lead}}(s) = K_c \alpha \frac{1 + s/z}{1 + s/p} \) where \( \alpha = z/p < 1 \). The zero is at \( s = -z \) and the pole at \( s = -p \), with \( p > z > 0 \).
The phase contributed at frequency \( \omega \) is:
\[ \phi(\omega) = \arctan(\omega/z) - \arctan(\omega/p) > 0 \]Maximum phase lead: \( \phi_{\text{max}} = \arcsin\!\left(\frac{1-\alpha}{1+\alpha}\right) \) occurs at the geometric mean frequency \( \omega_{\text{max}} = \sqrt{zp} \).
9.3.1 Lead Compensator Design Procedure
- Determine the gain crossover frequency \( \omega_{gc,0} \) and phase margin \( PM_0 \) of the uncompensated loop \( L_0(s) = K P(s) \).
- Determine the additional phase needed: \( \phi_{\text{need}} = PM_{\text{desired}} - PM_0 + \epsilon \) (add 5–10° to account for the phase lost from the shift in \( \omega_{gc} \)).
- Compute \( \alpha \) from: \( \sin\phi_{\text{need}} = (1-\alpha)/(1+\alpha) \Rightarrow \alpha = (1 - \sin\phi_{\text{need}})/(1 + \sin\phi_{\text{need}}) \).
- Place \( \omega_{\text{max}} \) at the new desired gain crossover frequency \( \omega_{gc}^* \). At \( \omega_{\text{max}} \), the normalised lead \( (1+s/z)/(1+s/p) \) contributes a magnitude boost of \( 1/\sqrt{\alpha} \), which is accounted for when choosing \( K_c \) in the final step.
- Set \( z = \omega_{gc}^*\sqrt{\alpha} \) and \( p = \omega_{gc}^*/\sqrt{\alpha} \) (so that \( \sqrt{zp} = \omega_{gc}^* \) and \( p = z/\alpha \)).
- Set the compensator gain \( K_c \) to place \( |C_{\text{lead}}(j\omega_{gc}^*)P(j\omega_{gc}^*)| = 1 \).
Example 9.1 — Lead Compensator Design.
Plant: \( P(s) = \frac{1}{s(s+1)} \). Specifications: Phase margin \( \geq 50° \), gain crossover frequency \( \omega_{gc} \approx 2 \) rad/s.
Step 1: Uncompensated loop with \( K = 1 \): \( L_0(j\omega) = 1/(j\omega(j\omega+1)) \). The uncompensated gain crossover (where \( \omega\sqrt{\omega^2+1} = 1 \)) is at \( \omega_{gc,0} \approx 0.79 \) rad/s, with \( PM_0 = 90° - \arctan(0.79) \approx 52° \). The phase margin is adequate, but the crossover is far below the 2 rad/s target, so we redesign the loop around \( \omega_{gc} = 2 \) rad/s.
Step 2: At \( \omega = 2 \): \( |L_0(j2)| = 1/(2\sqrt{5}) \approx 0.224 \) and \( \angle L_0(j2) = -90° - \arctan(2) = -90° - 63.4° = -153.4° \). Current \( PM = 180° - 153.4° = 26.6° \). Need: \( \phi_{\text{need}} = 50° - 26.6° + 5° = 28.4° \).
Step 3: \( \sin(28.4°) \approx 0.476 \), so \( \alpha = (1-0.476)/(1+0.476) \approx 0.355 \).
Step 4: Place \( \omega_{\text{max}} = 2 \) rad/s. \( z = \omega_{\text{max}}\sqrt{\alpha} = 2\sqrt{0.355} \approx 1.19 \), \( p = z/\alpha \approx 3.35 \).
Step 5: At \( \omega = 2 \), lead adds magnitude \( 1/\sqrt{\alpha} = 1/\sqrt{0.355} \approx 1.68 \). To keep \( |L(\omega_{gc})| = 1 \): need overall gain \( K_c \) so that \( K_c \cdot 0.224 \cdot 1.68 = 1 \Rightarrow K_c \approx 2.66 \).
Result: \( C_{\text{lead}}(s) = 2.66 \cdot \frac{1+s/1.19}{1+s/3.35} = 7.49\,\frac{s+1.19}{s+3.35} \). The compensated phase margin is approximately \( 26.6° + 28.4° \approx 55° > 50° \).
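The design of Steps 1–5 can be checked by evaluating the compensated loop at the target crossover — here with the lead written in the unity-DC-gain form \( 2.66\,(1+s/1.19)/(1+s/3.35) \), equivalently \( 7.49\,(s+1.19)/(s+3.35) \):

```python
import numpy as np

# Check the lead design of Example 9.1: K_c = 2.66 on the normalised lead
# (1 + s/1.19)/(1 + s/3.35), with plant P(s) = 1/(s(s+1)).
def L(w):
    s = 1j * w
    return 2.66 * (1 + s / 1.19) / (1 + s / 3.35) / (s * (s + 1))

mag = abs(L(2.0))                            # loop gain at the target crossover
PM = 180.0 + np.degrees(np.angle(L(2.0)))    # phase margin at that frequency
print(round(mag, 2), round(PM, 1))           # 1.0 55.0: crossover and PM as designed
```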
9.4 Lag Compensator Design
A lag compensator increases the DC gain without significantly affecting the phase near the crossover frequency, thereby reducing steady-state error.
Definition — Lag Compensator. A lag compensator has the form:
\[ C_{\text{lag}}(s) = K_c \frac{s + z}{s + p}, \quad 0 < p < z \]equivalently \( C_{\text{lag}}(s) = K_c \beta \frac{1 + s/z}{1 + s/p} \) where \( \beta = z/p > 1 \). At low frequencies the gain is \( K_c \beta > K_c \), while at high frequencies the gain approaches \( K_c \). The additional low-frequency gain reduces steady-state error by a factor of \( \beta \).
9.4.1 Lag Compensator Design Procedure
The lag compensator is placed so that its zero and pole are both much lower than the gain crossover frequency (typically a decade or more lower), so they contribute negligible phase near \( \omega_{gc} \):
- Design for the desired phase margin using only gain (proportional control): choose \( K \) so that \( PM = PM_{\text{desired}} \).
- If the steady-state error specification is not met, set \( \beta \) equal to the factor by which the low-frequency gain must be increased.
- Place the lag zero at \( z = \omega_{gc}/10 \) and the lag pole at \( p = z/\beta \).
Example 9.2 — Lag Compensator for Steady-State Error Reduction.
Plant: \( P(s) = 5/(s+1)^3 \). The uncompensated gain crossover is at \( \omega_{gc} \approx 1.4 \) rad/s with \( PM \approx 17° \). Reducing the gain to \( K = 0.5 \) lowers the crossover to \( \omega_{gc} \approx 0.9 \) rad/s and raises the phase margin to roughly \( 52° \). But the plant is type 0, so the position constant \( K_p = \lim_{s\to 0} KP(s) = 2.5 \) is small and the steady-state step error \( 1/(1+K_p) \approx 0.29 \) is unacceptably large.
To reduce the position error by a factor of 10 (\( \beta = 10 \)), add a lag compensator with \( z = \omega_{gc}/10 = 0.1 \), \( p = z/10 = 0.01 \). The lag compensator is:
\[ C_{\text{lag}}(s) = \frac{s + 0.1}{s + 0.01} \]This multiplies the low-frequency gain by \( 10 \) while contributing only about \( -6° \) of phase near \( \omega_{gc} \), so \( PM \approx 46° \) — slightly reduced but still acceptable.
9.5 Lead-Lag Compensator
In many practical problems, neither lead nor lag alone is sufficient. A lead-lag compensator combines both:
\[ C_{\text{lead-lag}}(s) = K_c \frac{(s+z_1)(s+z_2)}{(s+p_1)(s+p_2)} \]with \( p_1 < z_1 \) (lag section) and \( z_2 < p_2 \) (lead section). This provides increased low-frequency gain (lag) and increased phase margin (lead) simultaneously.
9.6 Trade-offs Between Bandwidth and Stability Margins
A fundamental limitation of feedback control is captured in the waterbed effect (Bode’s sensitivity integral): for a minimum-phase system with at least two more poles than zeros:
\[ \int_0^\infty \ln|S(j\omega)|\,d\omega = 0 \]where \( S(j\omega) = 1/(1 + L(j\omega)) \) is the sensitivity. This integral constraint means that if the sensitivity is reduced (loop gain increased) over some frequency range, it must increase over another range. Pushing the bandwidth up to track faster references necessarily increases the sensitivity peak at higher frequencies, making the system less robust to high-frequency model uncertainty and noise.
Remark — The Bandwidth-Stability-Robustness Triangle. There are three competing objectives in classical loop shaping: (1) high bandwidth for fast response, (2) high stability margins for robustness, and (3) high loop gain for disturbance rejection and low steady-state error. These three cannot all be maximised simultaneously:
- Increasing gain crossover frequency (bandwidth) while maintaining phase margin is the goal of lead compensation, but plant phase lag typically increases at higher frequencies, making this increasingly difficult.
- Adding integrators (type increase) improves DC disturbance rejection but adds phase lag, reducing phase margin.
- The Bode sensitivity integral forces trade-offs between disturbance rejection at one frequency and sensitivity peaks at another.
Recognising these fundamental limits is crucial for realistic specification setting.
Chapter 10: Beyond Classical Control
10.1 Limitations of Classical Control
Classical feedback control — the material of Chapters 1–9 — is centred on single-input single-output (SISO) linear time-invariant systems with a single control loop. It provides elegant graphical tools (root locus, Bode, Nyquist) and effective controller structures (PID, lead-lag). However, it has inherent limitations:
- SISO only: Real systems often have multiple inputs and multiple outputs (MIMO). Classical methods do not extend gracefully to multi-loop systems due to cross-coupling.
- LTI assumption: Linearisation is valid only locally, near the operating point. Highly nonlinear systems or systems operating over large ranges require nonlinear control.
- Manual tuning: Classical design requires significant engineering judgment and iteration. High-dimensional systems make this infeasible.
- No formal performance optimisation: Classical design meets specifications (phase margin, bandwidth) but does not minimise any cost function.
10.2 State-Space Methods and Modern Control
Modern control theory, developed from the 1960s onward, works directly in the state-space representation and handles MIMO systems systematically.
10.2.1 State Feedback and Pole Placement
With full state access, the control law \( u = -K\mathbf{x} \) (state feedback) places the closed-loop poles at arbitrary locations (provided the system is controllable) by choosing the gain matrix \( K \in \mathbb{R}^{m\times n} \) for \( m \) inputs and \( n \) states (\( 1 \times n \) in the single-input case). This generalises the pole-placement designs of Chapters 6 and 7 to MIMO systems.
10.2.2 Linear Quadratic Regulator (LQR)
Rather than prescribing pole locations, LQR minimises an infinite-horizon quadratic cost:
\[ J = \int_0^\infty \left[\mathbf{x}^T Q\,\mathbf{x} + u^T R\,u\right] dt \]where \( Q \succeq 0 \) penalises state deviations and \( R > 0 \) penalises control effort. The optimal control law is linear: \( u^*(t) = -K_{\text{LQR}}\mathbf{x}(t) \), where the gain matrix \( K_{\text{LQR}} = R^{-1}B^T P \) and \( P \) is the solution to the algebraic Riccati equation:
\[ PA + A^T P - PBR^{-1}B^T P + Q = 0 \]LQR provides a principled way to tune the trade-off between fast response (large \( Q \)) and small control effort (large \( R \)). Moreover, LQR-designed systems have guaranteed stability margins: gain margin \( \geq 6 \) dB and phase margin \( \geq 60° \) in the single-input case.
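A small worked LQR sketch for the double integrator, solving the Riccati equation by the classical Hamiltonian eigenvector method (in practice one would call `scipy.linalg.solve_continuous_are`); the plant and weights are our own illustrative choices:

```python
import numpy as np

# LQR for the double integrator x1' = x2, x2' = u with Q = I, R = 1,
# solving the Riccati equation via the Hamiltonian eigenvector method.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
Rinv = np.array([[1.0]])                     # R = 1, so R^{-1} = 1

H = np.block([[A, -B @ Rinv @ B.T],
              [-Q, -A.T]])                   # 4x4 Hamiltonian matrix
eigvals, eigvecs = np.linalg.eig(H)
stable = eigvecs[:, eigvals.real < 0]        # eigenvectors of the 2 stable modes
X1, X2 = stable[:2, :], stable[2:, :]
P = np.real(X2 @ np.linalg.inv(X1))          # stabilising Riccati solution
K = Rinv @ B.T @ P                           # optimal state-feedback gain

print(np.round(K, 3))                        # gain approx [1, sqrt(3)] = [1, 1.732]
residual = P @ A + A.T @ P - P @ B @ Rinv @ B.T @ P + Q
print(np.abs(residual).max() < 1e-8)         # True: Riccati equation satisfied
```

For these weights the known closed-form gain is \( K = [1, \sqrt{3}] \), which the numerical solution reproduces.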
Remark — LQR and Kalman Filters. LQR assumes perfect state knowledge. When the full state is not directly measurable, one pairs LQR with a Kalman filter (optimal state estimator), which estimates the state from noisy measurements. The combined controller (observer + state feedback) is called the Linear Quadratic Gaussian (LQG) regulator. Interestingly, LQG does not inherit the robustness guarantees of LQR — this motivated the development of \( H_\infty \) control in the 1980s.
10.3 Model Predictive Control (MPC)
Model predictive control (MPC), also called receding-horizon control, is now the dominant advanced control strategy in chemical process control and is rapidly growing in automotive and aerospace applications.
At each time step \( t \), MPC:
- Measures (or estimates) the current state \( \mathbf{x}(t) \).
- Solves a finite-horizon optimal control problem over a prediction horizon of \( N \) steps, typically with a quadratic cost of the form:
\[ \min_{u_0, \ldots, u_{N-1}} \; \sum_{k=0}^{N-1} \left[\mathbf{x}_k^T Q\,\mathbf{x}_k + u_k^T R\,u_k\right] + \mathbf{x}_N^T P_f\,\mathbf{x}_N \]subject to the state dynamics, input constraints (\( u_{\min} \leq u \leq u_{\max} \)), and state constraints.
- Applies only the first control input of the optimal trajectory.
- Repeats at the next time step (receding horizon).
The key advantage of MPC is that it handles constraints explicitly and naturally — something classical control cannot do. Its main limitation is computational: the optimisation must be solved in real time at each sample period.
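The receding-horizon structure can be illustrated with a toy problem of our own construction: a scalar system with a hard input bound, where the box-constrained quadratic program is solved by projected gradient descent (a real implementation would hand this to a QP solver such as OSQP):

```python
import numpy as np

# Toy receding-horizon MPC for the scalar system x+ = x + u with |u| <= 0.5,
# solving the condensed box-constrained QP by projected gradient descent.
N, rho, u_max = 10, 0.1, 0.5      # horizon, input penalty, input bound

def mpc_step(x0, iters=2000, lr=0.01):
    u = np.zeros(N)
    for _ in range(iters):
        xs = np.empty(N + 1)
        xs[0] = x0
        for k in range(N):                    # predict x_1 .. x_N
            xs[k + 1] = xs[k] + u[k]
        # dJ/du_k = 2*rho*u_k + sum_{j>k} 2*x_j   (since dx_j/du_k = 1 here)
        grad = 2 * rho * u + np.array([2 * xs[k + 1:].sum() for k in range(N)])
        u = np.clip(u - lr * grad, -u_max, u_max)   # project onto the input box
    return u[0]                               # apply only the first input

x = 5.0
for t in range(20):                           # receding-horizon loop
    x = x + mpc_step(x)
print(abs(x) < 0.1)                           # True: regulated near the origin
```

Note how the controller saturates at \( u = -0.5 \) while the state is far from the origin — exactly the constraint-handling behaviour that classical linear controllers cannot express.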
10.4 Nonlinear Control
10.4.1 Feedback Linearisation
For certain classes of nonlinear systems, it is possible to find a change of coordinates and a control law that exactly cancels the nonlinearity, making the closed-loop system behave like a linear system. This approach, called feedback linearisation (or exact linearisation), is conceptually distinct from the approximate linearisation of Chapter 2: it works exactly rather than approximately.
For the single-input nonlinear system \( \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + g(\mathbf{x})u \), if a change of variables \( \mathbf{z} = \Phi(\mathbf{x}) \) exists such that the system in \( z \)-coordinates is linear, then a linear control law in \( z \)-coordinates gives exact tracking of arbitrary references. The difficulty is that the required change of variables does not always exist, and even when it does, cancellation of nonlinearities requires exact model knowledge.
10.4.2 Lyapunov-Based Control
Lyapunov stability theory provides a general framework for analysing and designing controllers for nonlinear systems. A function \( V(\mathbf{x}) \) is a Lyapunov function for the system \( \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) \) if:
- \( V(\mathbf{0}) = 0 \) and \( V(\mathbf{x}) > 0 \) for \( \mathbf{x} \neq \mathbf{0} \) (positive definite).
- \( \dot{V}(\mathbf{x}) = \nabla V \cdot \mathbf{f}(\mathbf{x}) < 0 \) for \( \mathbf{x} \neq \mathbf{0} \) (negative definite derivative along trajectories).
The existence of such a \( V \) guarantees asymptotic stability of the origin. For control design, one can choose a desired \( V \) and design \( u \) to make \( \dot{V} < 0 \) — this is called control Lyapunov function (CLF) design.
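A numerical sanity check makes the CLF recipe concrete. The example below is hypothetical: for the scalar plant \( \dot{x} = x^3 + u \) (unstable without control), the candidate \( V(x) = x^2/2 \) gives \( \dot{V} = x(x^3 + u) \), so choosing \( u = -x^3 - x \) yields \( \dot{V} = -x^2 < 0 \) for \( x \neq 0 \).

```python
import numpy as np

# Hypothetical scalar plant dx/dt = x^3 + u with CLF candidate V = x^2/2.
# The choice u = -x^3 - x makes Vdot = -x^2 < 0 for x != 0.
def f(x, u): return x**3 + u
def u_clf(x): return -x**3 - x
def V(x): return 0.5 * x**2

dt, x = 1e-3, 1.5
vals = [V(x)]
for _ in range(5000):                  # forward-Euler simulation
    x += dt * f(x, u_clf(x))
    vals.append(V(x))

vals = np.array(vals)
print(bool(np.all(np.diff(vals) <= 0)))  # prints True: V never increases
```

Here \( V \) decreases monotonically along the simulated trajectory, which is exactly what the Lyapunov conditions demand.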
10.5 Data-Driven and Learning-Based Control
Classical control design requires a mathematical model of the plant. In many modern applications — robotics, autonomous systems, biological systems — deriving accurate first-principles models is difficult or impossible. This has motivated data-driven approaches.
10.5.1 System Identification
System identification uses input-output data to fit a parametric model (e.g., a transfer function of known order, or a state-space model) to the observed behaviour. The identified model can then be used as the plant model for classical or modern control design. Methods include least-squares, prediction error methods (PEM), and subspace identification.
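The least-squares case is simple enough to sketch directly. The example below identifies a hypothetical first-order ARX model \( y[k] = a\,y[k-1] + b\,u[k-1] \) from noise-free input-output data, where the estimate reduces to an ordinary least-squares fit over stacked regressors.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" first-order discrete plant (hypothetical): a = 0.9, b = 0.5
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(200)          # persistently exciting input
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k-1] + b_true * u[k-1]

# Least-squares ARX(1,1) fit: regressor matrix of [y[k-1], u[k-1]]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 3))             # recovers [0.9, 0.5]
```

With measurement noise the estimate would only approximate the true parameters, and the more elaborate PEM and subspace methods mentioned above exist precisely to handle noise and higher-order structure.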
10.5.2 Reinforcement Learning for Control
Reinforcement learning (RL) treats control as a sequential decision problem: an agent interacts with an environment, receives scalar rewards, and learns a policy mapping observations to actions that maximises cumulative reward. In continuous control problems, RL algorithms such as Proximal Policy Optimisation (PPO), Soft Actor-Critic (SAC), and Twin Delayed DDPG (TD3) have demonstrated impressive results on robotic manipulation, locomotion, and game-playing.
The connection to control theory is deep: the Bellman optimality equation in RL corresponds to the Hamilton-Jacobi-Bellman (HJB) equation in optimal control theory, and the LQR solution is the exact solution to the RL problem when the system is linear and the reward is quadratic.
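The Bellman/LQR connection can be made concrete in a few lines. Below, the discrete-time Riccati recursion is run as value iteration on a hypothetical scalar unstable plant: the quadratic value function \( V_k(x) = x^T P_k x \) is improved by repeated Bellman backups until it converges to the LQR solution.

```python
import numpy as np

# Hypothetical scalar plant x[k+1] = A x[k] + B u[k], unstable (|A| > 1).
A = np.array([[1.1]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])

# Value iteration: each pass is one Bellman backup on V(x) = x' P x.
P = Q.copy()
for _ in range(200):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy policy
    P = Q + A.T @ P @ (A - B @ K)                      # value update
print(np.round(P, 3), np.round(K, 3))
```

At convergence, \( P \) is the fixed point of the discrete algebraic Riccati equation and \( K \) is the optimal LQR gain; in RL terms, value iteration has found the optimal value function and its greedy policy.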
Challenges of RL for control include:
- Sample efficiency: Learning requires many interactions with the plant, which is expensive or unsafe for physical systems.
- Safety: RL policies may violate constraints or explore dangerous regions during training.
- Interpretability: Learned neural network policies are difficult to analyse or certify for safety-critical applications.
- Generalisation: Policies trained in simulation may not transfer to the real system (the “sim-to-real gap”).
Current research actively addresses these challenges through safe RL, model-based RL (which learns a model of the environment and plans within it — analogous to MPC), and constrained policy optimisation.
Remark — The Role of Classical Control in the Modern Landscape. Despite the excitement surrounding data-driven and learning-based methods, classical control remains indispensable. The Bode, Nyquist, and root-locus tools provide unmatched insight into loop dynamics and performance trade-offs. PID controllers — tuned by hand or automatically — handle the vast majority of industrial control tasks efficiently and reliably. Modern advanced controllers are almost always layered on top of classical inner loops: for example, an MPC outer loop might generate reference trajectories that are tracked by inner PID loops with classical stability guarantees. A deep understanding of classical control is therefore essential for any control engineer, regardless of whether they ultimately deploy classical, optimal, or learning-based controllers.
Summary of Key Formulas
Closed-loop TF: \( T(s) = \frac{C(s)P(s)}{1 + C(s)P(s)} \)
Sensitivity: \( S(s) = \frac{1}{1 + L(s)} \)
Stability: All poles of \( T(s) \) in open LHP \( \Leftrightarrow \) all roots of \( 1 + L(s) = 0 \) in open LHP.
Routh criterion: First column of Routh array all positive \( \Leftrightarrow \) stable.
Steady-state errors (unity feedback):
\[ e_{\text{ss, step}} = \frac{1}{1 + K_p}, \quad e_{\text{ss, ramp}} = \frac{1}{K_v}, \quad e_{\text{ss, parabola}} = \frac{1}{K_a} \]
Second-order specs:
\[ \%OS = 100\,e^{-\pi\zeta/\sqrt{1-\zeta^2}}, \quad T_p = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}}, \quad T_s \approx \frac{4}{\zeta\omega_n} \]
Nyquist criterion: \( Z_{\text{RHP}} = P_{\text{RHP}} - N \), where \( N \) counts counter-clockwise encirclements of \( -1 \) (closed-loop stable iff \( Z_{\text{RHP}} = 0 \)).
Phase margin: \( PM = 180° + \angle L(j\omega_{gc}) \)
Gain margin: \( GM = 1/|L(j\omega_{pc})| \)
Lead compensator max phase: \( \phi_{\text{max}} = \arcsin\frac{1-\alpha}{1+\alpha} \) at \( \omega_{\text{max}} = \sqrt{zp} \)
LQR Riccati equation: \( PA + A^TP - PBR^{-1}B^TP + Q = 0 \)
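The Riccati equation above can be checked numerically. The sketch below (hypothetical double-integrator plant) solves the continuous-time algebraic Riccati equation with SciPy and verifies that the solution satisfies the equation term by term.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double integrator: x1 = position, x2 = velocity
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)

# Residual of  P A + A' P - P B R^{-1} B' P + Q = 0
residual = P @ A + A.T @ P - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
print(np.allclose(residual, np.zeros((2, 2))))  # prints True

K = np.linalg.inv(R) @ B.T @ P  # optimal state-feedback gain u = -K x
```

The resulting gain \( K \) places the closed-loop eigenvalues of \( A - BK \) in the open left half-plane, tying the optimal-control machinery back to the stability criteria summarised above.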