CHE 522: Advanced Process Dynamics and Control

Estimated study time: 46 minutes

Sources and References

  • Seborg, Edgar, Mellichamp, and Doyle, Process Dynamics and Control
  • Ogunnaike and Ray, Process Dynamics, Modeling, and Control
  • Marlin, Process Control: Designing Processes and Control Systems for Dynamic Performance
  • Skogestad and Postlethwaite, Multivariable Feedback Control: Analysis and Design
  • Astrom and Wittenmark, Computer-Controlled Systems: Theory and Design
  • Astrom and Hagglund, Advanced PID Control
  • Rawlings, Mayne, and Diehl, Model Predictive Control: Theory, Computation, and Design
  • Khalil, Nonlinear Systems
  • MIT OpenCourseWare 10.450, Process Dynamics, Operations, and Control
  • Stanford CHEMENG 170, Process Dynamics and Control

CHE 522 builds upon introductory process control by extending the toolkit from single-loop, continuous-time, single-input single-output (SISO) regulation to the modern arsenal needed for multivariable, sampled-data, constrained, and model-based control of chemical plants. The course threads three recurring questions through every topic. First, how do we represent a chemical process mathematically in a form suitable for controller synthesis? Second, how do we analyse what that representation tells us about achievable performance, stability margins, and interaction between loops? Third, how do we implement the resulting controller on a digital computer that sees the plant only at discrete sampling instants, through imperfect sensors, and subject to actuator limits?

The notes below follow a path from transfer-function review, through state-space modelling, to discrete-time and multivariable methods, ending with model predictive control, state estimation, and nonlinear techniques. Each chapter presents the governing mathematics, the reasoning behind the technique, and a chemical-engineering example that exposes both the power and the pitfalls.

Chapter 1: Transfer Functions and the Laplace Domain Revisited

1.1 Linearisation around a steady state

Chemical processes are almost always nonlinear. Reaction rates depend on temperature through an Arrhenius law, vapour pressures depend on temperature through the Antoine equation, and mass balances over non-isothermal units couple species and energy equations. Yet the bedrock of classical control is the linear time-invariant (LTI) model. The reconciliation is linearisation: expand the nonlinear ODE \( \dot{x} = f(x,u) \) about a nominal operating point \( (x_s, u_s) \) satisfying \( f(x_s, u_s) = 0 \) and retain only first-order terms. Writing \( \tilde{x} = x - x_s \) and \( \tilde{u} = u - u_s \),

\[ \dot{\tilde{x}} = A \tilde{x} + B \tilde{u}, \quad A = \left. \frac{\partial f}{\partial x} \right|_s, \quad B = \left. \frac{\partial f}{\partial u} \right|_s. \]

The Jacobians \( A \) and \( B \) capture the local slope of the vector field. Validity is limited to small excursions, which is often enough because a well-tuned regulator keeps the process near its setpoint. When disturbances push the plant far from the operating point, we revisit linearisation around a new anchor or switch to gain scheduling (Chapter 17).
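The expansion above can be sketched numerically: a finite-difference Jacobian of the nonlinear right-hand side at an operating point gives the local \( A \) matrix. The reactor parameters below are purely illustrative, not taken from any specific unit, and the chosen point is an assumed operating point rather than a solved steady state.

```python
import numpy as np

def jacobian_fd(f, x0, eps=1e-6):
    """Central finite-difference Jacobian of f at x0."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    m = len(f(x0))
    J = np.zeros((m, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return J

# Hypothetical 2-state CSTR-like model, x = (C_A, T); all numbers illustrative.
F_V, k0, E_R = 1.0, 7.2e10, 8750.0        # flow/volume, pre-exponential, E/R
CAin, Tin, Tj = 1.0, 350.0, 300.0
dH_rc, UA_rc = 200.0, 1.0                  # lumped (-dH)/(rho cp), UA/(rho cp V)

def f(x):
    CA, T = x
    r = k0 * np.exp(-E_R / T) * CA         # Arrhenius reaction rate
    return np.array([F_V * (CAin - CA) - r,
                     F_V * (Tin - T) + dH_rc * r - UA_rc * (T - Tj)])

xs = np.array([0.5, 350.0])                # assumed anchor point
A = jacobian_fd(f, xs)                     # local A matrix
print(np.linalg.eigvals(A))                # eigenvalues reveal local stability
```

The same routine applied with respect to the inputs yields \( B \); checking the eigenvalues of \( A \) at several anchors shows how far the linear model can be trusted.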

1.2 From ODEs to transfer functions

Taking the Laplace transform of the linearised state equation with zero initial deviations and an output equation \( y = C x + D u \) gives

\[ Y(s) = \left[ C \left( sI - A \right)^{-1} B + D \right] U(s) = G(s) U(s). \]

The matrix \( G(s) \) is the transfer function matrix. For SISO systems it reduces to a rational function \( G(s) = N(s)/D(s) \) whose poles are the eigenvalues of \( A \) (the roots of \( D(s) = \det(sI - A) \)) and whose zeros are the roots of \( N(s) \). Poles govern the natural response modes; zeros govern how inputs couple to those modes and can block transmission of particular signals entirely. Right-half-plane (RHP) zeros, which appear for instance in boiler drum level control and in some series reactors, impose a fundamental performance limit by causing inverse response.
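A minimal sketch of the formula \( G(s) = C(sI - A)^{-1}B + D \): the system below is two illustrative first-order lags in series, so its poles are the eigenvalues of \( A \) and its steady-state gain is \( G(0) \).

```python
import numpy as np

# Two first-order lags in series: G(s) = 1 / ((s + 1)(s + 0.5)), illustrative.
A = np.array([[-1.0, 0.0], [1.0, -0.5]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

def G(s):
    """Evaluate C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B) + D).item()

poles = np.linalg.eigvals(A)   # poles of G(s): -1 and -0.5
print(poles)
print(G(0.0))                  # steady-state gain K = G(0) = 1/0.5 = 2
```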

1.3 Standard low-order models

Three canonical forms dominate identification practice:

Model | Transfer function | Typical process
First-order plus dead time (FOPDT) | \( \frac{K e^{-\theta s}}{\tau s + 1} \) | Liquid surge tank, well-mixed heater
Second-order overdamped | \( \frac{K}{(\tau_1 s + 1)(\tau_2 s + 1)} \) | Jacketed CSTR with thermal lag
Integrator plus dead time | \( \frac{K e^{-\theta s}}{s} \) | Level, inventory, pressure in closed vessels

The FOPDT approximation is the workhorse of PID tuning because its three parameters \( (K, \tau, \theta) \) can be fit from a single open-loop step test and plug directly into every standard tuning rule.
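One common way to extract \( (K, \tau, \theta) \) from a step test is the two-point method: read the times at which the response reaches 28.3% and 63.2% of its final value, then set \( \tau = 1.5(t_2 - t_1) \) and \( \theta = t_2 - \tau \). The sketch below fits synthetic data generated from a known FOPDT response.

```python
import numpy as np

def fopdt_fit(t, y, du):
    """Two-point FOPDT fit from open-loop step-response data.
    t, y: time and output-deviation samples; du: size of the input step.
    Uses the 28.3% / 63.2% method: tau = 1.5 (t2 - t1), theta = t2 - tau."""
    K = y[-1] / du                         # steady-state gain from final value
    t1 = np.interp(0.283 * y[-1], y, t)    # time to reach 28.3% of final value
    t2 = np.interp(0.632 * y[-1], y, t)    # time to reach 63.2%
    tau = 1.5 * (t2 - t1)
    theta = max(t2 - tau, 0.0)
    return K, tau, theta

# Synthetic step response of an FOPDT plant with K=2, tau=5, theta=1
t = np.linspace(0.0, 60.0, 6001)
y = np.where(t < 1.0, 0.0, 2.0 * (1.0 - np.exp(-(t - 1.0) / 5.0)))
print(fopdt_fit(t, y, du=1.0))   # recovers approximately (2, 5, 1)
```

On real plant data the response should be filtered first, and the fit checked against the raw trace before the parameters are fed to a tuning rule.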

Chapter 2: State-Space Representation of Chemical Processes

2.1 The state concept

The state of a dynamic system at time \( t_0 \) is the minimum information required, together with future inputs, to determine the future trajectory. For a continuous stirred tank reactor (CSTR) with jacket, the natural state is \( x = (C_A, T, T_j) \): one component concentration, one reactor temperature, one jacket temperature. For a binary distillation column with \( N \) stages, the state has dimension \( 2N \) (liquid compositions and stage holdups, or compositions and tray enthalpies). The state vector is not unique; any invertible coordinate transformation \( z = T x \) yields an equivalent realisation with matrices \( (T A T^{-1}, T B, C T^{-1}, D) \).

2.2 Deriving the state-space model

Consider a non-isothermal CSTR with a first-order exothermic reaction \( A \to B \). Mass and energy balances give

\[ \frac{dC_A}{dt} = \frac{F}{V}\left(C_{A,in} - C_A\right) - k_0 e^{-E/RT} C_A, \]\[ \frac{dT}{dt} = \frac{F}{V}\left(T_{in} - T\right) + \frac{\left(-\Delta H\right)}{\rho c_p} k_0 e^{-E/RT} C_A - \frac{U A}{\rho c_p V}\left(T - T_j\right). \]

Linearising around a steady state yields a \( 2 \times 2 \) Jacobian \( A \) whose sign pattern reveals positive feedback from temperature to reaction rate through the Arrhenius term. This is the classic mechanism behind CSTR multiplicity and runaway, and state-space analysis exposes it cleanly through the eigenvalues of \( A \).

2.3 Canonical forms

Three realisations recur:

  • Controllable canonical form places the \( b \) vector and \( A \) matrix in companion form, revealing which input channels affect which modes.
  • Observable canonical form is its dual, organised so that only one output row is nonzero.
  • Modal (Jordan) form diagonalises \( A \) so the decoupled scalar equations \( \dot{z}_i = \lambda_i z_i + \tilde{b}_i u \) show each mode in isolation.

The modal form is the conceptual foundation for the controllability and observability tests that follow.

Why state space, not just transfer functions? Transfer functions describe only the input-output map of a zero-initial-condition system. They hide unobservable and uncontrollable modes, cannot easily represent multivariable coupling, and lose information about internal states such as a reactor hot-spot temperature. State space retains all internal modes and scales gracefully to MIMO problems.

Chapter 3: Stability, Lyapunov Theory, and Eigenvalues

3.1 Linear stability via eigenvalues

A linear system \( \dot{x} = A x \) is asymptotically stable if and only if every eigenvalue of \( A \) has strictly negative real part. Discrete systems \( x_{k+1} = \Phi x_k \) require every eigenvalue of \( \Phi \) to lie strictly inside the unit circle. The mapping \( z = e^{sT_s} \) with sample interval \( T_s \) sends the continuous stable half-plane to the discrete unit disc and explains why fast continuous dynamics appear near \( z = 0 \) while slow ones crowd near \( z = 1 \).

3.2 Lyapunov’s direct method

For nonlinear systems eigenvalues apply only after linearisation. Lyapunov’s direct method avoids solving the ODE by searching for a scalar energy-like function \( V(x) \) with \( V(0) = 0 \), \( V(x) > 0 \) elsewhere, and \( \dot{V}(x) \leq 0 \) along trajectories. Existence of such a \( V \) certifies stability of the origin; strict negativity of \( \dot{V} \) away from the origin proves asymptotic stability. For linear systems the search reduces to solving the Lyapunov equation

\[ A^T P + P A = -Q, \quad Q \succ 0, \]

for a positive-definite \( P \). A solution exists if and only if \( A \) is Hurwitz. This algebraic test underlies robust-control derivations and, in discrete form \( \Phi^T P \Phi - P = -Q \), underpins stability certificates for sampled-data controllers.
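The Lyapunov equation is a linear equation in \( P \) and can be solved directly; a sketch using SciPy (which solves \( aX + Xa^H = q \), so we pass \( A^T \) and \( -Q \)):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0], [0.0, -3.0]])   # Hurwitz: eigenvalues -2 and -3
Q = np.eye(2)

# scipy solves a X + X a^H = q, so pass A^T and -Q to get A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(A.T @ P + P @ A, -Q))    # True: P satisfies the equation
print(np.all(np.linalg.eigvals(P) > 0))    # True: P is positive definite
```

Repeating the exercise with an unstable \( A \) yields a \( P \) that is not positive definite, which is exactly the algebraic stability test at work.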

3.3 Input-output stability

Bounded-input bounded-output (BIBO) stability asks whether every bounded \( u \) produces a bounded \( y \). For LTI systems BIBO is equivalent to all poles of \( G(s) \) lying in the open left half-plane, but only if no pole-zero cancellation masks unstable internal modes. Internal stability, guaranteed by state-space eigenvalue analysis, is therefore the stronger and more relevant notion for plant safety.

Chapter 4: Controllability and Observability

4.1 Definitions

A pair \( (A, B) \) is controllable if, for every initial state \( x_0 \) and target state \( x_f \), an input \( u(t) \) exists that drives \( x_0 \) to \( x_f \) in finite time. A pair \( (A, C) \) is observable if the initial state can be recovered uniquely from a finite record of outputs and inputs.

4.2 Algebraic tests

The controllability matrix

\[ \mathcal{C} = \left[ B \; A B \; A^2 B \; \cdots \; A^{n-1} B \right] \]

must have full row rank \( n \). The observability matrix

\[ \mathcal{O} = \left[ C^T \; A^T C^T \; \cdots \; \left(A^T\right)^{n-1} C^T \right]^T \]

must have full column rank. Loss of rank identifies unreachable or invisible modes.
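The two rank tests are a few lines of linear algebra. In the illustrative system below the input enters only the first mode, so the controllability matrix loses rank while the observability matrix does not.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^{n-1}] stacked row-wise."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])   # input reaches only the first mode
C = np.array([[1.0, 1.0]])     # output sees both modes

print(np.linalg.matrix_rank(ctrb(A, B)))   # 1: second mode is unreachable
print(np.linalg.matrix_rank(obsv(A, C)))   # 2: both modes are visible
```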

4.3 Practical significance in chemical plants

A distillation column whose only manipulated variable is reflux cannot independently control both top and bottom compositions: the column is uncontrollable with respect to a two-dimensional composition target. Adding reboiler duty restores controllability. Similarly, a reactor whose only measurement is outlet temperature may leave inlet concentration disturbances invisible, making the system unobservable from that sensor set. Controllability and observability thus guide the design of actuator and sensor layouts before any control law is written.

Structural versus numerical tests: the rank tests above are numerically fragile. In practice engineers use the Popov-Belevitch-Hautus (PBH) test, evaluated through singular values for numerical robustness, or Hankel singular values, which grade modes by how controllable and observable they are, not just whether they are.

Chapter 5: PID Control and Tuning

5.1 The PID law and its variants

The ideal PID law is

\[ u(t) = K_c \left[ e(t) + \frac{1}{\tau_I} \int_0^t e(\tau) d\tau + \tau_D \frac{d e(t)}{dt} \right], \]

with \( e = r - y \). In practice three modifications are essential: derivative filtering to tame measurement noise, derivative on measurement rather than error to avoid setpoint kicks, and output clamping with anti-windup to handle actuator limits. The resulting industrial form,

\[ u(s) = K_c \left[ b \, r(s) - y(s) \right] + \frac{K_c}{\tau_I s}\left[ r(s) - y(s) \right] - \frac{K_c \tau_D s}{\alpha \tau_D s + 1} y(s), \]

introduces setpoint weight \( b \) and a derivative filter fraction \( \alpha \).

5.2 Ziegler-Nichols and relay autotuning

Ziegler and Nichols proposed two classical rules. The open-loop rule uses the FOPDT parameters \( (K, \tau, \theta) \) of a step response:

Controller | \( K_c \) | \( \tau_I \) | \( \tau_D \)
P | \( \tau / (K \theta) \) | - | -
PI | \( 0.9 \tau / (K \theta) \) | \( 3.33 \theta \) | -
PID | \( 1.2 \tau / (K \theta) \) | \( 2 \theta \) | \( 0.5 \theta \)

The closed-loop rule pushes pure-proportional gain until sustained oscillation at gain \( K_u \) and period \( P_u \), then sets \( K_c = 0.6 K_u \), \( \tau_I = 0.5 P_u \), \( \tau_D = 0.125 P_u \). Astrom’s relay-feedback autotuner replaces the destructive sustained-oscillation test with a bounded limit-cycle obtained by substituting a relay for the controller, extracting \( K_u \) and \( P_u \) from the describing-function approximation.
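Both rule sets are pure table lookups, so a minimal sketch is just arithmetic on the identified parameters:

```python
def zn_open_loop(K, tau, theta):
    """Ziegler-Nichols open-loop (FOPDT) PID settings."""
    return {"Kc": 1.2 * tau / (K * theta), "tauI": 2.0 * theta,
            "tauD": 0.5 * theta}

def zn_closed_loop(Ku, Pu):
    """Ziegler-Nichols closed-loop (ultimate gain) PID settings."""
    return {"Kc": 0.6 * Ku, "tauI": 0.5 * Pu, "tauD": 0.125 * Pu}

print(zn_open_loop(K=2.0, tau=5.0, theta=1.0))
print(zn_closed_loop(Ku=4.0, Pu=10.0))   # {'Kc': 2.4, 'tauI': 5.0, 'tauD': 1.25}
```

Both rules are deliberately aggressive (quarter-amplitude damping) and are normally detuned in practice.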

5.3 Internal model control tuning

Internal model control (IMC) assumes a model \( \tilde{G}(s) \) and designs the controller so that the closed loop behaves like a chosen reference trajectory \( G_r(s) = 1/(\tau_c s + 1)^r \). For a factorable plant \( \tilde{G} = \tilde{G}_+ \tilde{G}_- \), where \( \tilde{G}_+ \) contains RHP zeros and delays, the IMC controller is

\[ q(s) = \tilde{G}_-^{-1}(s) \, f(s), \quad f(s) = \frac{1}{\left(\tau_c s + 1\right)^r}, \]

producing PID settings that depend transparently on the single tuning parameter \( \tau_c \). Smaller \( \tau_c \) yields faster response at the cost of robustness; larger \( \tau_c \) yields sluggish but forgiving behaviour. The IMC connection explains why well-designed PIDs already embed a model of the process.
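For an FOPDT plant with the delay approximated to first order, one widely quoted IMC-based rule gives a PI controller with \( K_c = \tau / (K(\tau_c + \theta)) \) and \( \tau_I = \tau \); a sketch:

```python
def imc_pi(K, tau, theta, tau_c):
    """IMC-based PI settings for an FOPDT plant (delay treated to first order).
    Kc = tau / (K (tau_c + theta)), tauI = tau."""
    return {"Kc": tau / (K * (tau_c + theta)), "tauI": tau}

# Smaller tau_c -> larger Kc (faster, less robust), as the text describes.
print(imc_pi(K=2.0, tau=5.0, theta=1.0, tau_c=2.0))   # Kc ~ 0.833, tauI = 5
print(imc_pi(K=2.0, tau=5.0, theta=1.0, tau_c=0.5))   # more aggressive
```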

Chapter 6: Frequency Response, Bode, and Nyquist

6.1 Reading a transfer function as a filter

Evaluating \( G(j\omega) \) for real \( \omega \) gives the steady-state response to a sinusoid. The magnitude \( |G(j\omega)| \) and phase \( \angle G(j\omega) \) trace out Bode plots that expose bandwidth, roll-off, and delay contributions at a glance. Delays contribute linear phase \( -\omega \theta \) with no magnitude change, which is why dead time is the universal destroyer of closed-loop bandwidth.

6.2 Nyquist criterion and margins

The Nyquist criterion counts encirclements of \( -1 \) by the open-loop locus \( L(j\omega) = G(j\omega) K(j\omega) \) to certify closed-loop stability. Gain margin (GM) and phase margin (PM) quantify how far \( L(j\omega) \) passes from \( -1 \):

\[ GM = \frac{1}{|L(j\omega_{180})|}, \quad PM = 180^\circ + \angle L(j\omega_{gc}). \]

Rules of thumb call for \( GM \geq 2 \) and \( PM \geq 30^\circ \), though chemical plants often demand more. The sensitivity peak \( M_s = \max_\omega |1/(1+L(j\omega))| \) is the most complete single-number robustness metric, with \( M_s \leq 2 \) considered safe.
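The margins can be read off a dense frequency sweep of \( L(j\omega) \); the loop below is an illustrative FOPDT plant under proportional control, not a worked example from the text.

```python
import numpy as np

def margins(L, w=np.logspace(-3, 2, 100000)):
    """Gain and phase margins from a brute-force frequency scan of L(jw).
    Adequate for smooth SISO loops; not a substitute for a careful Nyquist plot."""
    Ljw = L(1j * w)
    mag = np.abs(Ljw)
    phase = np.unwrap(np.angle(Ljw))            # continuous phase in radians
    i180 = np.argmin(np.abs(phase + np.pi))     # phase-crossover frequency
    GM = 1.0 / mag[i180]
    igc = np.argmin(np.abs(mag - 1.0))          # gain-crossover frequency
    PM = np.degrees(phase[igc]) + 180.0
    return GM, PM

# Illustrative loop: L(s) = 2 e^{-s} / (5 s + 1) with unit proportional gain
L = lambda s: 2.0 * np.exp(-s) / (5.0 * s + 1.0)
GM, PM = margins(L)
print(GM, PM)   # comfortably above GM = 2 and PM = 30 degrees here
```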

6.3 Fundamental limits

Bode’s integral theorem states that, for an open-loop stable \( L(s) \) with relative degree \( \geq 2 \) and a stable closed loop,

\[ \int_0^\infty \ln \left| S(j\omega) \right| d\omega = 0, \]

where \( S = 1/(1 + L) \) is sensitivity. Any bandwidth where \( |S| < 1 \) must be paid for by a bandwidth where \( |S| > 1 \) — the so-called waterbed effect. RHP zeros and time delays tighten this constraint, explaining why some control problems cannot be solved by any controller, only attenuated.

Chapter 7: Discrete-Time Systems and the z-Transform

7.1 Why discrete

Every modern controller lives in a digital computer. It reads sensors through an analog-to-digital converter at sampling period \( T_s \), computes a command, and writes it through a digital-to-analog converter held constant over the next interval by a zero-order hold (ZOH). The continuous plant, discrete controller, and hold together form a sampled-data system whose analysis requires discrete-time mathematics.

7.2 The z-transform

For a sequence \( \{x_k\} \), the z-transform is

\[ X(z) = \sum_{k=0}^\infty x_k z^{-k}. \]

The shift operator \( z \) plays the role of \( s \): multiplication by \( z \) advances the sequence one step, multiplication by \( z^{-1} \) delays it. Standard pairs include \( \{1\} \leftrightarrow z/(z-1) \), \( \{a^k\} \leftrightarrow z/(z-a) \), and \( \{k a^k\} \leftrightarrow a z/(z-a)^2 \). The region of convergence pins down which sequence a given \( X(z) \) represents.

7.3 Pulse transfer function

For a continuous plant \( G(s) \) driven by a ZOH, the pulse transfer function is

\[ G(z) = \left(1 - z^{-1}\right) \mathcal{Z}\left\{ \mathcal{L}^{-1}\left[ \frac{G(s)}{s}\right] \right\}, \]

the z-transform of the step response sampled at \( T_s \), scaled by \( (1 - z^{-1}) \) to convert steps to impulses. For a first-order plant \( G(s) = K/(\tau s + 1) \),

\[ G(z) = \frac{K\left(1 - a\right)}{z - a}, \quad a = e^{-T_s/\tau}. \]

The discrete pole \( a \) sits between 0 and 1; as \( T_s \to 0 \), \( a \to 1 \) and the discrete system approaches its continuous counterpart.
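The closed-form result can be checked against SciPy's ZOH discretisation routine; for \( K = 2 \), \( \tau = 5 \), \( T_s = 1 \), the discrete denominator should be \( z - a \) with \( a = e^{-1/5} \) and the numerator gain \( K(1-a) \).

```python
import numpy as np
from scipy.signal import cont2discrete

K, tau, Ts = 2.0, 5.0, 1.0
# Continuous plant K/(tau s + 1) as a (num, den) transfer function
numd, dend, _ = cont2discrete(([K], [tau, 1.0]), Ts, method="zoh")

a = np.exp(-Ts / tau)
print(dend)    # [1, -a]: discrete pole at z = a
print(numd)    # numerator gain K (1 - a)
```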

Chapter 8: Sampling, Aliasing, and Anti-Alias Filtering

8.1 Shannon’s theorem

If a continuous signal is band-limited to \( \omega_N \), it can be recovered exactly from samples taken at any rate \( \omega_s > 2 \omega_N \) (the Nyquist rate). Any frequency component above \( \omega_s / 2 \) folds — aliases — into a lower apparent frequency, appearing in the controller as a phantom disturbance. Chemical processes rarely contain high-frequency content in the variables being regulated, but measurement noise does, and aliased noise corrupts the feedback signal.

8.2 Anti-alias filters

An analog low-pass filter with cutoff \( \omega_c \ll \omega_s / 2 \) must precede every ADC. A common choice is a fourth-order Butterworth at \( \omega_c = \omega_s / 5 \). Digital filtering after sampling cannot undo aliasing; the damage is done at the ADC.

8.3 Choosing the sampling period

Heuristics for \( T_s \) trade off controller bandwidth, noise rejection, and computational load. Seborg suggests \( T_s \approx 0.1 \tau_{dom} \) for first-order-dominant processes or \( T_s \approx 0.05 \) to \( 0.1 \) times the closed-loop time constant. For flow loops this is milliseconds; for distillation compositions it may be minutes. Sampling too fast wastes computation and amplifies measurement noise through the derivative action; sampling too slowly loses phase margin and may alias disturbances.

Chapter 9: Digital Implementation of PID

9.1 Position and velocity forms

Direct discretisation of the ideal PID yields the position form

\[ u_k = K_c e_k + \frac{K_c T_s}{\tau_I} \sum_{i=0}^k e_i + \frac{K_c \tau_D}{T_s}\left(e_k - e_{k-1}\right). \]

The integral sum grows without bound when the error has a bias, which is the classical windup scenario. The velocity form avoids this by computing the incremental command

\[ \Delta u_k = u_k - u_{k-1} = K_c \left(e_k - e_{k-1}\right) + \frac{K_c T_s}{\tau_I} e_k + \frac{K_c \tau_D}{T_s}\left(e_k - 2 e_{k-1} + e_{k-2}\right), \]

which the actuator integrates naturally. Velocity form is automatically bumpless on manual-to-auto transfer and is the default in DCS vendor implementations.
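A minimal sketch of the velocity form, with the increment integrated into a stored command that is clamped to actuator limits (so the stored command itself cannot wind up); the gains and limits below are illustrative.

```python
class VelocityPID:
    """Velocity-form discrete PID (sketch). The increment is computed from the
    last three errors and added to the stored command, which is clamped to the
    actuator range so it never winds up beyond the limits."""
    def __init__(self, Kc, tauI, tauD, Ts, umin=0.0, umax=100.0, u0=0.0):
        self.Kc, self.tauI, self.tauD, self.Ts = Kc, tauI, tauD, Ts
        self.umin, self.umax = umin, umax
        self.u = u0
        self.e1 = self.e2 = 0.0    # e_{k-1}, e_{k-2}

    def step(self, e):
        du = (self.Kc * (e - self.e1)
              + self.Kc * self.Ts / self.tauI * e
              + self.Kc * self.tauD / self.Ts * (e - 2.0 * self.e1 + self.e2))
        self.u = min(max(self.u + du, self.umin), self.umax)  # clamp
        self.e2, self.e1 = self.e1, e
        return self.u

pid = VelocityPID(Kc=2.0, tauI=5.0, tauD=0.0, Ts=1.0)
print(pid.step(1.0))   # first move: Kc*e + Kc*Ts/tauI*e = 2.0 + 0.4
```

In a real DCS block the derivative term would act on (filtered) measurement rather than error, as Section 9.3 describes.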

9.2 Anti-windup

Even with velocity form, an integral that keeps growing while a valve is saturated causes a long recovery once the valve returns to its linear range. Back-calculation anti-windup feeds the saturated minus unsaturated command through a gain \( 1/\tau_t \) into the integrator:

\[ \frac{d I}{dt} = \frac{K_c}{\tau_I} e + \frac{1}{\tau_t}\left(u_{sat} - u\right). \]

Conditional integration, which simply freezes the integrator when the output is saturated and the error would drive it further into saturation, is equally popular and even simpler to code.

9.3 Bumpless transfer and set-point weighting

Set-point weights \( b \) on the proportional term and \( c \) on the derivative term mute the jolt that follows a setpoint step. Typical values are \( b \approx 0.3 \) and \( c = 0 \), giving derivative action only on measurement. Bumpless transfer re-initialises the integrator on any mode change so that the output does not jump.

Chapter 10: Multivariable Control, Decoupling, and the RGA

10.1 Interaction in MIMO systems

A distillation column with reflux \( R \) and reboiler duty \( Q_B \) as inputs and top composition \( x_D \) and bottom composition \( x_B \) as outputs is a \( 2 \times 2 \) MIMO plant. Adjusting \( R \) to correct \( x_D \) disturbs \( x_B \); adjusting \( Q_B \) to correct \( x_B \) disturbs \( x_D \). Naively closing two SISO loops on the diagonal pairing may produce a stable but sluggish closed loop or, worse, instability caused by interaction between loops.

10.2 The relative gain array

Bristol’s relative gain array (RGA) measures interaction under perfect control. For a steady-state gain matrix \( K = G(0) \), the RGA is

\[ \Lambda = K \odot \left(K^{-1}\right)^T, \]

where \( \odot \) denotes element-wise multiplication. Each entry \( \lambda_{ij} \) is the ratio of open-loop gain from \( u_j \) to \( y_i \) when all other loops are open, to the same gain when all other loops are closed perfectly. Desirable pairings have \( \lambda_{ij} \) near 1; negative entries flag catastrophic pairings that change sign on loop closure and should never be used.
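The RGA is one line of linear algebra. Applied to the steady-state gains of the classic Wood-Berry distillation model, the diagonal entries come out near 2, signalling significant but workable interaction under diagonal pairing.

```python
import numpy as np

def rga(K):
    """Relative gain array: element-wise product of K and (K^{-1})^T."""
    return K * np.linalg.inv(K).T

# Wood-Berry steady-state gain matrix (reflux, steam) -> (x_D, x_B)
K = np.array([[12.8, -18.9],
              [ 6.6, -19.4]])
print(rga(K))   # diagonal entries ~2; rows and columns each sum to 1
```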

10.3 Decoupling

Dynamic decoupling seeks a precompensator \( W(s) \) such that \( G(s) W(s) \) is diagonal. Ideal decoupling requires \( W = G^{-1} D \) for some diagonal \( D \), which is impractical if \( G \) has RHP zeros or delays. Simplified decoupling sets off-diagonal elements of \( W \) to cancel only the dominant cross-channel dynamics. Static decoupling uses \( W = K^{-1} \) to diagonalise only at steady state — crude but robust, and remarkably common in industrial distillation practice.

10.4 Singular value analysis

The singular value decomposition \( G(j\omega) = U(j\omega) \Sigma(j\omega) V(j\omega)^H \) exposes gain directionality. The largest singular value \( \bar{\sigma} \) and smallest \( \underline{\sigma} \) bracket the plant gain; their ratio, the condition number \( \gamma = \bar{\sigma}/\underline{\sigma} \), warns of ill-conditioning. A large condition number means that certain input directions produce enormous outputs while nearly orthogonal directions produce almost none, a scenario that is difficult for any controller and dangerous for model-based designs.
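A sketch using a scaled version of Skogestad's classic ill-conditioned distillation gain matrix: the two singular values differ by more than two orders of magnitude.

```python
import numpy as np

# Steady-state gains of Skogestad's ill-conditioned distillation example,
# scaled by 1/100 (scaling does not change the condition number)
K = np.array([[0.878, -0.864],
              [1.082, -1.096]])

s = np.linalg.svd(K, compute_uv=False)   # singular values, largest first
print(s[0] / s[-1])                      # condition number ~ 140
```

Input moves aligned with the weak direction (roughly increasing both inputs together here) barely move the outputs, while the strong direction moves them violently.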

Chapter 11: Computer Control Architecture

11.1 Layers of control

Modern plants organise control in a hierarchy. At the bottom, regulatory PID loops run at 0.1-1 s sampling on distributed control system (DCS) hardware. Above that, advanced regulatory control handles feedforward, cascade, override, and ratio logic. Above that, model predictive controllers run every minute on dedicated servers, using steady-state optimisers above them to compute economically optimal setpoints. A real-time optimisation layer updates those targets every few hours from plant-wide economics. Each layer is slower and more sophisticated than the one below.

11.2 Signal integrity

Industrial signals propagate over 4-20 mA current loops chosen because wire resistance does not corrupt the reading and a zero current indicates a broken wire rather than a zero value. Smart transmitters add HART or fieldbus protocols that carry diagnostic information alongside the analog value. Sampling, quantisation, and transport delays all appear in the discrete plant model and must be accounted for in tuning.

11.3 Safety instrumented systems

Safety instrumented systems (SIS) are deliberately independent of the basic process control system (BPCS). The IEC 61511 standard codifies safety integrity levels and dictates that control and shutdown logic not share sensors, logic solvers, or final elements whenever possible. From a control-theoretic perspective, SIS is not a controller in the feedback sense but a high-priority override that trips the plant to a safe state on pre-defined conditions.

Chapter 12: Closed-Loop Analysis and Performance

12.1 The four sensitivities

A unity-feedback loop with plant \( G \), controller \( K \), reference \( r \), output disturbance \( d_o \), input disturbance \( d_i \), and measurement noise \( n \) has four fundamental transfer functions:

\[ S = \frac{1}{1 + G K}, \quad T = \frac{G K}{1 + G K}, \quad S G = \frac{G}{1 + G K}, \quad K S = \frac{K}{1 + G K}. \]

Sensitivity \( S \) governs disturbance rejection and setpoint error; complementary sensitivity \( T \) governs reference tracking and noise rejection; \( S G \) governs input-disturbance rejection; \( K S \) governs control effort. Design specifications translate to magnitude bounds on each.

12.2 The algebraic constraint

At every frequency \( S(j\omega) + T(j\omega) = 1 \). One cannot be made small without the other being close to 1. Disturbance rejection at a frequency requires \( |S| \ll 1 \), hence \( |T| \approx 1 \), hence measurement noise at that frequency passes straight through. This trade-off is inescapable and drives the choice of loop bandwidth.

12.3 Performance metrics

Common time-domain metrics include rise time, settling time, overshoot, and integral of absolute error (IAE) or squared error (ISE). Frequency-domain metrics include the sensitivity peak \( M_s \), the complementary peak \( M_t \), and the bandwidth \( \omega_{bw} \). Economic performance metrics convert these into dollars: variance of a controlled variable times its marginal value is the cost of poor control.

Chapter 13: Model Predictive Control

13.1 The receding horizon principle

Model predictive control (MPC) solves an optimisation problem at every sample instant to decide what to do over the next \( N \) steps, implements only the first step, shifts the horizon forward by one, and repeats. The resulting feedback is implicit in the re-solution: any deviation from the predicted trajectory enters as a new initial condition next step. MPC dominates modern refining and petrochemical control because it handles multivariable coupling and actuator constraints gracefully, features that defeat classical PID.

13.2 Quadratic program formulation

A linear MPC with state-space model \( x_{k+1} = A x_k + B u_k \), output \( y_k = C x_k \), and targets \( r_k \) minimises

\[ \min_{u_0, \ldots, u_{N-1}} \sum_{k=0}^{N-1} \left( y_k - r_k \right)^T Q \left( y_k - r_k \right) + \Delta u_k^T R \Delta u_k \]

subject to \( u_{min} \leq u_k \leq u_{max} \), \( \Delta u_{min} \leq \Delta u_k \leq \Delta u_{max} \), and \( y_{min} \leq y_k \leq y_{max} \). Because the state evolution is linear in \( u \), predictions \( y_k \) are linear in the decision vector, and both objective and constraints are quadratic and linear respectively. The problem is a convex quadratic program (QP) solvable in milliseconds even for hundreds of variables.

13.3 Tuning knobs

MPC tuning parameters are the prediction horizon \( N \), the control horizon \( M \leq N \), the weights \( Q \) and \( R \), and the move-suppression matrix on \( \Delta u \). Long horizons improve foresight but grow the QP; short horizons risk shortsighted decisions that look good now and terrible later. The weights \( Q \) and \( R \) set the compromise between tracking and aggression, and move suppression prevents the controller from chattering.

13.4 Stability and feasibility

Unlike PID, MPC has no canonical stability proof; each formulation must be analysed individually. Terminal cost and terminal set techniques add a Lyapunov-like terminal penalty \( V_f(x_N) \) and a constraint \( x_N \in X_f \), producing closed-loop stability guarantees when the terminal pair is compatible with an LQR or other stabilising law. Recursive feasibility — the property that a feasible QP today remains feasible tomorrow in the worst case — requires careful handling of disturbances, often via soft constraints with penalty slacks.

Why MPC took over refining: a refinery crude unit has dozens of manipulated variables (pump-around flows, heater duties, sidestream draws), dozens of controlled variables (product cut points, column differential pressures, heater outlet temperatures), and hard constraints (flooding limits, metallurgical limits, downstream feed specs). Any classical architecture requires bespoke cascades, overrides, and decouplers. MPC replaces that tangle with a single QP and lets the optimiser discover feasible operation at every instant.

Chapter 14: State Estimation and the Kalman Filter

14.1 Why estimation

Most states in a chemical plant are not measured. Reactor concentrations, tray compositions, catalyst activity, and heat transfer coefficients must be inferred from available measurements. MPC and any state-feedback controller need estimates of the unmeasured states; the classical approach is an observer.

14.2 The Luenberger observer

For a deterministic system \( x_{k+1} = A x_k + B u_k \), \( y_k = C x_k \), a full-order observer is

\[ \hat{x}_{k+1} = A \hat{x}_k + B u_k + L\left(y_k - C \hat{x}_k\right). \]

The observer error \( \tilde{x}_k = x_k - \hat{x}_k \) evolves as \( \tilde{x}_{k+1} = (A - L C) \tilde{x}_k \). Choosing \( L \) by pole placement puts the error dynamics wherever we like, subject to observability. The separation principle guarantees that combining an observer with a state-feedback gain produces a stable closed loop as long as each design is stable individually — a deep result that underlies every modern multivariable controller.
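Observer design by pole placement is a direct application of duality: placing the eigenvalues of \( A - LC \) is the same as placing those of \( A^T - C^T L^T \). A sketch with an illustrative discrete-time plant:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # illustrative discrete-time plant
C = np.array([[1.0, 0.0]])               # only the first state is measured

# Duality: design L via state-feedback placement on the transposed pair
L = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T

err_eigs = np.linalg.eigvals(A - L @ C)
print(np.sort(err_eigs.real))   # error dynamics placed at 0.2 and 0.3
```

Choosing observer poles well inside the plant poles makes the estimate converge faster than the plant moves, which is what the separation principle implicitly assumes.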

14.3 The Kalman filter

When process and measurement noise are modelled as zero-mean white sequences with covariances \( Q \) and \( R \), the Kalman filter computes the gain \( L \) that minimises the steady-state error covariance. The recursive form separates prediction

\[ \hat{x}_{k|k-1} = A \hat{x}_{k-1|k-1} + B u_{k-1}, \quad P_{k|k-1} = A P_{k-1|k-1} A^T + Q, \]

from update

\[ L_k = P_{k|k-1} C^T \left[ C P_{k|k-1} C^T + R \right]^{-1}, \]\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + L_k\left(y_k - C \hat{x}_{k|k-1}\right), \quad P_{k|k} = \left(I - L_k C\right) P_{k|k-1}. \]

Steady-state \( P \) satisfies the discrete algebraic Riccati equation and gives a constant gain \( L_\infty \), the form usually deployed because time-varying gains add software complexity for marginal benefit.
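The predict/update cycle above fits in a few lines; the scalar random-walk example below (illustrative noise covariances) shows the estimate pulling toward the measurements while the covariance shrinks from its large prior.

```python
import numpy as np

def kalman_step(xhat, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of the discrete Kalman filter."""
    # Predict
    xpred = A @ xhat + B @ u
    Ppred = A @ P @ A.T + Q
    # Update
    S = C @ Ppred @ C.T + R                   # innovation covariance
    L = Ppred @ C.T @ np.linalg.inv(S)        # Kalman gain
    xnew = xpred + L @ (y - C @ xpred)
    Pnew = (np.eye(len(xhat)) - L @ C) @ Ppred
    return xnew, Pnew

# Scalar random walk: x_{k+1} = x_k + w, y_k = x_k + v (illustrative Q, R)
A = np.array([[1.0]]); B = np.array([[0.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])

xhat, P = np.array([0.0]), np.array([[10.0]])   # vague prior
for y in [1.2, 0.9, 1.1, 1.0]:
    xhat, P = kalman_step(xhat, P, np.array([0.0]), np.array([y]), A, B, C, Q, R)
print(xhat, P)   # estimate near 1; covariance far below the prior of 10
```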

14.4 Extended and unscented filters

For nonlinear plants \( x_{k+1} = f(x_k, u_k) + w_k \), the extended Kalman filter (EKF) linearises around the current estimate and applies the linear filter equations. The unscented Kalman filter (UKF) propagates a small set of deterministically chosen sigma points through the nonlinear dynamics and reconstructs mean and covariance from them, avoiding Jacobians and handling highly nonlinear plants more accurately. Both are heavily used in reactor and distillation estimation.

Chapter 15: Control of Distillation Columns

15.1 Energy and material balance structure

A binary distillation column is controlled by five actuators (condenser duty, reboiler duty, reflux flow, distillate flow, and bottoms flow) and must regulate two holdups (reflux drum and sump), the column pressure, and two compositions (distillate and bottoms). Standard configurations pair the level and pressure loops with one actuator each, leaving two actuators for composition. The choice of composition configuration — LV, DV, LB, or DB — determines which actuator pair controls which composition and profoundly affects loop interaction.

15.2 The LV configuration

In the LV configuration, reflux \( L \) controls distillate composition and boilup \( V \) controls bottom composition. Levels are on \( D \) (distillate flow) and \( B \) (bottoms flow). For moderate-purity columns the LV RGA stays close enough to the identity that diagonal PID tuning works; as purity increases the RGA elements grow large, and the strong interaction calls for decoupling or MPC.

15.3 Inferential control

Online composition analysers are slow (minutes) and expensive. A tray temperature correlates with composition in a pure binary and can be used as an inferred control variable with a fast dynamic response, switching to analyser trim periodically. Inferential MPC extends this with a Kalman filter that fuses multiple temperatures, feed composition estimates, and intermittent analyser readings to produce a continuous composition estimate at every sample.

Chapter 16: Control of Reactors

16.1 Exothermic CSTR stability

A CSTR with exothermic reaction has a heat-generation curve \( Q_g(T) \) rising as Arrhenius in \( T \) and a heat-removal line \( Q_r(T) \) linear in \( T \) (through the jacket). Intersections are steady states; the stability of each is determined by the slope condition \( dQ_r/dT > dQ_g/dT \). A stable low-conversion steady state, an unstable middle steady state, and a stable high-conversion steady state are the classic three-equilibrium pattern. Control aims to hold operation at the ignited high-conversion point while rejecting feed-composition and jacket-temperature disturbances that would otherwise quench the reactor.

16.2 Cascade control

Reactor temperature is typically the master loop; jacket temperature is the slave. The fast inner loop rejects jacket-side disturbances (cooling water pressure, ambient temperature) before they reach the reactor. The master loop corrects for reaction-side disturbances by resetting the jacket setpoint. Cascade is the workhorse architecture for any process with a fast inner variable and a slow outer variable.
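A minimal simulation, with made-up first-order time constants, shows the architecture: the fast inner P loop absorbs most of a cooling-water disturbance, and the outer PI loop removes the remaining reactor-temperature offset by resetting the jacket setpoint.

```python
# Cascade sketch with hypothetical first-order dynamics (deviation
# variables): fast jacket (tau_j = 0.2), slow reactor (tau_r = 5).
dt, t_end = 0.01, 60.0
tau_j, tau_r = 0.2, 5.0
Kc_in = 10.0                      # inner (slave) P gain on jacket temperature
Kc_out, tauI = 2.0, 5.0           # outer (master) PI tuning
T_sp = 1.0                        # reactor temperature setpoint

T, Tj, I = 0.0, 0.0, 0.0
for k in range(int(t_end / dt)):
    d = -1.0 if k * dt > 20 else 0.0         # cooling-water disturbance
    e = T_sp - T
    I += e * dt
    Tj_sp = Kc_out * (e + I / tauI)          # master PI resets jacket setpoint
    u = Kc_in * (Tj_sp - Tj)                 # slave P loop
    Tj += dt * (u + d - Tj) / tau_j          # jacket energy balance (Euler)
    T += dt * (Tj - T) / tau_r               # reactor energy balance (Euler)
print(round(T, 3))   # reactor temperature held near setpoint despite d
```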

16.3 Batch reactor control

Batch reactors lack steady states by definition. Control objectives are temperature trajectory tracking, end-point conversion, and safety (avoiding runaway). Trajectory optimisation off-line produces a reference path; online MPC or gain-scheduled PID tracks it. Safety constraints — jacket-temperature limits to avoid thermal shock, pressure limits on the vessel — must appear explicitly in the controller, which is another argument for MPC over PID.

Chapter 17: Nonlinear Process Control

17.1 Gain scheduling

Gain scheduling is the pragmatic approach to nonlinear control. The operating envelope is partitioned into regions; a linear controller is designed for each region; a scheduler interpolates between them as a function of a measured scheduling variable (typically an operating condition such as feed rate or composition). Gain scheduling handles CSTR nonlinearity, column composition dependence, and varying product grades in polymer reactors. Care must be taken during transitions: hidden coupling between the plant and the scheduler can destabilise a family of local controllers that are each well tuned in isolation.
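The scheduler itself can be as simple as table lookup plus interpolation. A sketch with hypothetical PI tunings designed at three feed rates:

```python
import numpy as np

# Gain-schedule sketch (tunings are invented): local PI designs at three
# feed rates, interpolated linearly in the measured scheduling variable.
feed_pts = np.array([50.0, 100.0, 150.0])   # feed rates at design points
Kc_pts = np.array([0.8, 0.5, 0.35])         # controller gain falls as the
tauI_pts = np.array([12.0, 9.0, 7.0])       # process gain rises with feed

def scheduled_pi(feed):
    """Return (Kc, tauI) for the current feed rate by linear interpolation."""
    Kc = np.interp(feed, feed_pts, Kc_pts)
    tauI = np.interp(feed, feed_pts, tauI_pts)
    return Kc, tauI

print(scheduled_pi(75.0))   # midway between the first two local designs
```

Smooth interpolation avoids the bumps that hard switching between region controllers would inject into the manipulated variable.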

17.2 Feedback linearisation

For a control-affine system \( \dot{x} = f(x) + g(x) u \) with output \( y = h(x) \), feedback linearisation seeks a coordinate change and input transformation that render the input-output map linear. For relative degree \( r \), repeated differentiation of \( y \) along trajectories produces

\[ y^{(r)} = L_f^r h(x) + L_g L_f^{r-1} h(x) \, u, \]

so choosing

\[ u = \frac{v - L_f^r h(x)}{L_g L_f^{r-1} h(x)} \]

makes \( y^{(r)} = v \), a chain of \( r \) integrators. Linear control design then produces \( v \) from \( y \) and the reference. The method is powerful but brittle: exact cancellation requires an exact model, and internal dynamics (the zero dynamics in the transformed coordinates) must be stable.
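A toy example (not a process model) makes the mechanics concrete: for \( \dot{x}_1 = x_2 \), \( \dot{x}_2 = -x_1^3 + u \) with \( y = x_1 \), the relative degree is 2 and the choice \( u = x_1^3 + v \) reduces the input-output map to a double integrator \( \ddot{y} = v \):

```python
# Feedback-linearisation sketch on a toy control-affine system:
#   x1' = x2,  x2' = -x1**3 + u,  y = x1,  relative degree r = 2.
# Differentiating twice gives y'' = -x1**3 + u, so u = x1**3 + v cancels
# the nonlinearity exactly and leaves y'' = v.
dt, t_end = 1e-3, 10.0
k1, k2 = 4.0, 4.0                 # outer linear law: poles of s^2 + 4s + 4
r = 1.0                           # reference for y
x1, x2 = 0.0, 0.0
for _ in range(int(t_end / dt)):
    v = k1 * (r - x1) - k2 * x2   # linear design on the integrator chain
    u = x1**3 + v                 # cancel the nonlinearity
    x1 += dt * x2                 # Euler integration of the plant
    x2 += dt * (-x1**3 + u)
print(round(x1, 3))               # y converges to the reference
```

With an exact model the cancellation is perfect; in practice a model error in the \( x_1^3 \) term leaves residual nonlinearity, which is precisely the brittleness noted above.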

17.3 Nonlinear MPC

Nonlinear MPC uses the full nonlinear model inside the MPC optimisation. The resulting problem is a non-convex nonlinear program, solvable but slow. Real-time iteration schemes exploit the warm start provided by the previous solution and use sequential quadratic programming to deliver solutions within a sample interval. Applications include polymerisation reactors, fuel cells, and air-separation plants where nonlinearity is too strong for linear MPC and the computational investment pays off.

Chapter 18: Robustness and Uncertainty

18.1 Sources of uncertainty

Every model is wrong. Uncertainty sources include unmodelled dynamics (fast actuator modes, valve dynamics, sensor filters), parametric variation (catalyst activity, fouling, tray efficiency), neglected nonlinearity (linearisation error away from the design point), and exogenous disturbances. A robust controller must preserve stability and acceptable performance across a specified uncertainty set, not just at the nominal model.

18.2 Multiplicative uncertainty and small-gain

Model the true plant as \( G_p = G (1 + \Delta W) \) with \( |\Delta(j\omega)| \leq 1 \) and \( W(j\omega) \) a weighting function that bounds relative model error. The small-gain theorem guarantees robust stability of the closed loop if and only if

\[ \left| W(j\omega) T(j\omega) \right| < 1 \; \forall \omega. \]

Because \( |T| \) must be small at high frequencies (where model error is large and \( |W| \) grows), this forces closed-loop bandwidth below the frequency where \( |W| = 1 \).
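Checking the condition is a frequency sweep. A sketch with an illustrative loop (all numbers invented): plant \( G = 1/(10s+1) \), proportional gain \( K = 5 \) so that \( T = 5/(10s+6) \), and weight \( W = 0.1 + 0.5s \) representing 10% gain error at low frequency plus neglected high-frequency dynamics:

```python
import numpy as np

# Frequency sweep of |W(jw) T(jw)| for an illustrative loop:
#   G = 1/(10s+1), K = 5  =>  T = KG/(1+KG) = 5/(10s+6)
#   W = 0.1 + 0.5s  (relative-error bound growing with frequency)
w = np.logspace(-2, 3, 2000)          # frequency grid, rad/time
s = 1j * w
T = 5.0 / (10.0 * s + 6.0)
W = 0.1 + 0.5 * s
peak = float(np.max(np.abs(W * T)))
print(round(peak, 3), peak < 1.0)     # peak below 1: robust stability holds
```

Here the peak is about 0.25, so the loop tolerates the assumed uncertainty with margin; raising the gain until \( |WT| \) touches 1 would identify the robustness-limited bandwidth.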

18.3 H-infinity and mu synthesis

H-infinity synthesis treats the controller design as an optimisation that minimises the peak gain of a weighted transfer function, delivering controllers with explicit robustness margins. Structured singular value (mu) synthesis extends this to structured uncertainty (multiple independent blocks). These tools are mature for linear MIMO plants and are used increasingly in aerospace-adjacent fields; in the chemical industry they appear in critical high-purity or safety-related loops but rarely as everyday tools, where MPC with robust tuning dominates.

Chapter 19: A Brief Survey of Advanced Topics

19.1 Adaptive control

Adaptive controllers adjust their parameters online in response to plant changes. Self-tuning regulators identify a model in real time and re-compute the controller each step; model-reference adaptive control (MRAC) drives the plant output to follow a desired reference model. Adaptive control is powerful where plant parameters drift slowly (catalyst deactivation, heat-exchanger fouling) but requires care: plant-identifier interaction can destabilise the loop in ways that look benign at the component level.
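The identifier half of a self-tuning regulator is typically recursive least squares. A sketch, estimating a first-order ARX model \( y_k = a\,y_{k-1} + b\,u_{k-1} \) online with a forgetting factor (all parameters invented):

```python
import numpy as np

# Recursive least squares (RLS) with forgetting: the identification half
# of a self-tuning regulator.  True plant: y[k] = 0.9 y[k-1] + 0.5 u[k-1].
rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5
theta = np.zeros(2)                   # parameter estimate [a, b]
P = 1000.0 * np.eye(2)                # covariance: large = uninformative prior
lam = 0.99                            # forgetting factor tracks slow drift
y_prev, u_prev = 0.0, 0.0
for k in range(500):
    u = rng.standard_normal()         # persistently exciting input
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.standard_normal()
    phi = np.array([y_prev, u_prev])                  # regressor
    K = P @ phi / (lam + phi @ P @ phi)               # RLS gain
    theta += K * (y - phi @ theta)                    # innovation update
    P = (P - np.outer(K, phi) @ P) / lam              # covariance update
    y_prev, u_prev = y, u
print(np.round(theta, 2))   # estimates approach (0.9, 0.5)
```

In a full self-tuning regulator these estimates would be fed into a control-law redesign at each step, which is exactly where the plant-identifier interaction hazards arise: a poorly excited input makes the estimates drift, and the controller computed from them can destabilise the loop.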

19.2 Economic MPC

Economic MPC replaces the tracking objective with a direct economic cost \( J = \sum_k \ell(x_k, u_k) \) measuring profit, energy consumption, or emissions. The controller maintains feasibility and stability through terminal conditions but no longer tracks a precomputed setpoint; it discovers operating points that maximise economic performance in real time. Industrial uptake is growing, especially in cogeneration and integrated plants with strong price volatility.

19.3 Data-driven and learning control

Reinforcement learning, Gaussian-process modelling, and neural-network controllers have entered the process control literature. Their maturity is uneven: data-driven identification and soft sensing are routine, while closed-loop learning controllers remain research-grade for safety-critical chemical processes. The next decade will likely see hybrid architectures in which a robust model-based outer controller wraps a learning inner loop that handles fine-grained nonlinearity.

19.4 Cyber-physical security

Networked DCS and MPC installations are vulnerable to cyber attacks that manipulate sensor readings, actuator commands, or setpoints. The control-theoretic response includes anomaly detection based on residuals (Kalman innovation sequences), encrypted control where the controller computes on cipher-text, and moving-target defences that randomise sampling or setpoint trajectories. Security has become a first-class concern in control system design for critical infrastructure.
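Innovation-based detection can be sketched on a scalar example (all numbers invented): under normal operation the filter's normalised innovations average about one, while a constant sensor bias injected mid-run inflates the statistic immediately:

```python
import numpy as np

# Innovation-based anomaly detection sketch, illustrative scalar system:
# a steady-state Kalman filter's innovations are zero-mean with variance S,
# so nu^2/S is chi-squared with 1 dof; a sensor-bias attack at k = 100
# pushes the statistic far above its nominal mean of 1.
rng = np.random.default_rng(2)
a, Q, R = 0.95, 1e-4, 0.04
P = Q                                  # iterate the scalar Riccati recursion
for _ in range(200):                   # to the steady-state prior covariance
    P = a * a * P * R / (P + R) + Q
S = P + R                              # innovation variance
K = P / S                              # steady-state Kalman gain
x, xh = 0.0, 0.0
stat = []
for k in range(200):
    x = a * x + np.sqrt(Q) * rng.standard_normal()
    y = x + np.sqrt(R) * rng.standard_normal()
    if k >= 100:
        y += 1.0                       # constant sensor-bias attack
    nu = y - a * xh                    # innovation: measurement minus prediction
    xh = a * xh + K * nu               # measurement update
    stat.append(nu * nu / S)           # normalised innovation statistic
before, after = float(np.mean(stat[:100])), float(np.mean(stat[100:]))
print(round(before, 2), round(after, 2))   # ~1 before, far larger after
```

A production detector would threshold a windowed sum of this statistic against a chi-squared quantile to trade false alarms against detection delay.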
