AMATH 382: Computational Modelling of Cellular Systems

Brian Ingalls

Estimated study time: 3 hr 20 min


Sources and References

Primary textbook — B. Ingalls, Mathematical Modeling in Systems Biology (MIT Press, 2013; freely available at math.uwaterloo.ca/~bingalls/)

Supplementary texts — U. Alon, An Introduction to Systems Biology: Design Principles of Biological Circuits (2nd ed., CRC Press, 2019); J.D. Murray, Mathematical Biology I: An Introduction (3rd ed., Springer, 2002); J. Keener & J. Sneyd, Mathematical Physiology (Springer, 2009)

Online resources — XPPAUT documentation (math.pitt.edu/~bard/xpp); BioModels Database (ebi.ac.uk/biomodels); Virtual Cell (vcell.org)


Chapter 1: Biochemical Kinetics and Reaction Networks

1.1 Mass Action Kinetics

Living cells are dynamic chemical systems — a human cell contains thousands of distinct molecular species participating in a web of interacting reactions. To reason quantitatively about cell behaviour, we model these reactions using the framework of chemical kinetics. The fundamental postulate of chemical kinetics is the law of mass action: the rate of an elementary reaction is proportional to the product of the concentrations of its reactants.

For a bimolecular reaction

\[ A + B \xrightarrow{k} C, \]

the reaction rate is \(v = k[A][B]\), where \(k\) is the rate constant (units depend on the order of the reaction). For a first-order, unimolecular reaction \(A \xrightarrow{k} B\), the rate is simply \(v = k[A]\). The intuition is elementary: two molecules react only when they collide, and the collision frequency is proportional to the product of their concentrations.

Definition (Mass Action Kinetics): For an elementary reaction involving reactants with concentrations \(X_1, X_2, \ldots\), the reaction rate is \[ v = k \prod_{i} X_i^{n_i}, \]

where \(n_i\) is the stoichiometric coefficient of species \(i\) on the reactant side, and \(k\) is the rate constant.

1.2 ODEs from Reaction Networks

Given a set of reactions and their rates, we construct ordinary differential equations (ODEs) for each molecular species by tracking how every reaction contributes to production or consumption. A key application is computing the steady state — the concentration vector at which all rates of change vanish. For a linear pathway with mass action kinetics, this can be done by hand.

Example — ODE system and steady state for a three-reaction linear pathway: Consider the network \[ \emptyset \xrightarrow{k_1} A \xrightarrow{k_2} B \xrightarrow{k_3} \emptyset, \]

where \(k_1\) is a zeroth-order synthesis rate, \(k_2\) is a first-order conversion rate constant, and \(k_3\) is a first-order degradation rate constant. By mass action kinetics, \(v_1 = k_1\), \(v_2 = k_2[A]\), \(v_3 = k_3[B]\). The ODE system is:

\[ \frac{d[A]}{dt} = k_1 - k_2[A], \qquad \frac{d[B]}{dt} = k_2[A] - k_3[B]. \]

Steady state: Setting both derivatives to zero, from the first equation:

\[ k_1 - k_2[A]^* = 0 \implies [A]^* = \frac{k_1}{k_2}. \]

Substituting into the second equation:

\[ k_2 \cdot \frac{k_1}{k_2} - k_3[B]^* = 0 \implies [B]^* = \frac{k_1}{k_3}. \]

Notice that \([B]^*\) depends only on the input rate \(k_1\) and its own removal rate constant \(k_3\), not on the intermediate conversion rate \(k_2\). This is a general feature of linear pathways: at steady state every step carries the same flux (here \(k_1\)), so each species' concentration is that flux divided by its own consumption rate constant. The rate constant \(k_2\) therefore sets \([A]^*\) and the speed of approach to steady state, but has no effect on \([B]^*\).

Numerical verification: Let \(k_1 = 2\,\mu\text{M/min}\), \(k_2 = 1\,\text{min}^{-1}\), \(k_3 = 0.5\,\text{min}^{-1}\). Then:

\[ [A]^* = \frac{2}{1} = 2\,\mu\text{M}, \qquad [B]^* = \frac{2}{0.5} = 4\,\mu\text{M}. \]

Check: \(v_1 = 2\), \(v_2 = 1 \times 2 = 2\), \(v_3 = 0.5 \times 4 = 2\). All fluxes are equal, confirming that at steady state the network carries a uniform flux of \(2\,\mu\text{M/min}\) through every step.
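
This steady state can be checked numerically. Below is a minimal Python sketch (using NumPy and SciPy, with the parameter values from this example) that integrates the two ODEs and compares the long-time solution with the analytical result.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 2.0, 1.0, 0.5   # uM/min, 1/min, 1/min (values from the example)

def rhs(t, x):
    A, B = x
    return [k1 - k2 * A,          # d[A]/dt = k1 - k2[A]
            k2 * A - k3 * B]      # d[B]/dt = k2[A] - k3[B]

# integrate from an empty pathway until transients have decayed
sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], rtol=1e-8)

A_ss, B_ss = sol.y[:, -1]
print(f"numerical:  A* = {A_ss:.3f}, B* = {B_ss:.3f}")
print(f"analytical: A* = {k1/k2:.3f}, B* = {k1/k3:.3f}")   # 2.0 and 4.0
```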

Consider the reversible reaction

\[ A \underset{k_{-1}}{\stackrel{k_1}{\rightleftharpoons}} B. \]

The forward reaction \(A \to B\) occurs at rate \(k_1[A]\) and the reverse reaction \(B \to A\) at rate \(k_{-1}[B]\). Therefore:

\[ \frac{d[A]}{dt} = -k_1[A] + k_{-1}[B], \qquad \frac{d[B]}{dt} = k_1[A] - k_{-1}[B]. \]

At thermodynamic equilibrium, the net rates vanish: \(k_1[A]^* = k_{-1}[B]^*\), giving the equilibrium constant

\[ K_{eq} = \frac{[B]^*}{[A]^*} = \frac{k_1}{k_{-1}}. \]

This ratio of rate constants encodes the thermodynamic favourability of the reaction. The principle of detailed balance states that at equilibrium each individual reaction must be balanced by its reverse — this connects kinetic parameters to thermodynamics and constrains the parameters of any physically consistent model.

1.3 The Stoichiometric Matrix

For networks with many species and reactions, tracking each ODE separately becomes unwieldy. The stoichiometric matrix \(S\) provides a compact, systematic framework. We define:

  • Each column of \(S\) corresponds to a reaction.
  • Each row of \(S\) corresponds to a molecular species.
  • The entry \(S_{ij}\) equals the net number of molecules of species \(i\) produced per occurrence of reaction \(j\) (negative for consumption, positive for production).

If \(\mathbf{X}\) is the vector of species concentrations and \(\mathbf{v}(\mathbf{X})\) is the vector of reaction rates, then the entire ODE system is compactly expressed as:

\[ \frac{d\mathbf{X}}{dt} = S \cdot \mathbf{v}(\mathbf{X}). \]
Example — Three-reaction network: Consider the network \[ \emptyset \xrightarrow{v_1} A, \quad A \xrightarrow{v_2} B, \quad B \xrightarrow{v_3} \emptyset. \]

With species ordered \((A, B)\) and reactions ordered \((v_1, v_2, v_3)\):

\[ S = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}. \]

The ODE system is then:

\[ \frac{d[A]}{dt} = v_1 - v_2, \qquad \frac{d[B]}{dt} = v_2 - v_3, \]

which matches what one would write by inspection.

1.4 A More Complex Network: Michaelis-Menten via Stoichiometric Matrix

Example — Michaelis–Menten scheme via stoichiometry: Consider the Michaelis–Menten reaction scheme (analysed in detail in Chapter 2): \[ E + S \underset{k_{-1}}{\stackrel{k_1}{\rightleftharpoons}} ES \xrightarrow{k_2} E + P. \]

We have 4 species \((S, E, ES, P)\) and 3 reactions (forward binding, reverse binding, catalysis). The stoichiometric matrix (rows = species, columns = reactions \(r_1, r_{-1}, r_2\)) is:

\[ S_{stoich} = \begin{pmatrix} -1 & 1 & 0 \\ -1 & 1 & 1 \\ 1 & -1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \begin{array}{l} [S] \\ [E] \\ [ES] \\ [P] \end{array} \]

The rows for \([E]\) and \([ES]\) sum to the zero row, so \(\mathbf{c} = (0, 1, 1, 0)^T\) satisfies \(\mathbf{c}^T S_{stoich} = \mathbf{0}\), and the corresponding conserved quantity is

\[ [E] + [ES] = E_{tot} = \text{const}, \]

confirming the enzyme conservation that underlies the QSSA derivation. The stoichiometric matrix thus captures in a single object both the dynamics and the conservation laws of the system.

1.5 Conservation Laws and Moiety Conservation

Many biochemical reaction networks conserve the total amount of certain chemical moieties (structural subgroups), even as individual species interconvert. The systematic enumeration of conservation laws via the left null space of \(S\) is not merely a mathematical convenience — it directly guides model reduction and experimental design. A conserved quantity is both a constraint that reduces the dimensionality of the ODE system and a biological invariant that can be measured to verify model consistency. These conservation laws reduce the effective dimension of the system. Mathematically, a conservation law corresponds to a vector \(\mathbf{c}\) such that \(\mathbf{c}^T S = \mathbf{0}\) — that is, \(\mathbf{c}\) lies in the left null space of \(S\). Then

\[ \frac{d}{dt}\left(\mathbf{c}^T \mathbf{X}\right) = \mathbf{c}^T S \mathbf{v} = 0, \]

so the quantity \(\mathbf{c}^T \mathbf{X}\) is constant in time.

A ubiquitous example is enzyme conservation: a free enzyme \(E\) and its substrate-bound complex \(ES\) together conserve the total enzyme:

\[ [E] + [ES] = E_{tot} = \text{const}. \]

This allows us to eliminate one variable and reduce the system’s dimension. Conservation laws are automatically found by computing the left null space of \(S\), a linear algebra operation readily performed by software.
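
As a concrete illustration, the following short Python sketch (using scipy.linalg.null_space applied to \(S^T\)) recovers the conservation laws of the Michaelis–Menten matrix from Section 1.4. It returns two independent left null vectors, corresponding to enzyme conservation \([E]+[ES]\) and substrate-moiety conservation \([S]+[ES]+[P]\).

```python
import numpy as np
from scipy.linalg import null_space

# Michaelis-Menten network from Section 1.4:
# rows = (S, E, ES, P), columns = (binding, unbinding, catalysis)
S_stoich = np.array([[-1,  1,  0],
                     [-1,  1,  1],
                     [ 1, -1, -1],
                     [ 0,  0,  1]])

# conservation laws: vectors c with c^T S = 0, i.e. the null space of S^T
C = null_space(S_stoich.T)
print(C)                 # two columns, spanning (0,1,1,0) and (1,0,1,1)
print(C.T @ S_stoich)    # ~ zero matrix, confirming d/dt (c^T X) = 0
```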

Definition (Steady State): A steady state (or equilibrium point) of the ODE system \(\dot{\mathbf{X}} = S\,\mathbf{v}(\mathbf{X})\) is a concentration vector \(\mathbf{X}^*\) at which all time derivatives vanish: \[ S\,\mathbf{v}(\mathbf{X}^*) = \mathbf{0}. \] The system, once placed at \(\mathbf{X}^*\), remains there in the absence of perturbations. Stability of the steady state (whether perturbations grow or decay) is determined by the Jacobian at \(\mathbf{X}^*\).
Theorem (Steady-State Characterisation): The steady states of \(\dot{\mathbf{X}} = S\,\mathbf{v}(\mathbf{X})\) are precisely those \(\mathbf{X}^*\) for which the flux vector \(\mathbf{v}(\mathbf{X}^*)\) lies in the null space of \(S\): \(\mathbf{v}(\mathbf{X}^*) \in \ker(S)\). If \(S\) is \(m \times n\) with rank \(\rho\), the admissible steady-state flux vectors form a linear subspace of dimension \(n - \rho\); adding irreversibility constraints (\(v_j \geq 0\) for irreversible reactions) restricts this subspace to a convex polyhedral cone, the flux cone.

This theorem is the foundation of flux balance analysis: rather than solving nonlinear kinetic equations, one characterises all feasible steady-state fluxes as a cone and optimises within it.

Example — Steady state of the three-reaction network: Return to the network \(\emptyset \xrightarrow{v_1} A\), \(A \xrightarrow{v_2} B\), \(B \xrightarrow{v_3} \emptyset\), with specific rate laws \(v_1 = 2\) (constant influx), \(v_2 = k_2[A]\) with \(k_2 = 0.5\), \(v_3 = k_3[B]\) with \(k_3 = 1\). \[ \frac{d[A]}{dt} = 2 - 0.5[A], \qquad \frac{d[B]}{dt} = 0.5[A] - [B]. \]

Setting both to zero: from the first equation, \([A]^* = 2/0.5 = 4\). Substituting into the second: \([B]^* = 0.5 \times 4 / 1 = 2\). The unique steady state is \([A]^* = 4\,\mu\text{M}\), \([B]^* = 2\,\mu\text{M}\).

Verification: \(v_1 = 2\), \(v_2 = 0.5 \times 4 = 2\), \(v_3 = 1 \times 2 = 2\). All three fluxes are equal at steady state — the system is a flux channel with no accumulation at any node.

Example — Left null space and conservation law: For the stoichiometric matrix \[ S = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}, \] we seek a vector \(\mathbf{c} = (c_1, c_2)^T\) such that \(\mathbf{c}^T S = \mathbf{0}\), i.e., \[ c_1 \begin{pmatrix}1 & -1 & 0\end{pmatrix} + c_2 \begin{pmatrix}0 & 1 & -1\end{pmatrix} = \begin{pmatrix}0 & 0 & 0\end{pmatrix}. \] Column by column: \(c_1 = 0\), \(-c_1 + c_2 = 0\), \(-c_2 = 0\). The only solution is \(\mathbf{c} = \mathbf{0}\) — this network has a trivial left null space, meaning no conservation law of the form \(c_1[A] + c_2[B] = \text{const}\) holds. Indeed, both \([A]\) and \([B]\) change freely since the network has an input (\(v_1\)) and an output (\(v_3\)).

By contrast, for a closed network with only the interconversion \(A \rightleftharpoons B\) (no input or output), the stoichiometric matrix is \(S = \begin{pmatrix}-1 \\ 1\end{pmatrix}\) (one reaction, two species), and \(\mathbf{c} = (1, 1)\) is a left null vector: \(\mathbf{c}^T S = (-1 + 1) = 0\). The conservation law is \([A] + [B] = \text{const}\), which is immediately obvious from the fact that the total amount of \(A\)-and-\(B\)-moiety cannot change.

Remark (Dimension of the left null space): The dimension of the left null space of \(S\) — equivalently, the number of linearly independent vectors \(\mathbf{c}\) with \(\mathbf{c}^T S = \mathbf{0}\) — equals the number of linearly independent conservation laws. Concretely, if \(S\) is \(m \times n\) (m species, n reactions) with rank \(\rho\), then \(\dim(\ker(S^T)) = m - \rho\). Each conservation law reduces the dimension of the ODE system by one: the conserved quantity can be used to eliminate one species, lowering the system from \(m\) to \(m - (m-\rho) = \rho\) independent equations. Finding these conserved quantities is therefore one of the first analytical steps in any model reduction.
Remark (Rank of \(S\) and independent reactions): The rank of the stoichiometric matrix \(S\) equals the number of linearly independent reactions (columns), which is also the number of linearly independent species balances (rows). Linear dependence among the rows signals conservation laws among species, as above; linear dependence among the columns signals reactions whose net stoichiometry is a combination of others. For example, if a network lists the elementary steps \(A \to B\) and \(B \to C\) together with the lumped reaction \(A \to C\), the third column is the sum of the first two and adds nothing new to the flux space. Recognising such dependence is important for model reduction: rather than tracking every listed reaction as independent, one can work with a basis of independent columns, lowering the dimension of the flux space from \(n\) to \(\text{rank}(S)\).
Example — Conservation law in ATP-coupled reactions (energy coupling): In glycolysis, the phosphofructokinase reaction consumes one ATP (converting it to ADP), while downstream substrate-level phosphorylation at pyruvate kinase produces two ATP molecules. Consider a simplified two-step model with species \([A]\) (ATP) and \([B]\) (ADP), and reactions: \[ \text{rxn 1}: \text{ATP} \to \text{ADP} \quad (k_1, \text{ATP-consuming}), \]\[ \text{rxn 2}: \text{ADP} \to \text{ATP} \quad (k_2, \text{ATP-producing}). \]

The stoichiometric matrix (rows: ATP, ADP; columns: rxn 1, rxn 2) is:

\[ S = \begin{pmatrix} -1 & +1 \\ +1 & -1 \end{pmatrix}. \]

The left null space is spanned by \(\mathbf{c} = (1, 1)^T\) since \(\mathbf{c}^T S = (-1+1, +1-1) = (0, 0)\). Therefore the quantity

\[ [\text{ATP}] + [\text{ADP}] = \text{const} \]

is conserved at all times. This is the conservation of total adenylate — ATP and ADP interconvert but their sum is fixed by the total pool. In glycolysis, this conservation law means that every ATP consumed in one step must be produced somewhere else; the cell cannot create adenylate from nothing, it can only rechannel the existing pool. The conservation law vector \(\mathbf{c} = (1,1)\) physically corresponds to counting all adenylate species equally and verifying that the total count is unchanged by each reaction.


Chapter 2: Enzyme Kinetics

2.1 Motivation and the Michaelis–Menten Scheme

Nearly all biochemical reactions in cells are catalysed by enzymes — protein catalysts that dramatically accelerate reaction rates without being consumed. Understanding enzyme kinetics is central to modelling cellular metabolism, signalling, and gene regulation.

The canonical model is the Michaelis–Menten scheme, proposed by Leonor Michaelis and Maud Menten in 1913. A substrate \(S\) binds reversibly to the enzyme \(E\) to form a complex \(ES\), which then releases product \(P\) and regenerates the free enzyme:

\[ E + S \underset{k_{-1}}{\stackrel{k_1}{\rightleftharpoons}} ES \xrightarrow{k_2} E + P. \]

By mass action kinetics, the ODEs for this scheme are:

\[ \frac{d[S]}{dt} = -k_1[E][S] + k_{-1}[ES], \]\[ \frac{d[ES]}{dt} = k_1[E][S] - k_{-1}[ES] - k_2[ES], \]\[ \frac{d[E]}{dt} = -k_1[E][S] + (k_{-1} + k_2)[ES], \]\[ \frac{d[P]}{dt} = k_2[ES]. \]

The enzyme conservation \([E] + [ES] = E_{tot}\) allows us to eliminate \([E]\). This still leaves a nonlinear system, which motivates the key approximation.

2.2 The Quasi-Steady-State Approximation

If the enzyme concentration is much smaller than the substrate concentration (\(E_{tot} \ll [S]\)), the complex \([ES]\) reaches a quasi-steady state rapidly and thereafter tracks the slow variable \([S]\). Setting \(d[ES]/dt \approx 0\):

\[ k_1(E_{tot} - [ES])[S] - (k_{-1} + k_2)[ES] = 0. \]

Solving for \([ES]\):

\[ [ES] = \frac{E_{tot}[S]}{K_M + [S]}, \]

where \(K_M = (k_{-1} + k_2)/k_1\) is the Michaelis constant. The net rate of product formation is then:

\[ v = k_2[ES] = \frac{V_{max}[S]}{K_M + [S]}, \]

with \(V_{max} = k_2 E_{tot}\). This is the celebrated Michaelis–Menten equation.

Definition (Michaelis–Menten Kinetics): The reaction rate for an enzyme-catalysed reaction under the quasi-steady-state approximation is \[ v = \frac{V_{max}[S]}{K_M + [S]}, \]

where \(V_{max}\) is the maximal rate (achieved when all enzyme is saturated) and \(K_M\) is the substrate concentration at half-maximal rate: \(v(K_M) = V_{max}/2\).

The parameter \(K_M\) has an elegant biological interpretation: it measures the enzyme’s affinity for the substrate (lower \(K_M\) means higher affinity). When \([S] \ll K_M\), the rate is approximately linear in \([S]\): \(v \approx (V_{max}/K_M)[S]\), resembling first-order kinetics. When \([S] \gg K_M\), the enzyme is saturated and \(v \approx V_{max}\), giving zeroth-order kinetics.

2.3 Validity of the Quasi-Steady-State Approximation

The QSSA is valid when the complex \([ES]\) equilibrates fast relative to the substrate. A more precise criterion (Segel and Slemrod, 1989) is:

\[ \frac{E_{tot}}{K_M + [S]_0} \ll 1, \]

where \([S]_0\) is the initial substrate concentration. This corrects the naive criterion \(E_{tot} \ll [S]_0\) by including the case when \([S]_0\) is itself small but \(K_M\) is large.

Example — QSSA validity check: Suppose \(E_{tot} = 1\,\mu\text{M}\), \([S]_0 = 10\,\mu\text{M}\), and \(K_M = 5\,\mu\text{M}\). Then: \[ \frac{E_{tot}}{K_M + [S]_0} = \frac{1}{5 + 10} = \frac{1}{15} \approx 0.067 \ll 1. \] The QSSA is valid. Now suppose \(E_{tot} = 3\,\mu\text{M}\), \([S]_0 = 2\,\mu\text{M}\), \(K_M = 0.5\,\mu\text{M}\): \[ \frac{E_{tot}}{K_M + [S]_0} = \frac{3}{0.5 + 2} = \frac{3}{2.5} = 1.2 \not\ll 1. \] The QSSA fails here because the enzyme concentration is comparable to the substrate. In this regime, one must retain the full system of ODEs for \([S]\) and \([ES]\) — the Michaelis–Menten rate equation would give significant errors. Such "high-enzyme" regimes occur in single-cell settings or in assays where enzyme concentration is deliberately elevated to accelerate the reaction.
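
The validity of the approximation can also be probed by direct simulation. The sketch below compares the full mass-action model with the reduced Michaelis–Menten rate law for the first (valid) parameter set above; the individual rate constants \(k_1\), \(k_{-1}\), \(k_2\) are hypothetical choices consistent with \(K_M = 5\,\mu\text{M}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical rate constants chosen so that KM = (k1m + k2)/k1 = 5 uM
k1, k1m, k2 = 1.0, 4.0, 1.0          # 1/(uM*min), 1/min, 1/min
E_tot, S0 = 1.0, 10.0                # uM (the "valid QSSA" case above)
KM, Vmax = (k1m + k2) / k1, k2 * E_tot

def full(t, x):
    S, ES = x
    E = E_tot - ES                   # enzyme conservation
    return [-k1 * E * S + k1m * ES,
             k1 * E * S - (k1m + k2) * ES]

def qssa(t, x):
    S = x[0]
    return [-Vmax * S / (KM + S)]    # Michaelis-Menten rate law

t_span, t_eval = (0, 30), np.linspace(0, 30, 300)
sol_full = solve_ivp(full, t_span, [S0, 0.0], t_eval=t_eval, rtol=1e-8)
sol_qssa = solve_ivp(qssa, t_span, [S0], t_eval=t_eval, rtol=1e-8)

err = np.max(np.abs(sol_full.y[0] - sol_qssa.y[0]))
print(f"max |S_full - S_QSSA| = {err:.3f} uM")  # a few percent of S0 when E_tot << KM + S0
```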

2.4 Enzyme Inhibition

Inhibitors are molecules that reduce enzyme activity. Their mechanisms fall into two main categories.

Definition (Competitive Inhibition): An inhibitor \(I\) is competitive if it binds reversibly to the free enzyme \(E\) at the substrate binding site, forming an inactive complex \(EI\): \[ E + I \underset{k_{-I}}{\stackrel{k_I}{\rightleftharpoons}} EI, \qquad K_I = k_{-I}/k_I. \] The inhibitor-bound enzyme cannot form \(ES\) or produce product. Under the quasi-steady-state approximation the reaction rate becomes \[ v = \frac{V_{max}[S]}{K_M^{app} + [S]}, \qquad K_M^{app} = K_M\!\left(1 + \frac{[I]}{K_I}\right). \] The apparent Michaelis constant is increased, while \(V_{max}\) is unchanged. At sufficiently high \([S]\), competitive inhibition can be fully overcome.
Definition (Non-competitive Inhibition): An inhibitor \(I\) is non-competitive (the symmetric special case of mixed inhibition) if it binds, with equal affinity, a site distinct from the active site on both the free enzyme \(E\) and the enzyme-substrate complex \(ES\). Neither \(EI\) nor \(ESI\) produces product. The rate becomes \[ v = \frac{V_{max}^{app}[S]}{K_M + [S]}, \qquad V_{max}^{app} = \frac{V_{max}}{1 + [I]/K_I}. \] Here \(V_{max}\) is reduced while \(K_M\) is unchanged. No amount of additional substrate can rescue the inhibited velocity.

The distinction has profound practical implications. Competitive inhibitors (such as many drugs targeting enzyme active sites) can be titrated out by high substrate concentrations — the cell can in principle compensate. Non-competitive inhibitors are more potent regulatory tools because their effect is independent of substrate level.

Example — Competitive inhibition: numerical dose-response comparison: Consider an enzyme with \(K_M = 2\,\text{mM}\), \(V_{max} = 10\,\mu\text{M/s}\), and a competitive inhibitor with \(K_I = 1\,\text{mM}\) present at concentration \([I] = 2\,\text{mM}\).

Apparent Michaelis constant:

\[ K_M^{app} = K_M\!\left(1 + \frac{[I]}{K_I}\right) = 2\left(1 + \frac{2}{1}\right) = 2 \times 3 = 6\,\text{mM}. \]

The \(V_{max}\) is unchanged at \(10\,\mu\text{M/s}\). We now compare the rate-versus-substrate curves with and without the inhibitor at three representative substrate concentrations:

\[ \begin{array}{c|c|c} [S]\,(\text{mM}) & v \text{ (no inhibitor)} & v \text{ (with inhibitor)} \\ \hline 2 & 10 \times 2/(2+2) = 5.00 & 10 \times 2/(6+2) = 2.50 \\ 6 & 10 \times 6/(2+6) = 7.50 & 10 \times 6/(6+6) = 5.00 \\ 18 & 10 \times 18/(2+18) = 9.00 & 10 \times 18/(6+18) = 7.50 \end{array} \]

At \([S] = 6\,\text{mM}\) (the new \(K_M^{app}\)), the inhibited enzyme runs at half its \(V_{max}\) — the same rate the uninhibited enzyme achieves at only 2 mM substrate. Increasing \([S]\) to very high values will eventually drive both rates toward the same \(V_{max} = 10\,\mu\text{M/s}\), confirming that competitive inhibition is fully reversible at high substrate. In a cellular context, the cell can partially compensate for a competitive inhibitor by accumulating more substrate — exactly why many drug-resistant mutants overexpress the metabolite being competed over.
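
The dose-response comparison in the table is straightforward to reproduce, and extending it to a non-competitive inhibitor at the same \([I]\) and \(K_I\) makes the contrast between the two mechanisms explicit. A minimal sketch:

```python
import numpy as np

Vmax, KM, KI, I = 10.0, 2.0, 1.0, 2.0        # uM/s, mM, mM, mM (values from the example)
S = np.array([2.0, 6.0, 18.0])               # mM

v_plain   = Vmax * S / (KM + S)
v_comp    = Vmax * S / (KM * (1 + I / KI) + S)        # competitive: KM scaled up
v_noncomp = (Vmax / (1 + I / KI)) * S / (KM + S)      # non-competitive: Vmax scaled down

for s, a, b, c in zip(S, v_plain, v_comp, v_noncomp):
    print(f"[S] = {s:5.1f} mM   none {a:5.2f}   competitive {b:5.2f}   non-competitive {c:5.2f}")
```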

Example — Numerical Michaelis–Menten table: Suppose an enzyme has \(V_{max} = 10\,\mu\text{M/s}\) and \(K_M = 2\,\text{mM}\). Compute the rate \(v = V_{max}[S]/(K_M + [S])\) at several substrate concentrations: \[ \begin{array}{c|c|c} [S]\,(\text{mM}) & v\,(\mu\text{M/s}) & \% V_{max} \\ \hline 1 & 10 \times 1/(2+1) = 3.33 & 33\% \\ 2 & 10 \times 2/(2+2) = 5.00 & 50\% \\ 5 & 10 \times 5/(2+5) = 7.14 & 71\% \\ 10 & 10 \times 10/(2+10) = 8.33 & 83\% \\ 100 & 10 \times 100/(2+100) = 9.80 & 98\% \end{array} \]

As predicted by the definition, \(v(K_M) = v(2) = V_{max}/2 = 5\,\mu\text{M/s}\). At \([S] = 100\,\text{mM}\) (fifty times \(K_M\)), the enzyme is 98% saturated. The rate approaches \(V_{max}\) asymptotically — full saturation is never achieved at finite substrate concentration.

Substrate concentration for 90% of \(V_{max}\): We want \([S]\) such that \(v = 0.9\,V_{max}\):

\[ 0.9\,V_{max} = \frac{V_{max}[S]}{K_M + [S]} \implies 0.9(K_M + [S]) = [S] \implies [S] = \frac{0.9\,K_M}{0.1} = 9\,K_M. \]

For \(K_M = 2\,\text{mM}\), the substrate concentration giving 90% maximal rate is \([S]_{90} = 9 \times 2 = 18\,\text{mM}\). This confirms the general rule: achieving 90% saturation requires a substrate concentration nine times the Michaelis constant, illustrating how the hyperbolic curve becomes increasingly insensitive at high \([S]\).

Remark (Parameter Estimation — the Lineweaver–Burk Plot): Taking the reciprocal of the Michaelis–Menten equation gives the double reciprocal (Lineweaver–Burk) form: \[ \frac{1}{v} = \frac{K_M}{V_{max}} \cdot \frac{1}{[S]} + \frac{1}{V_{max}}. \] This is linear in \(1/[S]\): a plot of \(1/v\) (vertical axis) versus \(1/[S]\) (horizontal axis) is a straight line with slope \(K_M/V_{max}\) and \(y\)-intercept \(1/V_{max}\). The \(x\)-intercept is \(-1/K_M\). Although the Lineweaver–Burk plot has statistical drawbacks (errors in \(v\) are amplified near \([S] = 0\)), it remains a standard teaching tool for visualising the effect of inhibitors: competitive inhibition rotates the line about the \(y\)-intercept (unchanged \(V_{max}\)), while non-competitive inhibition shifts the line upward in parallel (unchanged slope, larger \(y\)-intercept).

2.5 Cooperativity and the Hill Equation

Many enzymes and receptors display cooperativity: binding of one ligand molecule alters the affinity of subsequent ligand molecules. A classic example is haemoglobin, where binding of one oxygen molecule increases affinity for subsequent molecules (positive cooperativity).

The Hill equation phenomenologically captures sigmoidal binding curves:

\[ v = \frac{V_{max}[S]^n}{K_{half}^n + [S]^n}, \]

where \(n\) is the Hill coefficient and \(K_{half}\) is the concentration giving half-maximal activity. For \(n = 1\), the Hill equation reduces to Michaelis–Menten kinetics. For \(n > 1\), the response is sigmoidal — the curve rises steeply around \(K_{half}\), creating a switch-like response that is highly biologically relevant. Hill coefficients \(n > 1\) indicate positive cooperativity; \(n < 1\) indicates negative cooperativity.

The sigmoidal response from cooperativity is a core design principle in cell biology: it allows biochemical switches that respond sharply to small changes in ligand concentration, filtering noise while amplifying biologically meaningful signals.

Remark (The Hill coefficient is phenomenological): The Hill coefficient \(n\) in the Hill equation is a curve-fitting parameter — it does not directly count the number of binding sites. For haemoglobin, which has four subunits and four oxygen-binding sites (so the maximum possible \(n\) is 4), the experimentally measured Hill coefficient is only \(n \approx 2.8\). Perfect cooperativity — in which binding of the first ligand immediately and completely locks all remaining sites into the high-affinity state — would give \(n = N\) (the total number of sites). Real proteins fall between the non-cooperative limit (\(n = 1\)) and the perfectly cooperative limit (\(n = N\)).

More mechanistic descriptions of cooperativity are provided by the Monod–Wyman–Changeux (MWC) model (concerted conformational change between T and R states) and the Koshland–Nemethy–Filmer (KNF) model (sequential induced-fit). Both models can reproduce the sigmoidal curve, but with different predictions for intermediate occupancy states that are experimentally distinguishable. The Hill equation trades this mechanistic detail for mathematical tractability: it provides an excellent phenomenological description but should not be interpreted as a literal physical mechanism. The appropriate use of the Hill equation is to quantify the sharpness of a sigmoidal response and to test whether a process is more or less cooperative than simple Michaelis–Menten, not to infer molecular mechanism.

Example — Hill equation: sigmoidal vs. hyperbolic response: Consider an enzyme with \(V_{max} = 20\,\mu\text{M/s}\) and \(K_{half} = 5\,\mu\text{M}\). Compare the Michaelis–Menten case (\(n=1\)) with a cooperative case (\(n=2\)):

For \(n=1\): \(v = 20[S]/(5+[S])\). For \(n=2\): \(v = 20[S]^2/(25+[S]^2)\).

\[ \begin{array}{c|c|c} [S]\,(\mu\text{M}) & v \text{ (n=1)} & v \text{ (n=2)} \\ \hline 2 & 20\times2/7 \approx 5.71 & 20\times4/29 \approx 2.76 \\ 5 & 20\times5/10 = 10.00 & 20\times25/50 = 10.00 \\ 10 & 20\times10/15 \approx 13.33 & 20\times100/125 = 16.00 \end{array} \]

The key observations are: (1) Both curves pass through half-maximal rate \(v = 10\) at \([S] = K_{half} = 5\), by definition. (2) For \([S] < K_{half}\), the cooperative curve (\(n=2\)) lies below the Michaelis–Menten curve — the enzyme is less active at low substrate. (3) For \([S] > K_{half}\), the cooperative curve rises more steeply toward \(V_{max}\). This creates the switch-like sigmoidal character: the enzyme is essentially off below \(K_{half}\) and essentially on above it, with a sharper transition than the hyperbolic Michaelis–Menten curve. The higher \(n\), the sharper the switch.


Chapter 3: Dynamics and Stability Analysis

3.1 Phase Plane Analysis

For a two-dimensional ODE system

\[ \frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = g(x, y), \]

the phase plane is the \((x, y)\) space in which the trajectory of a solution is traced. A central tool is the nullcline: the \(x\)-nullcline is the set of points where \(dx/dt = 0\) (i.e., \(f(x,y) = 0\)), and the \(y\)-nullcline where \(dy/dt = 0\). Fixed points (steady states) occur at the intersections of nullclines.

By drawing nullclines and noting the direction of flow in each region, one can often determine qualitative behaviour without solving equations explicitly. This geometric approach is particularly powerful for biological systems, where precise parameter values may be unknown.

3.2 Linearisation and the Jacobian

Near a fixed point \(\mathbf{x}^*\), the dynamics of small perturbations \(\delta\mathbf{x} = \mathbf{x} - \mathbf{x}^*\) are governed by the linearised system:

\[ \frac{d\,\delta\mathbf{x}}{dt} \approx J\,\delta\mathbf{x}, \]

where \(J\) is the Jacobian matrix evaluated at \(\mathbf{x}^*\):

\[ J_{ij} = \frac{\partial f_i}{\partial x_j}\bigg|_{\mathbf{x}^*}. \]

The stability of the fixed point is determined by the eigenvalues \(\lambda\) of \(J\). If all eigenvalues satisfy \(\text{Re}(\lambda) < 0\), the fixed point is locally asymptotically stable: perturbations decay and trajectories return to \(\mathbf{x}^*\). If any eigenvalue has \(\text{Re}(\lambda) > 0\), the fixed point is unstable.

Stability Classification (2D systems): For a 2×2 Jacobian with trace τ = tr(J) and determinant Δ = det(J):
  • Δ < 0: saddle point (always unstable; one eigenvalue positive, one negative)
  • Δ > 0, τ < 0: stable node or spiral (perturbations decay)
  • Δ > 0, τ > 0: unstable node or spiral
  • τ = 0, Δ > 0: centre (purely imaginary eigenvalues; neutrally stable at the linear level)

3.3 Worked Linearisation: The Predator–Prey System

Before discussing bifurcations, we work through a complete linearisation example that illustrates the general method.

Example — Predator–prey (Lotka–Volterra) stability analysis: Consider the classical predator–prey system \[ \frac{dx}{dt} = \alpha x - \beta xy, \qquad \frac{dy}{dt} = \delta xy - \gamma y, \] where \(x\) is prey abundance, \(y\) is predator abundance, and \(\alpha, \beta, \delta, \gamma > 0\). The coexistence fixed point (besides the trivial one at the origin) is \[ (x^*, y^*) = \left(\frac{\gamma}{\delta},\, \frac{\alpha}{\beta}\right). \] The Jacobian of the system is \[ J = \begin{pmatrix} \alpha - \beta y & -\beta x \\ \delta y & \delta x - \gamma \end{pmatrix}. \] Evaluating at the coexistence fixed point: \[ J\big|_{(x^*,y^*)} = \begin{pmatrix} \alpha - \beta(\alpha/\beta) & -\beta(\gamma/\delta) \\ \delta(\alpha/\beta) & \delta(\gamma/\delta) - \gamma \end{pmatrix} = \begin{pmatrix} 0 & -\beta\gamma/\delta \\ \delta\alpha/\beta & 0 \end{pmatrix}. \]

Eigenvalues: The characteristic polynomial is \(\lambda^2 - \text{tr}(J)\lambda + \det(J) = \lambda^2 + (\beta\gamma/\delta)(\delta\alpha/\beta) = \lambda^2 + \alpha\gamma = 0\). So \(\lambda = \pm i\sqrt{\alpha\gamma}\) — purely imaginary eigenvalues.

Classification: The linear analysis gives a centre (\(\tau = 0\), \(\Delta > 0\)). Linear theory predicts neutral stability — perturbations neither grow nor decay but orbit the fixed point.

The nonlinear system, however, possesses a conserved quantity: \[ H(x,y) = \delta x - \gamma\ln x + \beta y - \alpha\ln y = \text{const}. \]

One can verify \(dH/dt = 0\) along solutions. This means orbits are truly closed — the predator–prey cycles are periodic, not merely approximately so. The conserved quantity \(H\) is the biological analogue of energy in a Hamiltonian system.
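
The conservation of \(H\) is easy to verify numerically. A minimal sketch, with the hypothetical parameter choice \(\alpha = \beta = \delta = \gamma = 1\):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.0, 1.0, 1.0, 1.0      # hypothetical values

def lv(t, z):
    x, y = z
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

def H(x, y):
    return delta * x - gamma * np.log(x) + beta * y - alpha * np.log(y)

sol = solve_ivp(lv, (0, 50), [2.0, 0.5], t_eval=np.linspace(0, 50, 2000),
                rtol=1e-10, atol=1e-10)
Hvals = H(sol.y[0], sol.y[1])
print(f"H varies by {Hvals.max() - Hvals.min():.2e} over the orbit")  # integration error only
```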

Remark: The purely imaginary eigenvalues at the Lotka–Volterra fixed point are a degenerate case: linear theory cannot determine whether the nonlinear system has a centre, stable spiral, or unstable spiral. The conserved quantity \(H\) resolves this ambiguity and confirms true periodicity. In real ecological or biochemical systems, such perfect conservation is fragile — small modifications (e.g., logistic prey growth) typically break the conservation law and convert the centre to a stable spiral or a limit cycle.
Example — Brusselator: linearisation and stability classification: The Brusselator is a minimal two-variable model of an autocatalytic oscillator. With species \(x\) and \(y\) and parameters \(a > 0\), \(b > 0\): \[ \frac{dx}{dt} = 1 - (b+1)x + ax^2 y, \qquad \frac{dy}{dt} = bx - ax^2 y. \]

Steady state: Adding the two equations gives \(d(x+y)/dt = 1 - x\), so at steady state \(x^* = 1\). Substituting into the \(dy/dt = 0\) equation: \(b - ay^* = 0\), giving \(y^* = b/a\). The unique positive steady state is \((x^*, y^*) = (1, b/a)\).

Jacobian: The partial derivatives of \(f = 1 - (b+1)x + ax^2 y\) and \(g = bx - ax^2 y\) are:

\[ J = \begin{pmatrix} -(b+1) + 2axy & ax^2 \\ b - 2axy & -ax^2 \end{pmatrix}. \]

Evaluating at \((1, b/a)\):

\[ J\big|_{(1,\,b/a)} = \begin{pmatrix} -(b+1) + 2b & a \\ b - 2b & -a \end{pmatrix} = \begin{pmatrix} b - 1 & a \\ -b & -a \end{pmatrix}. \]

Numerical case \(a = 1\), \(b = 2\): The Jacobian becomes

\[ J = \begin{pmatrix} 1 & 1 \\ -2 & -1 \end{pmatrix}. \]

The trace is \(\tau = 1 + (-1) = 0\) and the determinant is \(\Delta = (1)(-1) - (1)(-2) = -1 + 2 = 1 > 0\). Since \(\tau = 0\) and \(\Delta > 0\), the eigenvalues are purely imaginary: \(\lambda = \pm i\sqrt{\Delta} = \pm i\). The linear analysis classifies this as a centre — the system is on the boundary between stable and unstable. For the Brusselator, a nonlinear analysis (or direct numerical simulation) shows that this boundary case is in fact a Hopf bifurcation point: for \(b > a + 1\), the trace \(\tau = b - 1 - a > 0\) and the steady state is unstable, with trajectories spiralling outward onto a stable limit cycle. The Brusselator therefore exhibits sustained oscillations whenever \(b > a + 1\).
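
The onset of oscillations past the Hopf point can be seen directly by simulation. A minimal sketch with \(a = 1\) and \(b = 2.5\) (just beyond \(b = a + 1\)), where trajectories leave the unstable fixed point and settle onto the limit cycle:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 2.5        # b > a + 1, so the fixed point (1, b/a) is unstable

def brusselator(t, z):
    x, y = z
    return [1 - (b + 1) * x + a * x**2 * y,
            b * x - a * x**2 * y]

# start near the unstable fixed point and integrate long enough to reach the cycle
sol = solve_ivp(brusselator, (0, 100), [1.01, b / a],
                t_eval=np.linspace(0, 100, 5000), rtol=1e-9)

# amplitude of the late-time oscillation in x (zero would mean a stable fixed point)
x_late = sol.y[0, sol.t > 60]
print(f"x oscillates between {x_late.min():.2f} and {x_late.max():.2f}")
```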

3.4 Bifurcation Theory

A bifurcation occurs when a qualitative change in system behaviour (number or stability of fixed points, appearance of a limit cycle) happens as a parameter is continuously varied. Bifurcation analysis reveals how biological systems switch between functional states.

Saddle-node bifurcation: As a parameter \(\mu\) increases through a critical value \(\mu_c\), two fixed points (one stable, one unstable) collide and annihilate. For \(\mu < \mu_c\) there are two fixed points; for \(\mu > \mu_c\) there are none. This mechanism underlies the sudden appearance or disappearance of stable states in biological switches.

Definition (Hopf Bifurcation): A Hopf bifurcation occurs at a fixed point \(\mathbf{x}^*(\mu)\) as a parameter \(\mu\) passes through a critical value \(\mu_c\) if:
  1. At \(\mu = \mu_c\), the Jacobian has a pair of purely imaginary eigenvalues \(\lambda = \pm i\omega\) (\(\omega > 0\)) and no zero eigenvalue.
  2. The eigenvalues cross the imaginary axis transversally: \(\frac{d}{d\mu}\text{Re}(\lambda(\mu))\big|_{\mu=\mu_c} \neq 0\).
In a supercritical Hopf bifurcation, a stable limit cycle of small amplitude is born as \(\mu\) crosses \(\mu_c\) in the direction of instability. In a subcritical Hopf bifurcation, an unstable limit cycle shrinks onto the fixed point as \(\mu\) crosses \(\mu_c\), and the resulting instability may jump to a large-amplitude limit cycle or unbounded behaviour.

The Hopf bifurcation is the primary mechanism generating sustained oscillations in biological models — circadian rhythms, cell cycle oscillations, and neural firing all arise through Hopf bifurcations.

Example — The genetic toggle switch and saddle-node bifurcation: The toggle switch of Gardner et al. (2000) consists of two genes that mutually repress each other. Let \(u\) and \(v\) denote the protein concentrations of repressor 1 and repressor 2 respectively. With Hill-function repression and equal parameters for simplicity: \[ \frac{du}{dt} = \frac{\alpha_1}{1 + v^\beta} - u, \qquad \frac{dv}{dt} = \frac{\alpha_2}{1 + u^\gamma} - v. \]

Nullcline analysis: The \(u\)-nullcline (where \(du/dt = 0\)) gives \(u = \alpha_1 / (1 + v^\beta)\), a decreasing function of \(v\). The \(v\)-nullcline gives \(v = \alpha_2 / (1 + u^\gamma)\), a decreasing function of \(u\). Equivalently, on the \((u, v)\) plane, the \(v\)-nullcline can be written as \(u = ((\alpha_2/v) - 1)^{1/\gamma}\).

Monostable regime: If \(\alpha_1\), \(\alpha_2\) are small or the Hill coefficients \(\beta\), \(\gamma\) are close to 1 (weak cooperativity), the two nullclines cross at exactly one point — the system has a unique stable steady state.

Bistable regime: As \(\alpha_1\), \(\alpha_2\) increase or the cooperativity (\(\beta\), \(\gamma\)) strengthens, the nullclines develop an S-shape and can cross at three points. The two outer intersections are stable nodes; the middle intersection is a saddle (unstable). The transition from one intersection to three intersections occurs via a pair of saddle-node bifurcations — one stable node and one saddle appear simultaneously as a parameter is tuned.

The two stable states correspond to “gene 1 ON, gene 2 OFF” and “gene 1 OFF, gene 2 ON” — two distinct cellular phenotypes. A transient signal (e.g., a pulse of inducer) can push the system past the saddle point (separatrix), flipping it irreversibly into the other state even after the signal is removed. This is the mathematical basis of bistable biological memory, and it was engineered synthetically by Gardner et al. in E. coli.
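
The bistability can be demonstrated by integrating the toggle-switch equations from different initial conditions. A minimal sketch using a hypothetical symmetric parameter set (\(\alpha_1 = \alpha_2 = 10\), \(\beta = \gamma = 2\)) that lies in the bistable regime:

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, beta, gamma = 10.0, 10.0, 2.0, 2.0   # hypothetical bistable parameter set

def toggle(t, z):
    u, v = z
    return [a1 / (1 + v**beta) - u,
            a2 / (1 + u**gamma) - v]

for u0, v0 in [(5.0, 0.1), (0.1, 5.0)]:       # two initial conditions, one per basin
    sol = solve_ivp(toggle, (0, 50), [u0, v0], rtol=1e-8)
    u_ss, v_ss = sol.y[:, -1]
    print(f"start ({u0}, {v0})  ->  steady state (u, v) = ({u_ss:.2f}, {v_ss:.2f})")
# the two starts converge to different stable states: (high u, low v) and (low u, high v)
```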

3.5 Limit Cycles and the Goodwin Oscillator

A limit cycle is an isolated closed trajectory in the phase plane: nearby trajectories spiral toward it (stable limit cycle) or away from it (unstable limit cycle). Unlike the closed orbits of the Lotka–Volterra system (which form a continuous family parameterised by the conserved \(H\)), a limit cycle is structurally stable — it persists under small perturbations of the system.

Definition (Limit Cycle): A limit cycle is an isolated periodic orbit \(\Gamma\) in the phase space of a continuous dynamical system. It is stable (attracting) if neighbouring trajectories converge to \(\Gamma\) as \(t \to \infty\), and unstable (repelling) if they diverge. Stable limit cycles are the mathematical model for sustained biological oscillations.

The Goodwin oscillator is the canonical minimal model for a biological negative-feedback oscillator — a single gene producing a protein that ultimately represses its own synthesis through a chain of intermediate steps. With \(m\) intermediate species \(X_1, \ldots, X_m\) and the last species repressing production of the first:

\[ \frac{dX_1}{dt} = \frac{v_1}{1 + (X_m/K)^n} - k_1 X_1, \]\[ \frac{dX_i}{dt} = k_{i-1} X_{i-1} - k_i X_i, \quad i = 2, \ldots, m. \]

Each step degrades the previous species and produces the next, with Hill-function repression closing the loop.

Example — Goodwin oscillator: why \(n > 8\) is needed for \(m = 3\) (qualitative analysis): For the Goodwin model to exhibit limit cycle oscillations, the unique interior fixed point must be unstable, requiring a Hopf bifurcation. Linearising about the fixed point and closing the feedback loop gives the characteristic equation \[ \prod_{i=1}^{m} (\lambda + k_i) + c \prod_{i=1}^{m-1} k_i = 0, \qquad c = \left|\frac{d}{dX_m}\frac{v_1}{1 + (X_m/K)^n}\right|_{X_m = X_m^*} = \frac{v_1\, n\, (X_m^*)^{n-1}/K^n}{\left(1 + (X_m^*/K)^n\right)^2}, \]

which at the bifurcation point has purely imaginary roots \(\lambda = \pm i\omega\). A classical result (Griffith, 1968) shows that for the symmetric case (\(k_1 = \cdots = k_m = k\)), oscillations are possible only if the Hill coefficient satisfies:

\[ n > \sec^m\!\left(\frac{\pi}{m}\right) = \frac{1}{\cos^m(\pi/m)}. \]

For \(m = 3\) steps, \(\sec(\pi/3) = 2\), and this yields the requirement \(n > 8\). Intuitively: with only 3 intermediate steps, the delay introduced by the chain is modest, and the feedback loop needs very steep (highly cooperative) repression (\(n\) large) to sustain oscillations. With more steps (\(m\) larger), the effective delay is greater and oscillations arise at lower \(n\). The Drosophila circadian clock, for example, achieves the necessary delay through a long chain of phosphorylation events and nuclear transport steps, so an unrealistically large Hill coefficient is not required.
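
The role of the Hill coefficient can be checked by simulation. The sketch below integrates the three-step Goodwin model for a hypothetical parameter set (\(v_1 = K = 1\), \(k_i = 0.1\)) and compares \(n = 4\) with \(n = 10\); with these particular values the repression steepness crosses the oscillation threshold between the two.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical parameter set: v1 = K = 1, equal rate constants k = 0.1, m = 3 steps
v1, K, k = 1.0, 1.0, 0.1

def goodwin(t, X, n):
    X1, X2, X3 = X
    return [v1 / (1 + (X3 / K)**n) - k * X1,
            k * X1 - k * X2,
            k * X2 - k * X3]

for n in (4, 10):
    sol = solve_ivp(goodwin, (0, 4000), [0.1, 0.1, 0.1], args=(n,),
                    t_eval=np.linspace(0, 4000, 40000), rtol=1e-8)
    x3_late = sol.y[2, sol.t > 3000]                 # discard the transient
    amp = x3_late.max() - x3_late.min()
    print(f"n = {n:2d}:  late-time amplitude of X3 = {amp:.3f}")
# with these values n = 4 decays to the fixed point while n = 10 sustains oscillations
```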

3.6 Bistability

A system is bistable if it has two coexisting stable steady states. In the phase plane, this typically appears as an S-shaped (sigmoidal) nullcline intersecting another nullcline at three points: two stable nodes separated by an unstable saddle. The two stable states correspond to distinct biological phenotypes, and the saddle defines the threshold (separatrix) between the two basins of attraction.

Bistability requires some form of positive feedback or cooperative nonlinearity. Once a bistable system commits to one state, it remains there even if the stimulus that triggered the switch is removed — the system exhibits hysteresis as a parameter sweeps back and forth. This irreversibility is fundamental to biological decision-making (cell differentiation, the mitotic switch).

Remark (Cell differentiation as bistability): During embryonic development, a pluripotent stem cell must commit to one of several terminal lineages — muscle, neuron, liver cell, etc. At the molecular level, each lineage is characterised by a distinct gene expression program maintained by transcription factor networks with mutual positive feedback (e.g., two lineage-specifying factors each activate themselves and repress the other). The resulting bistable or multi-stable network means that the cell has multiple attractors, each corresponding to a different lineage.

External signals (morphogens such as BMP, Wnt, or Notch ligands) act as bifurcation parameters: at low concentrations, only one attractor is accessible; as the morphogen concentration increases past a threshold (the bifurcation point), a new attractor appears and the original one destabilises. Once the cell crosses the separatrix and enters the new basin of attraction, it remains committed to that lineage even after the morphogen signal is withdrawn. This is the epigenetic landscape picture of Waddington (1957), now given a precise mathematical interpretation through bistability and bifurcation theory. Importantly, the irreversibility of differentiation is not a property of any single gene but of the network topology — a sobering reminder that modelling individual molecular interactions without the network context misses the essential biology.

3.7 Separation of Time Scales

When a system contains processes occurring at very different rates, one can exploit separation of time scales to reduce model complexity. Fast variables relax quickly to a manifold determined by the slow variables; one can then apply a quasi-steady-state approximation for the fast variables and obtain a lower-dimensional system for the slow dynamics. This is the mathematical justification for the Michaelis–Menten QSSA and underlies many model reductions in cell biology.

Example — Time-scale separation in a two-variable system: Consider \[ \frac{dx}{dt} = -x + y^2, \qquad \frac{dy}{dt} = -\frac{1}{\varepsilon}(y - \sin x), \] with \(\varepsilon \ll 1\) (e.g., \(\varepsilon = 0.01\)). The variable \(y\) is fast: it relaxes to \(y \approx \sin x\) on a time scale \(\varepsilon\), while \(x\) evolves slowly. Setting \(dy/dt \approx 0\) gives the slow manifold \(y^* = \sin x\). Substituting back: \(dx/dt \approx -x + \sin^2 x\). This single ODE governs the slow dynamics; its only equilibrium is \(x = 0\), which is stable (near the origin \(\sin x \approx x\), so \(dx/dt \approx -x + x^2 \approx -x\)). The two-dimensional system has been reduced to a one-dimensional problem on the slow manifold, with a great reduction in complexity and a transparent qualitative picture.
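
The same reduction can be checked numerically. A minimal sketch comparing the full stiff system with the one-dimensional reduced equation on the slow manifold (\(\varepsilon = 0.01\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01

def full(t, z):
    x, y = z
    return [-x + y**2,
            -(y - np.sin(x)) / eps]        # fast relaxation of y toward sin(x)

def reduced(t, x):
    return [-x[0] + np.sin(x[0])**2]       # slow dynamics on the manifold y = sin(x)

t_eval = np.linspace(0, 10, 500)
sol_full = solve_ivp(full, (0, 10), [2.0, 0.0], t_eval=t_eval,
                     rtol=1e-8, method='LSODA')        # stiff solver for the full system
sol_red  = solve_ivp(reduced, (0, 10), [2.0], t_eval=t_eval, rtol=1e-8)

err = np.max(np.abs(sol_full.y[0] - sol_red.y[0]))
print(f"max |x_full - x_reduced| over [0, 10] = {err:.4f}")   # small, of order eps
```
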
Remark (Validity criterion for the QSSA — Segel–Slemrod, 1989): The quasi-steady-state approximation is often justified by the intuitive condition \(E_{tot} \ll [S]_0\) — the enzyme is much less abundant than the substrate. This condition was used for decades, but it is more restrictive than necessary. The sharper validity criterion, derived by Segel and Slemrod (1989) using singular perturbation theory, is: \[ E_{tot} \ll K_M + [S]_0. \]

This is the Segel–Slemrod criterion. The difference matters when the initial substrate concentration \([S]_0\) is small: the classical condition \(E_{tot} \ll [S]_0\) may then fail even though the QSSA remains perfectly valid, because a large \(K_M\) keeps the complex concentration low. The correct intuition is that the QSSA holds when only a small fraction of the substrate is sequestered in the complex, and the concentration scale that governs complex formation is \(K_M + [S]_0\), not \([S]_0\) alone. The lesson is broader: approximations in nonlinear models must be validated by asymptotic analysis, not merely by a casual comparison of concentrations.


Chapter 4: Metabolic Networks

4.1 Metabolic Pathways and Feedback Inhibition

Metabolic networks consist of chains and cycles of enzyme-catalysed reactions that convert nutrient substrates into energy and biosynthetic precursors. A canonical motif is the linear biosynthetic pathway:

\[ S \xrightarrow{E_1} M_1 \xrightarrow{E_2} M_2 \xrightarrow{E_3} P, \]

where \(S\) is the substrate, \(M_i\) are intermediates, and \(P\) is the end product. A common regulatory design is end-product (feedback) inhibition: the end product \(P\) inhibits the first enzyme \(E_1\), preventing wasteful overproduction of \(P\) when its concentration is already high. This is a classic example of negative feedback securing homeostasis — maintaining product levels near a set point despite fluctuating inputs.

The inhibition is often allosteric: \(P\) binds a regulatory site on \(E_1\), inducing a conformational change that reduces activity. The Hill equation captures the sigmoidal dependence of this inhibition on \([P]\).

4.2 Metabolic Control Analysis

A central question in metabolic network modelling is: how is control over the steady-state metabolic flux distributed among the enzymes? Metabolic Control Analysis (MCA) provides a quantitative framework through dimensionless sensitivity coefficients.

The flux control coefficient measures the relative change in steady-state flux \(J\) in response to a relative change in the activity (or level) of enzyme \(E_i\):

\[ C^J_{E_i} = \frac{\partial J}{\partial E_i} \cdot \frac{E_i}{J} = \frac{d\ln J}{d\ln E_i}. \]

A coefficient \(C^J_{E_i} = 0.8\) means that a 1% increase in enzyme \(i\) increases steady-state flux by 0.8% — enzyme \(i\) exerts substantial control.

Summation Theorem (MCA): For any metabolic pathway, the flux control coefficients over all enzymes sum to unity: \[ \sum_i C^J_{E_i} = 1. \]

This theorem, discovered independently by Kacser and Burns and by Heinrich and Rapoport in the early 1970s, is more surprising than it looks. The naive biochemist’s intuition is that one enzyme — the “rate-limiting step” — controls the flux, with all others playing a secondary role. If that were true, the sum would be dominated by a single coefficient near 1, with the rest near 0. What Kacser and Burns showed experimentally is that control is typically distributed: in many real pathways, no single enzyme has \( C^J_{E_i} > 0.5 \), and the sum constraint forces the coefficients to compete. The theorem thus quantitatively refutes the “rate-limiting step” dogma and suggests that interventions (drug targets, metabolic engineering) should account for systemic redistribution of control, not just the single bottleneck enzyme. It is a conservation law for regulatory influence.

Elasticity coefficients describe how an individual enzyme’s rate responds to changes in its substrates or products:

\[ \varepsilon^{v_i}_{S} = \frac{\partial \ln v_i}{\partial \ln [S]}. \]

These are local properties of individual enzymes (measured in isolation), while flux control coefficients are global properties of the network. The connectivity theorem relates the two:

\[ \sum_i C^J_{E_i} \varepsilon^{v_i}_{S} = 0, \]

for any intermediate metabolite \(S\). MCA thus provides a powerful framework for understanding how enzymatic parameters determine the behaviour of an entire metabolic system.

Example — Flux control in a two-enzyme linear pathway with Michaelis–Menten kinetics: Consider the pathway \(A \xrightarrow{E_1} B \xrightarrow{E_2} C\), where substrate \(A\) is maintained at fixed concentration \([A]_0\) (an input pool), and the product \(C\) is drained at a fixed rate. At steady state, both enzymes carry the same flux \(J\). Using Michaelis–Menten kinetics for each enzyme, the elasticity coefficients of \(E_1\) with respect to \([B]\) (its product) and \(E_2\) with respect to \([B]\) (its substrate) encode how sensitive each enzyme is to the shared intermediate \([B]\).

For a Michaelis–Menten enzyme operating at fractional saturation \(\theta = [S]/(K_M + [S])\), the substrate elasticity is \(\varepsilon^v_S = 1 - \theta\): an enzyme far from saturation responds almost proportionally to its substrate (\(\varepsilon \approx 1\)), while a saturated enzyme barely responds (\(\varepsilon \approx 0\)). In this pathway \([B]\) is the substrate of \(E_2\), so \(\varepsilon^{v_2}_B = 1 - \theta_2 > 0\); for \(E_1\), \([B]\) is the product, so \(\varepsilon^{v_1}_B \leq 0\), and it is non-zero only if \(E_1\) is reversible or product-inhibited. If \(v_1\) is completely insensitive to \([B]\), the upstream enzyme sets the flux outright (\(C^J_{E_1} = 1\)); it is the product sensitivity of \(E_1\) that allows control to be shared.

Consider the limiting case in which \(E_2\) operates far from saturation in \([B]\) (\(\varepsilon^{v_2}_B \approx 1\)) while \(E_1\) is only weakly product-sensitive (\(\varepsilon^{v_1}_B \approx 0\)):

  • \(E_2\) readily speeds up when \([B]\) accumulates; it can absorb whatever flux \(E_1\) delivers.
  • \(E_1\) barely responds to \([B]\); its rate is effectively set by the fixed input \([A]_0\) and by its own amount.

In this regime, the connectivity theorem \(C^J_{E_1}\varepsilon^{v_1}_B + C^J_{E_2}\varepsilon^{v_2}_B = 0\) forces \(C^J_{E_2} \to 0\), and the summation theorem then gives \(C^J_{E_1} \to 1\): control rests with the upstream enzyme. Conversely, when \(E_2\) is saturated with \([B]\) (\(\varepsilon^{v_2}_B \approx 0\)) and \(E_1\) is strongly product-inhibited, \(C^J_{E_1} \to 0\) and \(C^J_{E_2} \to 1\): the saturated downstream enzyme is the bottleneck, and only increasing its amount can raise the flux. The summation theorem acts as a conservation law on control: as one enzyme gains control, the other must lose it, and in this simple linear pathway both coefficients remain between 0 and 1 with a total of exactly unity.
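
Flux control coefficients can also be estimated numerically by perturbing each enzyme level and recomputing the steady-state flux. The sketch below uses a hypothetical pathway of this type: \(E_1\) saturated with the fixed input and product-inhibited by \([M]\), \(E_2\) Michaelis–Menten in \([M]\); all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# hypothetical two-enzyme pathway: A (fixed) -> M -> P
# v1 is saturated with A but product-inhibited by M; v2 is Michaelis-Menten in M
Vmax1, Ki, Vmax2, Km2 = 1.0, 1.0, 2.0, 1.0

def steady_flux(e1, e2):
    v1 = lambda M: e1 * Vmax1 / (1 + M / Ki)
    v2 = lambda M: e2 * Vmax2 * M / (Km2 + M)
    M_ss = brentq(lambda M: v1(M) - v2(M), 1e-9, 1e3)   # solve v1(M) = v2(M)
    return v1(M_ss)

J0 = steady_flux(1.0, 1.0)
d = 0.01                                                 # 1% perturbation in enzyme level
C1 = (steady_flux(1 + d, 1.0) - J0) / J0 / d
C2 = (steady_flux(1.0, 1 + d) - J0) / J0 / d
print(f"J = {J0:.3f},  C1 = {C1:.3f},  C2 = {C2:.3f},  sum = {C1 + C2:.3f}")
# for these values the coefficients come out near 2/3 and 1/3, summing to ~1
```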

4.3 Flux Control Coefficients: A Worked Example

The summation theorem is most illuminating through a concrete calculation.

Example — Flux control in a two-enzyme pathway: Consider a linear pathway \(S \xrightarrow{E_1} M \xrightarrow{E_2} P\), with Michaelis–Menten kinetics for each enzyme. Suppose that at the operating steady state, MCA gives flux control coefficients \(C^J_{E_1} = 0.8\) and \(C^J_{E_2} = 0.2\) (the summation theorem is satisfied: \(0.8 + 0.2 = 1\)). Now suppose the activity of \(E_2\) is reduced by 50%, for example by a partial inhibitor. To first order, the relative flux change is \[ \frac{\Delta J}{J} \approx C^J_{E_2} \cdot \frac{\Delta E_2}{E_2} = 0.2 \times (-0.5) = -0.10. \]

The pathway flux decreases by approximately 10% — a non-negligible effect despite \(E_2\) not being the “rate-limiting” enzyme. The naive intuition is wrong; even the less-controlling enzyme contributes measurably.

Drug target implications: If we aim to reduce flux by 50% (e.g., to inhibit a pathogen), then by the same first-order estimate, acting on \(E_1\) alone requires a \(0.50/0.8 = 62.5\%\) reduction in \(E_1\) activity, while acting on \(E_2\) alone would require a \(0.50/0.2 = 250\%\) reduction — impossible. A combination targeting both enzymes could achieve the goal at lower doses of each inhibitor, with potential for synergy. This systems-level reasoning, enabled by MCA, is the basis for rational polypharmacology in metabolic diseases.

4.4 Flux Balance Analysis

For large-scale metabolic networks (thousands of reactions), the full kinetic details are unknown. Flux balance analysis (FBA) bypasses kinetics by imposing the steady-state constraint \(S \cdot \mathbf{v} = \mathbf{0}\) and then using linear programming to find the flux distribution \(\mathbf{v}\) that maximises a biological objective (typically growth or ATP production), subject to thermodynamic and capacity constraints. FBA has been remarkably successful at predicting growth phenotypes and the effects of gene knockouts in bacteria and yeast.

Remark (FBA predictive accuracy and the optimality assumption): FBA applied to the genome-scale metabolic model of Escherichia coli (iJR904, with 904 genes and over 900 reactions) correctly predicts growth rates on different carbon sources with surprising quantitative accuracy. More strikingly, gene knockout predictions — whether deleting a specific metabolic gene kills the cell or is neutral — are correct in approximately 80–90% of cases when compared to experimental Keio collection data. This is remarkable given that FBA contains no kinetic parameters whatsoever, only stoichiometry and capacity bounds.

The key biological assumption is that cells maximise growth rate — an objective refined by millions of years of natural selection under nutrient competition. This assumption is not universally valid: under stress conditions, cells may trade growth for stress tolerance (triggering the stringent response); cancer cells often adopt a Warburg metabolism (aerobic glycolysis) that sacrifices energetic efficiency for biosynthetic precursor production; and stationary-phase bacteria downregulate growth in favour of survival. In these contexts, alternative objective functions (maximising ATP yield, minimising nutrient waste, or multi-objective Pareto fronts) give better predictions. Nevertheless, growth maximisation serves as a useful, falsifiable null hypothesis, and its broad success in bacteria validates the core assumption that natural selection has optimised metabolic flux toward maximal proliferation under nutrient-replete conditions.

Example — FBA on a toy network: Consider a minimal network with 2 internal metabolites \(A\), \(B\) and 3 reactions: \[ v_1: \emptyset \to A \;\;(\text{uptake}), \qquad v_2: A \to B \;\;(\text{conversion}), \qquad v_3: B \to \emptyset \;\;(\text{export}). \] The stoichiometric matrix is \[ S = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}, \] and the steady-state constraint \(S\,\mathbf{v} = \mathbf{0}\) gives \[ v_1 - v_2 = 0, \qquad v_2 - v_3 = 0. \]

Hence \(v_1 = v_2 = v_3 =: J\). The flux distribution collapses to a single free parameter — the through-flux \(J\).

Suppose the uptake capacity is bounded, \(v_1 \leq 5\), and the objective is to maximise the export flux \(v_3\). The linear program returns \[ J^* = 5, \quad v_1^* = v_2^* = v_3^* = 5. \]

All reactions operate at the maximum allowed rate dictated by the uptake constraint.

Gene knockout: If the gene encoding the enzyme that catalyses \(v_2\) is deleted, then \(v_2 = 0\) is forced, and with it \(J = 0\) — the entire pathway is blocked. FBA correctly predicts a lethal knockout, even without knowing any kinetic parameters. This is why FBA is the tool of choice for genome-scale metabolic modelling.
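
The same toy problem can be solved with an off-the-shelf linear-programming routine. A minimal sketch using scipy.optimize.linprog, with the uptake bound \(v_1 \leq 5\) and the objective of maximising \(v_3\):

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1,  0],
              [0,  1, -1]])                   # rows: A, B; columns: v1, v2, v3

c = np.array([0.0, 0.0, -1.0])                # maximise v3  <=>  minimise -v3
bounds = [(0, 5), (0, None), (0, None)]       # uptake capacity v1 <= 5, all fluxes >= 0

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)               # -> [5, 5, 5]

# knockout of the enzyme catalysing v2: force v2 = 0
bounds_ko = [(0, 5), (0, 0), (0, None)]
res_ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds_ko)
print("fluxes after knockout:", res_ko.x)     # -> [0, 0, 0], a lethal prediction
```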


Chapter 5: Signal Transduction

5.1 Receptor–Ligand Binding

Cells communicate through signal transduction: extracellular signals (hormones, growth factors, neurotransmitters) bind to cell-surface receptors, triggering intracellular cascades. The basic binding reaction is

\[ R + L \underset{k_{-1}}{\stackrel{k_1}{\rightleftharpoons}} RL, \]

where \(R\) is the free receptor and \(L\) is the ligand. At steady state, the fraction of occupied receptors follows a hyperbolic (Michaelis–Menten-like) curve:

\[ f = \frac{[L]}{K_D + [L]}, \]

where \(K_D = k_{-1}/k_1\) is the dissociation constant (lower \(K_D\) means higher affinity). This is the dose-response relationship: it describes how the cellular response varies with ligand concentration. The steepness of the dose-response curve — its sensitivity — determines how precisely a cell can detect changes in ligand concentration.

5.2 Dose-Response: A Numerical Example

Example — Insulin receptor occupancy at physiological concentrations: The insulin receptor has a dissociation constant \(K_D \approx 0.5\,\text{nM}\). Plasma insulin concentrations fluctuate between approximately 0.05 nM in the fasting state and 0.5 nM after a meal. The fraction of occupied receptors is: \[ f = \frac{[L]}{K_D + [L]}. \]

Fasting state (\([L] = 0.05\,\text{nM}\)):

\[ f_{fasting} = \frac{0.05}{0.5 + 0.05} = \frac{0.05}{0.55} \approx 0.091 \approx 9\%. \]

Post-meal state (\([L] = 0.5\,\text{nM} = K_D\)):

\[ f_{meal} = \frac{0.5}{0.5 + 0.5} = \frac{0.5}{1.0} = 0.50 = 50\%. \]

A tenfold increase in plasma insulin (from 0.05 nM to 0.5 nM) increases receptor occupancy from 9% to 50% — roughly a 5.5-fold increase in the signalling input. This steep dependence near \(K_D\) places the physiological operating range in the most sensitive region of the hyperbolic dose-response curve. Type 2 diabetes is associated with insulin resistance: downstream signalling components are desensitised so that even 50% receptor occupancy fails to trigger adequate glucose uptake. Therapeutic strategies to restore sensitivity (e.g., PPAR\(\gamma\) agonists) act not at the receptor but at intracellular steps far downstream.

Example — Receptor occupancy as a function of ligand concentration: Suppose a receptor has dissociation constant \(K_D = 10\,\text{nM}\). The fraction of receptors occupied at ligand concentration \([L]\) is \[ f([L]) = \frac{[L]}{K_D + [L]} = \frac{[L]}{10 + [L]}. \] Computing at several concentrations: \[ \begin{array}{c|c} [L]\,(\text{nM}) & f \\ \hline 1 & 1/11 \approx 0.091 \\ 10 & 10/20 = 0.500 \\ 100 & 100/110 \approx 0.909 \end{array} \]

At \([L] = K_D = 10\,\text{nM}\), exactly half the receptors are occupied, confirming the definition. To go from 9% to 91% occupancy (a tenfold increase in response), the ligand concentration must increase 100-fold (from 1 nM to 100 nM). This gradual response, requiring roughly a 100-fold stimulus range to move from low to near-complete occupancy, is characteristic of a hyperbolic dose-response. Compare this to the Hill equation with \(n = 2\): the equivalent 10%-to-90% occupancy range requires only a \(\sqrt{81} = 9\)-fold concentration change. Cooperativity dramatically compresses the response range, enabling sharper cellular decision-making.
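
The compression of the response range by cooperativity is easy to quantify: for a Hill binding curve, the ratio of ligand concentrations spanning 10% to 90% occupancy is \(81^{1/n}\). A minimal sketch:

```python
import numpy as np

KD = 10.0                                        # nM, from the example
for n in (1, 2, 4):
    # ligand concentrations giving 10% and 90% occupancy of the Hill curve L^n/(KD^n + L^n)
    L10 = KD * (0.1 / 0.9)**(1.0 / n)
    L90 = KD * (0.9 / 0.1)**(1.0 / n)
    print(f"n = {n}:  [L]_10 = {L10:6.2f} nM   [L]_90 = {L90:6.2f} nM   ratio = {L90/L10:5.1f}")
# the 10%-to-90% range spans 81-fold for n = 1, 9-fold for n = 2, 3-fold for n = 4
```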

5.3 Signalling Cascades and the MAPK Cascade

Many signalling pathways employ multi-tier kinase cascades. The MAPK (MAP kinase) cascade is a three-tier phosphorylation cascade that amplifies upstream signals and is involved in cell proliferation, differentiation, and stress responses. At each tier, a kinase is activated (by phosphorylation) and in turn activates the next kinase, while phosphatases oppose each activation.

A remarkable property discovered by Goldbeter and Koshland is zero-order ultrasensitivity. When the kinase and phosphatase operating on a substrate both work near saturation (zero-order kinetics), the steady-state phosphorylation level is an extremely steep sigmoid function of the kinase-to-phosphatase ratio. The effective Hill coefficient of this response can be much greater than 1 even for non-cooperative enzymes.

Goldbeter–Koshland Ultrasensitivity: When both the activating enzyme (kinase) and deactivating enzyme (phosphatase) of a substrate operate at saturation (Michaelis–Menten \([S] \gg K_M\)), the steady-state fraction of activated substrate switches sharply with changes in the kinase/phosphatase activity ratio. This mechanism can generate switch-like responses with high cooperativity (large apparent Hill coefficient) in the absence of allosteric cooperativity.
Remark (The Goldbeter–Koshland switch: from controversy to canonical mechanism): When Goldbeter and Koshland published their analysis of zero-order ultrasensitivity in 1981, the result was met with scepticism. The key assumption — that both the activating kinase and the deactivating phosphatase operate near saturation (in vivo) — seemed implausible to many experimentalists, who assumed that enzymes work in the linear (first-order) regime under physiological conditions. It was not until the 1990s and 2000s, with quantitative proteomics and in-cell FRET measurements of kinase/phosphatase activities in signalling cascades, that near-zero-order conditions were confirmed experimentally in several systems. Today, zero-order ultrasensitivity is recognised as a widespread and important mechanism in eukaryotic signalling: it converts graded kinase activity into near-binary substrate phosphorylation states, enabling sharp decision thresholds without requiring allosteric cooperativity. The MAPK cascade, the cell cycle CDK–APC/C network, and numerous other regulatory circuits exploit this mechanism. The Goldbeter–Koshland paper is now cited over 3,000 times — a measure of how profoundly the experimental confirmation of a theoretical prediction can reshape a field.

The steady-state fraction of activated substrate \(W^*\) at each tier satisfies the Goldbeter–Koshland equation. If we denote the kinase activity \(u\), phosphatase activity \(v\), and Michaelis constants \(J = K_M^{kinase}/[S_{total}]\) and \(K = K_M^{phosphatase}/[S_{total}]\), then

\[ W^* = GK(u, v, J, K) = \frac{2u K}{v - u + vJ + uK + \sqrt{(v - u + vJ + uK)^2 - 4(v-u)uK}}. \]

This function transitions sharply from 0 to 1 as the ratio \(u/v\) crosses 1, with the sharpness increasing as \(J\) and \(K\) decrease (i.e., as both enzymes approach saturation). When three such zero-order ultrasensitive stages are cascaded in the MAPK pathway, the overall input-output response can achieve an apparent Hill coefficient exceeding 30 — far more switch-like than anything achievable by allosteric cooperativity alone.
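A minimal numerical sketch of this function (Python; the kinase activities 0.9 and 1.1 are illustrative values, not taken from the text) shows how shrinking \(J\) and \(K\) sharpens the switch:

import math

def goldbeter_koshland(u, v, J, K):
    # Steady-state active fraction W* for a covalent modification cycle:
    # kinase activity u (scaled Michaelis constant J), phosphatase activity v (scaled constant K).
    B = v - u + v * J + u * K
    return 2.0 * u * K / (B + math.sqrt(B ** 2 - 4.0 * (v - u) * u * K))

for JK in (1.0, 0.1, 0.01):
    w_low = goldbeter_koshland(0.9, 1.0, JK, JK)    # kinase slightly weaker than phosphatase
    w_high = goldbeter_koshland(1.1, 1.0, JK, JK)   # kinase slightly stronger
    print(f"J = K = {JK:5.2f}:  W*(u/v = 0.9) = {w_low:.3f},  W*(u/v = 1.1) = {w_high:.3f}")
# As J and K shrink, the response sharpens from graded to nearly all-or-none.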

5.4 Adaptation

Many signalling systems display perfect adaptation: after a sustained step change in stimulus, the response returns exactly to its pre-stimulus level, encoding only changes in stimulus rather than absolute level. This is analogous to derivative sensing.

Perfect adaptation requires integral feedback control — a special network topology in which the output feeds back through an integrating node. Mathematically, perfect adaptation requires specific network motifs: a regulatory circuit that effectively computes the time integral of the output error. The analysis of adaptation in bacterial chemotaxis (Barkai and Leibler, 1997) showed that near-perfect adaptation can be achieved in a robust, parameter-insensitive manner through a methylation-based integral feedback.

Example — Minimal model of perfect adaptation: A two-component system can achieve perfect adaptation. Let \(A(t)\) be the output (e.g., receptor activity) and \(B(t)\) an internal adaptation variable: \[ \frac{dA}{dt} = f(I) - kAB, \qquad \frac{dB}{dt} = \ell(A - A_0). \] Here \(I\) is the stimulus input, \(f(I)\) its effect on activity, \(k\) the inactivation rate constant, \(\ell\) the adaptation rate, and \(A_0\) the basal activity set point. At steady state, \(dB/dt = 0\) requires \(A^* = A_0\) — the steady-state activity is always equal to \(A_0\), regardless of the input \(I\). This is perfect adaptation: a step change in \(I\) transiently perturbs \(A\), but \(B\) adjusts until \(A\) returns exactly to \(A_0\). The variable \(B\) acts as the integral of the activity error \((A - A_0)\), which is the hallmark of integral feedback control. The robustness to parameter variation arises because the steady-state condition \(A^* = A_0\) does not involve \(f(I)\), \(k\), or \(\ell\) — it follows purely from the circuit topology.
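The following forward-Euler sketch (Python; \(f(I) = I\) and all parameter values are illustrative assumptions) shows the transient excursion of \(A\) after a step in \(I\) and its exact return to \(A_0\):

# Forward-Euler simulation of the minimal adaptation model with f(I) = I;
# all parameter values below are illustrative.
k, ell, A0 = 1.0, 0.5, 1.0
A, B = A0, 1.0                      # adapted state for I = 1 (B* = I/(k*A0))
dt = 0.001
for step in range(int(40.0 / dt)):
    t = step * dt
    I = 1.0 if t < 20.0 else 2.0    # step increase in stimulus at t = 20
    dA = I - k * A * B              # dA/dt = f(I) - k*A*B
    dB = ell * (A - A0)             # dB/dt = ell*(A - A0)
    A += dt * dA
    B += dt * dB
    if step % int(5.0 / dt) == 0:
        print(f"t = {t:5.1f}:  A = {A:.3f},  B = {B:.3f}")
# A deviates from A0 = 1 only transiently after the step; B settles at the new value I/(k*A0).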
Example — Bacterial chemotaxis as a perfect adaptation system: Escherichia coli navigates chemical gradients by alternating between runs (smooth swimming) and tumbles (random reorientation). The chemoreceptors controlling tumbling frequency are methylated by the methyltransferase CheR and demethylated by the methylesterase CheB. The key regulatory feature is that CheB is activated by the phosphorylated response regulator CheY — the very molecule that increases tumbling frequency.

Let \(m\) denote the receptor methylation level and \(a\) the receptor activity. A minimal model captures the integral feedback:

\[ \frac{dm}{dt} = \underbrace{k_R (1 - a)}_{\text{CheR methylates when activity is low}} - \underbrace{k_B\,a}_{\text{CheB demethylates when activity is high}}. \]

At steady state, \(dm/dt = 0\) requires a specific balance between methylation and demethylation. The key insight is that the steady-state activity \(a^*\) is uniquely determined by the kinetic parameters \(k_R\) and \(k_B\) — it does not depend on the external attractant concentration \([L]\). This can be seen by solving:

\[ k_R(1 - a^*) = k_B a^* \implies a^* = \frac{k_R}{k_R + k_B}. \]

When attractant concentration suddenly increases, receptor activity \(a\) drops (attractant inhibits the active state). This activates CheR-mediated methylation and inhibits CheB-mediated demethylation, causing \(m\) to rise. As \(m\) rises, receptor activity is restored. The system acts as an integral controller: the methylation \(m\) integrates the error (deviation of \(a\) from its set-point \(a^*\)) and adjusts until \(a\) returns exactly to \(a^*\), regardless of the new background attractant level.

Behaviourally, this means E. coli can navigate attractant gradients spanning more than a 1000-fold concentration range: it adapts exactly to each new background level and detects only relative changes — a cellular analogue of the Weber–Fechner law in sensory perception. The robustness of this adaptation to parameter variation (demonstrated by Barkai and Leibler in 1997) is a direct consequence of the integral feedback topology and does not require fine-tuning of rate constants.

5.5 Signalling Network Motifs

Biological signalling networks are not random wiring diagrams — they are built from recurring network motifs, small subgraphs that appear far more frequently than expected by chance. Uri Alon’s systematic analysis of the transcription network of E. coli identified the feedforward loop (FFL) as one of the most common motifs. In a coherent type-1 FFL, a transcription factor X activates both Y and Z, and Y also activates Z. The temporal logic of this circuit functions as a sign-sensitive delay: Z is activated only after a sustained (not transient) stimulus to X, while Z turns off rapidly when X is inactivated. This filtering of transient signals is biologically important — it prevents cellular responses to noise while still responding to genuine sustained signals.

The mathematical analysis of FFLs uses first-order linear ODEs: \(Z' = \beta_Z - \gamma_Z Z\) (when X and Y are both active). The rise time to half-maximal Z expression is \(t_{1/2} = \ln(2)/\gamma_Z\), which depends only on the degradation rate and not on the production rate \(\beta_Z\), while the delay before Z turns on is set by the time for Y to accumulate past its activation threshold for Z. The delay is thus robust to variations in production rates.
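A minimal simulation sketch of a coherent type-1 FFL with AND logic (Python; the sharp threshold on Y and all parameter values are simplifying assumptions, not values from the text) illustrates the sign-sensitive delay, filtering a brief pulse of X while responding to a sustained one:

def simulate_ffl(pulse_duration, beta=1.0, gamma=1.0, K_YZ=0.5, dt=0.01, T=10.0):
    Y = Z = 0.0
    z_max = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        X = 1.0 if t < pulse_duration else 0.0              # input pulse of the given duration
        Y += dt * (beta * X - gamma * Y)                     # Y tracks X with first-order kinetics
        z_active = 1.0 if (X > 0.0 and Y > K_YZ) else 0.0    # AND gate: both X and Y needed for Z
        Z += dt * (beta * z_active - gamma * Z)
        z_max = max(z_max, Z)
    return z_max

print("short pulse (0.3 time units): max Z =", round(simulate_ffl(0.3), 3))   # ~0, filtered out
print("long pulse  (5.0 time units): max Z =", round(simulate_ffl(5.0), 3))   # Z turns on after ~ln 2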

5.6 Bistability and Oscillations in Signalling

Positive feedback in signalling can generate bistability, creating all-or-nothing switches. The activation of MPF (Maturation Promoting Factor) during mitotic entry is bistable: a cell either commits to division or stays in interphase, with hysteresis ensuring the commitment is irreversible.

Negative feedback with time delay generates sustained oscillations. The NF-κB signalling system oscillates due to negative feedback through the inhibitor IκBα, which is transcriptionally induced by NF-κB itself but requires time for synthesis and translocation. Similarly, calcium oscillations arise from the interplay of IP3-mediated calcium release and calcium-dependent inactivation — a system amenable to Hopf bifurcation analysis.

5.7 Gene Regulatory Networks: The Toggle Switch

Gene expression is regulated by transcription factors — proteins that bind specific DNA sequences and either activate or repress transcription. Networks of transcription factors implement computation at the cellular level.

Definition (Toggle Switch): A genetic toggle switch is a synthetic gene circuit consisting of two genes, each encoding a transcriptional repressor of the other. With Hill-function repression of Hill coefficient \(n\): \[ \frac{du}{dt} = \frac{\alpha_1}{1 + v^n} - u, \qquad \frac{dv}{dt} = \frac{\alpha_2}{1 + u^n} - v, \] where \(u\) and \(v\) are the (dimensionless) protein concentrations of the two repressors, and \(\alpha_1\), \(\alpha_2\) are the maximal transcription rates. The system was first constructed and analysed by Gardner, Cantor, and Collins (Gardner et al., Nature 2000).
Setting \(du/dt = dv/dt = 0\) gives the nullclines: \[ u\text{-nullcline: } u = \frac{\alpha_1}{1 + v^n}, \qquad v\text{-nullcline: } v = \frac{\alpha_2}{1 + u^n}. \]

For small \(n\) (weak cooperativity) and moderate \(\alpha_1 = \alpha_2\), the two nullclines intersect only once, at the symmetric fixed point \(u^* = v^*\) — the system is monostable. For large \(n\) (strong cooperativity) or large \(\alpha\), the nullclines develop an S-shaped character and intersect at three points: two stable nodes and one unstable saddle. The system is then bistable: it can reside in either “gene \(u\) on / gene \(v\) off” or “gene \(u\) off / gene \(v\) on”, corresponding to distinct cellular phenotypes.

Example — Bistability condition for the toggle switch: Take \(\alpha_1 = \alpha_2 = \alpha\) and \(n = 2\). The nullclines are \(u = \alpha/(1+v^2)\) and \(v = \alpha/(1+u^2)\). By symmetry, one fixed point is always at \(u^* = v^*\) satisfying \(u^* = \alpha/(1+(u^*)^2)\), or \((u^*)^3 + u^* = \alpha\). For \(\alpha = 3\): numerically, \(u^* \approx 1.21\). The stability of the symmetric point is determined by the Jacobian: \[ J = \begin{pmatrix} -1 & -\alpha_1 n v^{n-1}/(1+v^n)^2 \\ -\alpha_2 n u^{n-1}/(1+u^n)^2 & -1 \end{pmatrix}\bigg|_{(u^*,u^*)}. \] Setting \(\varepsilon = \alpha n (u^*)^{n-1}/(1+(u^*)^n)^2\), the eigenvalues are \(\lambda = -1 \pm \varepsilon\). The symmetric fixed point is stable if \(\varepsilon < 1\) and unstable (saddle) if \(\varepsilon > 1\). When \(\varepsilon > 1\), the symmetric state is unstable and two asymmetric stable states exist — this is the bistability condition. A larger Hill coefficient \(n\) or a larger \(\alpha\) increases \(\varepsilon\), making bistability more likely. This design principle — strong repression creates bistability — is a general feature of mutual-repression networks.
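The condition can be checked numerically; the short sketch below (Python; the values of \(\alpha\) are illustrative) finds the symmetric fixed point by bisection and evaluates \(\varepsilon\):

# Bisection for the symmetric fixed point u* of the toggle switch (alpha_1 = alpha_2 = alpha,
# n = 2), followed by the bistability test eps > 1 derived above.
def symmetric_fixed_point(alpha, n=2):
    lo, hi = 0.0, alpha
    for _ in range(100):                   # solve u*(1 + u*^n) = alpha by bisection
        mid = 0.5 * (lo + hi)
        if mid * (1.0 + mid ** n) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 2
for alpha in (1.0, 3.0, 5.0):
    u = symmetric_fixed_point(alpha, n)
    eps = alpha * n * u ** (n - 1) / (1.0 + u ** n) ** 2
    verdict = "bistable (symmetric point is a saddle)" if eps > 1.0 else "monostable"
    print(f"alpha = {alpha}:  u* = {u:.3f},  eps = {eps:.3f}  ->  {verdict}")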
Remark: The toggle switch illustrates why bistability requires positive feedback. Two mutually repressing genes implement a double-negative feedback loop (repressor of repressor = activation), which is equivalent to positive feedback. In the plane, a limit cycle must enclose an unstable fixed point (a fact usually discussed alongside the Poincaré–Bendixson theorem); the toggle switch has no such cycle, and the competition between the two stable states is instead resolved by the saddle, whose stable manifold forms the separatrix between the two basins of attraction. The toggle switch has become a paradigm for understanding cellular memory, differentiation, and the epigenetic landscape.

Chapter 6: Gene Regulatory Networks

6.1 Transcription, Translation, and the Basic Gene Expression Model

Gene expression involves two main steps: transcription (synthesis of mRNA from a DNA template) and translation (synthesis of protein from the mRNA template). A minimal model treats mRNA concentration \(m\) and protein concentration \(p\) as dynamic variables:

\[ \frac{dm}{dt} = \beta_m - \gamma_m m, \qquad \frac{dp}{dt} = \beta_p m - \gamma_p p, \]

where \(\beta_m\) is the mRNA transcription rate, \(\gamma_m\) the mRNA degradation rate constant, \(\beta_p\) the translation rate constant (protein produced per mRNA per time), and \(\gamma_p\) the protein degradation rate constant. The steady-state protein level is:

\[ p^* = \frac{\beta_m \beta_p}{\gamma_m \gamma_p}. \]

The timescales are set by the half-lives: \(\tau_m = \ln 2 / \gamma_m\) and \(\tau_p = \ln 2 / \gamma_p\). In many bacteria, \(\tau_m \approx\) 2–5 minutes while \(\tau_p\) can be hours (protein is diluted by cell division as well as degraded). The separation of these timescales justifies a further quasi-steady-state approximation: setting \(m \approx \beta_m/\gamma_m\) gives an effective single-equation model for protein.
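A short numerical sketch (Python; the rate constants are illustrative choices mimicking bacterial timescales, not values quoted in the text) computes the steady state and confirms the timescale separation:

import numpy as np

# Steady state and timescales of the two-stage expression model; rate constants are
# illustrative assumptions (mRNA half-life ~ 3 min, protein half-life ~ 1 h).
beta_m, gamma_m = 2.0, np.log(2) / 3.0     # transcripts/min and per-min mRNA decay
beta_p, gamma_p = 10.0, np.log(2) / 60.0   # proteins per mRNA per min and per-min dilution

m_star = beta_m / gamma_m
p_star = beta_m * beta_p / (gamma_m * gamma_p)
print(f"m* = {m_star:.1f} mRNAs, p* = {p_star:.0f} proteins")
print(f"tau_m = {np.log(2)/gamma_m:.0f} min, tau_p = {np.log(2)/gamma_p:.0f} min")

# Forward Euler confirms that mRNA equilibrates long before the protein does.
m = p = 0.0
dt = 0.01
for step in range(int(240.0 / dt)):        # four hours of simulated time
    m += dt * (beta_m - gamma_m * m)
    p += dt * (beta_p * m - gamma_p * p)
print(f"after 4 h: m/m* = {m/m_star:.2f}, p/p* = {p/p_star:.2f}")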

6.2 The Toggle Switch

The toggle switch (Gardner et al., Nature 2000) is a synthetic gene circuit comprising two mutually repressing genes. Gene A encodes a repressor of gene B, and gene B encodes a repressor of gene A. With Hill-function repression:

\[ \frac{dm_A}{dt} = \frac{\alpha_A}{1 + \left([P_B]/K_B\right)^n} - \gamma_{m_A} m_A, \]\[ \frac{dm_B}{dt} = \frac{\alpha_B}{1 + \left([P_A]/K_A\right)^n} - \gamma_{m_B} m_B. \]

Phase plane analysis reveals that for sufficiently strong repression (large Hill coefficient \(n\) or high repressor production), this network is bistable: either gene A is highly expressed and gene B suppressed, or vice versa. The two stable states correspond to distinct cellular phenotypes. Transient stimuli can flip the circuit from one state to the other. The toggle switch illustrates how bistability emerges from positive feedback (mutual repression creates a double-negative, equivalent to positive, feedback loop).

6.3 The Repressilator

The repressilator (Elowitz & Leibler, Nature 2000) is a three-gene synthetic circuit where gene A represses gene B, gene B represses gene C, and gene C represses gene A — forming a ring of three repressors. This negative feedback loop with an odd number of elements (here, three) generates sustained oscillations, as confirmed experimentally via GFP fluorescence.

The minimal protein-only model:

\[ \frac{dp_i}{dt} = \frac{\alpha}{1 + p_j^n} - \gamma p_i, \quad (i, j) \in \{(A,C), (B,A), (C,B)\}. \]

For large \(n\) (strong cooperativity), the system undergoes a Hopf bifurcation as the repression strength increases, giving rise to limit cycle oscillations. The repressilator demonstrates that oscillations do not require explicit positive feedback — they can arise from delayed negative feedback alone.

Example — Repressilator fixed point and its stability: By symmetry, the unique fixed point of the repressilator has all three protein levels equal: \(p_A^* = p_B^* = p_C^* = p^*\), where \(p^*\) satisfies \[ p^* = \frac{\alpha}{\gamma(1 + (p^*)^n)}. \] For \(\alpha = 20\), \(\gamma = 1\), \(n = 2\): \(p^*(1 + (p^*)^2) = 20\), i.e., \((p^*)^3 + p^* - 20 = 0\). Numerically, \(p^* \approx 2.59\). Linearising about this symmetric fixed point, the Jacobian is a circulant matrix whose eigenvalues are \[ \lambda_k = -\gamma + \frac{-\alpha n (p^*)^{n-1}}{(1+(p^*)^n)^2} \cdot e^{2\pi i k/3}, \quad k = 0, 1, 2. \]

The \(k=0\) mode is always stable (it corresponds to uniform perturbations, damped by the feedback). The \(k=1\) and \(k=2\) modes are complex conjugates of each other; they lose stability (their real part crosses zero) at the Hopf bifurcation. Setting \(\text{Re}(\lambda_1) = 0\) gives the condition for oscillations. For the parameters above, the fixed point is stable but lies close to the bifurcation, and the period of oscillation at the Hopf bifurcation is \(T = 2\pi/|\text{Im}(\lambda_1)| = 2\pi/(\sqrt{3}\,\gamma) \approx 3.6\) (in units of \(1/\gamma\)).
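These numbers can be verified with a few lines of Python (a sketch; numpy's polynomial root finder is used for the cubic):

import numpy as np

# Repressilator fixed point and eigenvalues for alpha = 20, gamma = 1, n = 2.
alpha, gamma, n = 20.0, 1.0, 2

roots = np.roots([1.0, 0.0, 1.0, -alpha / gamma])        # p^3 + p - alpha/gamma = 0
p = max(roots, key=lambda r: r.real).real                # the single real root
c = -alpha * n * p ** (n - 1) / (1.0 + p ** n) ** 2      # repression coupling (negative)
print(f"p* = {p:.3f}, coupling c = {c:.3f}")
for k in range(3):
    lam = -gamma + c * np.exp(2j * np.pi * k / 3)
    print(f"lambda_{k}: Re = {lam.real:+.3f}, Im = {lam.imag:+.3f}")
# The Hopf condition Re(lambda_1) = -gamma - c/2 = 0 requires c = -2*gamma;
# here c is about -1.74, so the fixed point is still (weakly) stable.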

Remark (Design principles of biological oscillators): The repressilator reveals a key design principle: a ring of repressors of odd length can generate oscillations. A ring of even length would not — mutual repression of two genes creates bistability (the toggle switch), not oscillations. More generally, the parity of the ring determines the sign of the overall feedback loop (an odd number of repressors gives net negative feedback, favouring oscillations; an even number gives net positive feedback, favouring bistability), and this sign is a topological property that constrains the qualitative behaviour independently of precise parameter values. This insight from synthetic biology provides a concise framework for classifying the dynamics of regulatory network motifs.

6.4 Noise in Gene Expression

In single cells, many proteins are present at low copy numbers (tens to hundreds of molecules), and biochemical reactions are inherently stochastic. Intrinsic noise arises from random fluctuations in the timing of individual transcription, translation, and degradation events — even genetically identical cells in identical environments will show cell-to-cell variability. Extrinsic noise arises from fluctuations in upstream components (transcription factors, ribosomes, polymerases) shared by many genes.

The Gillespie algorithm provides an exact method for simulating the stochastic dynamics of a well-mixed chemical system. At each step, two random numbers determine (i) the time to the next reaction event and (ii) which reaction fires, with probabilities proportional to the reaction propensities. For large molecule numbers, the stochastic simulations converge to the deterministic ODE predictions, but for low copy numbers, stochastic effects dominate.
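A minimal Gillespie sketch for the constitutive (birth-death) expression model (Python; \(\beta\) and \(\gamma\) are illustrative values giving a mean of 50 molecules) recovers the Poisson statistics discussed below:

import numpy as np

# Minimal Gillespie simulation of the birth-death expression model.
rng = np.random.default_rng(0)
beta, gamma = 5.0, 0.1                 # molecules/min and 1/min
n, t = 50, 0.0                         # start at the mean to shorten the transient
T_end, T_burn = 50000.0, 100.0
w = m1 = m2 = 0.0                      # time-weighted moments of the copy number
while t < T_end:
    a_total = beta + gamma * n                     # total propensity
    dt = rng.exponential(1.0 / a_total)            # (i) time to the next reaction
    if t > T_burn:
        w += dt
        m1 += n * dt
        m2 += n * n * dt
    t += dt
    n += 1 if rng.random() < beta / a_total else -1    # (ii) which reaction fires
mean, var = m1 / w, m2 / w - (m1 / w) ** 2
print(f"mean = {mean:.1f}, variance = {var:.1f}, Fano = {var / mean:.2f}")   # expect ~50, ~50, ~1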

For a simple gene expression model (birth–death process for protein), the stationary distribution of protein number is Poisson with mean \(\langle p \rangle = \beta/\gamma\) — intrinsic noise satisfies \(CV^2 = \sigma^2/\langle p\rangle^2 = 1/\langle p\rangle\), so noise is inversely proportional to mean protein number.

Example — Noise in a bursty gene expression model: Many genes are expressed in transcriptional bursts: the promoter switches stochastically between an active state, in which mRNA is produced, and an inactive state. If burst size is geometrically distributed with mean \(b\) (proteins per burst) and burst frequency \(k_f\) (bursts per unit time), and protein degrades at rate \(\gamma\), then the mean and variance of protein number at steady state are: \[ \langle p \rangle = \frac{k_f b}{\gamma}, \qquad \text{Var}(p) = \langle p \rangle(1 + b). \] Therefore \(CV^2 = (1 + b)/\langle p \rangle\). For \(b = 10\) (mean 10 proteins per burst), \(\gamma = 0.01\,\text{min}^{-1}\), and \(k_f = 0.1\,\text{min}^{-1}\): \(\langle p \rangle = 0.1 \times 10/0.01 = 100\) proteins, \(CV^2 = 11/100 = 0.11\), so \(CV = 33\%\). The burst size \(b\) contributes an additive factor to the noise above the Poisson baseline (\(CV^2 = 1/\langle p\rangle\)), reflecting the shot noise from discrete burst events. This "super-Poissonian" noise is a hallmark of bursty expression and has been measured in bacteria, yeast, and mammalian cells by single-cell fluorescence microscopy.

6.5 Circadian Oscillators

Circadian oscillators are biochemical clocks that generate ~24-hour oscillations in cellular activity, enabling organisms to anticipate daily environmental cycles. The simplest mathematical model capturing the core mechanism is the Goodwin oscillator: a negative feedback loop with a single gene producing a protein that represses its own transcription, with multiple intermediate steps introducing sufficient delay.

The Drosophila circadian oscillator involves the Period (PER) and Timeless (TIM) proteins, which dimerize and translocate to the nucleus to repress their own transcription. A key kinetic feature is the multi-step phosphorylation of PER (catalysed by the kinase DBT), which introduces the delay needed for oscillations. The mathematical analysis involves finding the Hopf bifurcation boundary in parameter space as the number of phosphorylation steps and the repression strength are varied.


Chapter 7: Electrophysiology

7.1 Membrane Potential and Ion Channels

Neurons and muscle cells are electrically excitable: they can generate rapid, transient changes in their transmembrane voltage — action potentials — that propagate along cell membranes to transmit information. The transmembrane voltage \(V\) arises from differences in ion concentrations across the lipid bilayer, maintained by active transporters (ion pumps).

The electrochemical equilibrium potential for a single ion species is given by the Nernst equation:

\[ E_{ion} = \frac{RT}{zF} \ln\frac{[ion]_{out}}{[ion]_{in}}, \]

where \(R\) is the gas constant, \(T\) temperature (Kelvin), \(F\) the Faraday constant, and \(z\) the ion valence. For \(K^+\) at body temperature with typical physiological concentrations, \(E_K \approx -90\) mV; for \(Na^+\), \(E_{Na} \approx +60\) mV. The resting membrane potential of a typical neuron (~−70 mV) lies between these extremes because multiple ion species contribute, weighted by their respective membrane permeabilities. The Goldman equation accounts for this:

\[ V_{rest} = \frac{RT}{F} \ln\frac{P_K[K^+]_{out} + P_{Na}[Na^+]_{out} + P_{Cl}[Cl^-]_{in}}{P_K[K^+]_{in} + P_{Na}[Na^+]_{in} + P_{Cl}[Cl^-]_{out}}, \]

where \(P_X\) denotes the membrane permeability to ion species \(X\). Ion flow across the membrane is mediated by ion channels — protein pores that open and close in response to voltage, ligands, or mechanical stimuli.
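A quick numerical check (Python; the ionic concentrations and permeability ratios below are typical textbook values for a mammalian neuron, assumed here for illustration):

import numpy as np

# Nernst and Goldman potentials at body temperature.
R, T, F = 8.314, 310.0, 96485.0
RT_F = 1000.0 * R * T / F                       # ~26.7 mV at 37 degrees C

def nernst(c_out, c_in, z=1):
    return RT_F / z * np.log(c_out / c_in)

print(f"E_K  = {nernst(5.0, 140.0):.0f} mV")    # ~ -89 mV
print(f"E_Na = {nernst(145.0, 12.0):.0f} mV")   # ~ +66 mV

# Goldman voltage with relative permeabilities P_K : P_Na : P_Cl = 1 : 0.04 : 0.45
PK, PNa, PCl = 1.0, 0.04, 0.45
num = PK * 5.0 + PNa * 145.0 + PCl * 10.0       # note that [Cl-]_in appears in the numerator
den = PK * 140.0 + PNa * 12.0 + PCl * 110.0
print(f"V_rest = {RT_F * np.log(num / den):.0f} mV")   # ~ -67 mV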

7.2 The Hodgkin–Huxley Model

The Hodgkin–Huxley (HH) model (1952, Nobel Prize 1963) is the quantitative foundation of modern computational neuroscience. Alan Hodgkin and Andrew Huxley developed the model from voltage-clamp experiments on the squid giant axon. The membrane is modelled as a capacitor in parallel with conductance elements for each ion species:

\[ C_m \frac{dV}{dt} = I_{app} - I_{Na} - I_K - I_L, \]

where \(C_m\) is the membrane capacitance, \(I_{app}\) an applied current, and the ionic currents are:

\[ I_{Na} = g_{Na} m^3 h (V - E_{Na}), \]\[ I_K = g_K n^4 (V - E_K), \]\[ I_L = g_L (V - E_L). \]

Here \(g_{Na}\), \(g_K\), \(g_L\) are maximal conductances, and \(m\), \(h\), \(n\) are gating variables between 0 and 1 representing the probabilities that individual channel gates are open. Each gating variable obeys a first-order kinetic equation:

\[ \frac{dw}{dt} = \frac{w_\infty(V) - w}{\tau_w(V)}, \quad w \in \{m, h, n\}, \]

where \(w_\infty(V)\) is the voltage-dependent steady-state gate value and \(\tau_w(V)\) is the voltage-dependent time constant, both determined from experimental data.

Definition (Gating Variables): The gating variable m represents fast Na+ channel activation (activates rapidly on depolarisation), h represents slow Na+ channel inactivation (inactivates slowly after activation), and n represents K+ channel activation (activates slowly on depolarisation). The Na+ channel requires three m gates and one h gate to be open simultaneously (hence m3h), and the K+ channel requires four n gates (hence n4).

The action potential mechanism in the HH model: A sufficiently large depolarisation (membrane potential increasing from rest) rapidly opens Na\(^+\) channels (fast \(m\) activation), causing an inward Na\(^+\) current that further depolarises the membrane — a positive feedback loop driving the rapid upstroke. The rising voltage simultaneously begins to inactivate Na\(^+\) channels (slow \(h\) inactivation) and activate K\(^+\) channels (slow \(n\) activation), both of which drive repolarisation back toward rest. The refractory period following an action potential arises because the \(h\) gates remain inactivated for a time, preventing immediate re-firing.
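The following compact forward-Euler sketch (Python) integrates the HH equations using the standard squid-axon parameter set with voltages shifted so that rest is near \(-65\) mV; exact conventions differ between sources, so the numbers should be treated as illustrative rather than as the values used in this course:

import numpy as np

# Compact Hodgkin-Huxley simulation (standard squid-axon parameters, illustrative).
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4      # mV
C_m, I_app = 1.0, 10.0                   # uF/cm^2, uA/cm^2

def rates(V):
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

V, m, h, n = -65.0, 0.05, 0.6, 0.32      # approximate resting values
dt, spikes, above = 0.01, 0, False
for step in range(int(100.0 / dt)):       # 100 ms of simulated time
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    I_Na = g_Na * m ** 3 * h * (V - E_Na)
    I_K = g_K * n ** 4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt / C_m * (I_app - I_Na - I_K - I_L)
    if V > 0.0 and not above:            # count upward crossings of 0 mV as spikes
        spikes += 1
    above = V > 0.0
print(f"{spikes} action potentials in 100 ms with I_app = {I_app} uA/cm^2")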

7.3 The FitzHugh–Nagumo Model

The full 4-dimensional HH model is difficult to analyse analytically. The FitzHugh–Nagumo (FHN) model is a 2-dimensional caricature that retains the essential excitability:

\[ \frac{dv}{dt} = v - \frac{v^3}{3} - w + I_{app}, \]\[ \frac{dw}{dt} = \varepsilon(v + a - bw), \]

where \(v\) is a fast voltage-like variable, \(w\) is a slow recovery variable, \(I_{app}\) is the applied current, and \(\varepsilon \ll 1\) enforces the time-scale separation.

The \(v\)-nullcline is a cubic \(w = v - v^3/3 + I_{app}\); the \(w\)-nullcline is the line \(w = (v + a)/b\). Their intersection defines the fixed point. For small \(I_{app}\), the system is at a stable resting state (excitable: a sufficiently large perturbation initiates a large excursion before returning to rest). As \(I_{app}\) increases, a subcritical Hopf bifurcation occurs and the fixed point becomes unstable, giving repetitive firing (limit cycle). The phase plane of FHN is analytically tractable and reveals the threshold separatrix, refractory period, and oscillatory regime in a transparent geometric way.

7.4 FitzHugh-Nagumo: A Worked Bifurcation Analysis

Example — FitzHugh-Nagumo fixed point and Hopf bifurcation: Take standard parameters \(a = 0.7\), \(b = 0.8\), \(\varepsilon = 0.08\). The fixed point \((v^*, w^*)\) satisfies: \[ v^* - (v^*)^3/3 - w^* + I_{app} = 0, \qquad v^* + a - bw^* = 0. \]

From the second equation: \(w^* = (v^* + a)/b = (v^* + 0.7)/0.8\). Substituting into the first and expanding gives a cubic in \(v^*\); for \(I_{app} = 0\), numerical solution gives \(v^* \approx -1.20\), \(w^* \approx -0.625\).

\[ J = \begin{pmatrix} 1 - (v^*)^2 & -1 \\ \varepsilon & -\varepsilon b \end{pmatrix} = \begin{pmatrix} 1 - 1.44 & -1 \\ 0.08 & -0.064 \end{pmatrix} = \begin{pmatrix} -0.44 & -1 \\ 0.08 & -0.064 \end{pmatrix}. \]

Trace: \(\tau = -0.44 - 0.064 = -0.504 < 0\). Determinant: \(\Delta = (-0.44)(-0.064) - (-1)(0.08) = 0.0282 + 0.08 = 0.108 > 0\). The fixed point is a stable spiral (resting state). As \(I_{app}\) increases, \(v^*\) increases and the trace \(\tau\) increases. The Hopf bifurcation occurs when \(\tau = 0\), i.e., when \(1-(v^*)^2 = \varepsilon b = 0.064\), giving \(v^*_c = \pm\sqrt{0.936} \approx \pm 0.968\). As \(I_{app}\) is raised from zero, \(v^*\) first reaches the negative root \(v^*_c \approx -0.968\); at this bifurcation point repetitive firing (a limit cycle) is born. The second crossing at \(v^*_c \approx +0.968\), reached at much larger applied currents, marks the loss of firing at strong depolarisation (depolarisation block).
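These values can be confirmed numerically; the sketch below (Python; the bisection bracket \(I_{app} \in [0, 1]\) is an assumption) locates the fixed point and the applied current at which the trace crosses zero:

import numpy as np

# FitzHugh-Nagumo fixed point and Hopf onset for a = 0.7, b = 0.8, eps = 0.08.
a, b, eps = 0.7, 0.8, 0.08

def fixed_point_v(I_app):
    # v - v^3/3 - (v + a)/b + I_app = 0, written as a cubic in v
    roots = np.roots([-1.0 / 3.0, 0.0, 1.0 - 1.0 / b, I_app - a / b])
    return roots[np.abs(roots.imag) < 1e-9].real[0]       # single real root for these parameters

def trace(I_app):
    v = fixed_point_v(I_app)
    return (1.0 - v ** 2) - eps * b

print(f"I_app = 0: v* = {fixed_point_v(0.0):.3f}, trace = {trace(0.0):.3f} (stable rest state)")

lo, hi = 0.0, 1.0                        # bisection for trace(I_app) = 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if trace(mid) < 0.0 else (lo, mid)
I_c = 0.5 * (lo + hi)
print(f"Hopf onset at I_app ~ {I_c:.3f}, where v* ~ {fixed_point_v(I_c):.3f}")   # ~0.33 and ~ -0.97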

Remark: This analysis exemplifies how the Hopf bifurcation criterion (\(\text{tr}(J) = 0\), \(\det(J) > 0\)) provides a clean analytical prediction for the onset of oscillations without solving the full nonlinear system. In experimental neuroscience, one can measure the applied current \(I_{app}\) at which a neuron begins firing rhythmically — this is the empirical Hopf bifurcation point — and compare it to the theoretical prediction from the linearised model. The excellent agreement between theory and experiment in the HH and FHN models is one of the great successes of quantitative biology.

7.5 Bursting Neurons

Many neurons do not fire regular spike trains but produce bursts — clusters of action potentials separated by quiescent periods. Bursting can be understood through slow-fast analysis: the fast subsystem (voltage and fast gates) is HH-like, while a slow variable (e.g., slow K\(^+\) current or intracellular Ca\(^{2+}\)) modulates the fast subsystem’s bifurcation parameter on a slower time scale.

As the slow variable evolves, it drives the fast subsystem through a bifurcation: when the fast subsystem is in its oscillatory regime, spikes fire (the burst); when the slow variable has accumulated enough to push the fast subsystem back to quiescence (via a fold bifurcation or homoclinic bifurcation), firing stops until the slow variable recovers and the next burst begins. This geometrical decomposition of slow-fast dynamics is central to the computational analysis of neuronal diversity.

7.6 Synaptic Transmission

Neurons communicate at synapses. An action potential in the presynaptic neuron causes Ca\(^{2+}\)-mediated release of neurotransmitter into the synaptic cleft. Neurotransmitter binds postsynaptic receptors, opening ion channels and generating a postsynaptic potential: an EPSP (excitatory postsynaptic potential) if the reversal potential drives depolarisation, or an IPSP (inhibitory postsynaptic potential) if it drives hyperpolarisation.

The simplest model of a postsynaptic neuron is the integrate-and-fire model:

\[ C_m \frac{dV}{dt} = -g_L(V - E_L) + I_{syn}(t), \]

with the rule that whenever \(V\) reaches a threshold \(V_{th}\), a spike is recorded and \(V\) is reset to \(V_{reset}\). Despite its simplicity, the integrate-and-fire model captures many features of neural coding and is widely used in network-scale simulations.
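A minimal integrate-and-fire sketch (Python; all parameter values are illustrative, chosen only so that the steady-state voltage exceeds threshold):

# Leaky integrate-and-fire neuron driven by a constant synaptic current.
C_m, g_L, E_L = 1.0, 0.1, -70.0      # uF/cm^2, mS/cm^2, mV
V_th, V_reset = -55.0, -75.0          # mV
I_syn = 2.0                           # uA/cm^2, constant input

V, dt, spikes = E_L, 0.01, []
for step in range(int(500.0 / dt)):   # 500 ms
    t = step * dt
    V += dt / C_m * (-g_L * (V - E_L) + I_syn)
    if V >= V_th:                     # threshold crossing: record a spike and reset
        spikes.append(t)
        V = V_reset
print(f"{len(spikes)} spikes in 500 ms -> firing rate ~ {len(spikes) / 0.5:.0f} Hz")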


Chapter 8: Bifurcation Theory — Normal Forms and Biological Switching

8.1 Motivation: How Biological Systems Change Behaviour

Many of the most striking phenomena in cell biology — cell differentiation, the onset of oscillations in a circadian clock, the transition from interphase to mitosis — are not gradual changes but abrupt qualitative transitions. A system that was monostable suddenly becomes bistable; a steady state that was attracting suddenly becomes repelling and a limit cycle appears. Bifurcation theory is the mathematical study of these qualitative transitions: it catalogues the distinct ways that the phase portrait of a dynamical system can change as a parameter varies, and provides normal forms — minimal canonical models — that capture the universal behaviour near each transition.

The central insight is that near a bifurcation point, all systems of the same codimension and type look identical after an appropriate change of coordinates. This means that a few universal normal forms capture an enormous diversity of biological phenomena, and that qualitative predictions (the system will oscillate, or will be bistable) can be made on topological grounds without knowing precise parameter values.

8.2 Saddle-Node Bifurcation

The saddle-node bifurcation is the simplest and most ubiquitous bifurcation. It describes the creation or annihilation of a pair of fixed points — one stable, one unstable — as a parameter varies.

Definition (Saddle-Node Bifurcation): A one-dimensional ODE \(\dot{x} = f(x, \mu)\) undergoes a saddle-node bifurcation at \((x_0, \mu_0)\) if:
  1. \(f(x_0, \mu_0) = 0\) (the point is a fixed point at the bifurcation).
  2. \(f_x(x_0, \mu_0) = 0\) (the fixed point is non-hyperbolic).
  3. \(f_{\mu}(x_0, \mu_0) \neq 0\) (the parameter truly moves the vector field).
  4. \(f_{xx}(x_0, \mu_0) \neq 0\) (the quadratic term is non-degenerate).
The normal form is \[ \dot{x} = \mu - x^2. \] For \(\mu < 0\): no fixed points. For \(\mu = 0\): one semi-stable fixed point at \(x = 0\). For \(\mu > 0\): two fixed points, \(x_{\pm} = \pm\sqrt{\mu}\), with \(x_+ = +\sqrt{\mu}\) stable (since \(f_x(x_+) = -2\sqrt{\mu} < 0\)) and \(x_- = -\sqrt{\mu}\) unstable.

The bifurcation diagram for the normal form \(\dot{x} = \mu - x^2\) has a characteristic parabolic shape in the \((\mu, x)\) plane: the curve of fixed points is \(x = \pm\sqrt{\mu}\), existing only for \(\mu \geq 0\). The upper branch (\(x_+ = +\sqrt{\mu}\)) is stable (solid curve or filled circles in a bifurcation diagram) and the lower branch (\(x_- = -\sqrt{\mu}\)) is unstable (dashed curve or open circles). At \(\mu = 0\), the two branches collide and annihilate — a fold in the bifurcation curve. Moving the parameter from \(\mu > 0\) to \(\mu < 0\) causes the stable steady state to disappear suddenly, potentially triggering a large-amplitude excursion to a distant attractor. This catastrophic jump is the mathematical signature of a tipping point.

Example — Saddle-node bifurcation in a gene expression switch: Consider a self-activating gene where protein \(x\) activates its own transcription via a Hill function, balanced by linear degradation: \[ \dot{x} = \frac{\mu x^2}{K^2 + x^2} - \gamma x, \] with \(\mu > 0\) the maximal self-activation rate, \(K = 1\,\mu\text{M}\) the half-saturation constant, \(\gamma = 1\,\text{min}^{-1}\) the degradation rate. Fixed points satisfy \(\mu x/(K^2 + x^2) = \gamma\), i.e., \(\mu x = \gamma(K^2 + x^2)\) (dividing by \(x > 0\)). \[ x_{\pm} = \frac{\mu \pm \sqrt{\mu^2 - 4\gamma^2 K^2}}{2\gamma}. \]

Real solutions exist only when the discriminant is non-negative: \(\mu^2 \geq 4\gamma^2 K^2\), i.e., \(\mu \geq 2\gamma K = 2 \times 1 \times 1 = 2\,\mu\text{M/min}\). At the critical parameter \(\mu_c = 2\,\mu\text{M/min}\), the two fixed points collide at \(x_c = \mu_c/(2\gamma) = 1\,\mu\text{M} = K\).

For \(\mu = 3\,\mu\text{M/min}\) (above the critical value), the two nonzero fixed points are \[ x_{\pm} = \frac{3 \pm \sqrt{9 - 4}}{2} = \frac{3 \pm \sqrt{5}}{2}. \]

So \(x_- = (3 - 2.236)/2 \approx 0.382\,\mu\text{M}\) (unstable) and \(x_+ = (3 + 2.236)/2 \approx 2.618\,\mu\text{M}\) (stable). The system also has a stable fixed point at \(x = 0\) (trivial off-state). This three-fixed-point configuration (\(x = 0\) stable, \(x_-\) unstable threshold, \(x_+\) stable) is the bistable regime of a self-activating switch, entered via a saddle-node bifurcation at \(\mu = \mu_c\).

Biological interpretation: the “off” state at \(x = 0\) and the “on” state at \(x \approx 2.618\,\mu\text{M}\) are two stable gene expression levels. A brief pulse of external signal that transiently increases \(x\) above the unstable threshold at \(x \approx 0.382\,\mu\text{M}\) pushes the system into the high-expression basin, where it remains after the pulse ends: a simple form of cellular memory. In real bistable gene circuits (e.g., the \(CI\) repressor network in \(\lambda\) phage), the topology is analogous and the saddle-node bifurcation marks the boundary of bistability.
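A few lines of Python confirm the fixed points and their stability (the derivative of the rate function is written out by hand; everything else follows from the example's parameters):

import numpy as np

# Stability check for dx/dt = mu*x^2/(K^2 + x^2) - gamma*x with mu = 3, K = 1, gamma = 1.
mu, K, gamma = 3.0, 1.0, 1.0

def dfdx(x):
    return 2.0 * mu * K ** 2 * x / (K ** 2 + x ** 2) ** 2 - gamma   # derivative of the rate

disc = np.sqrt(mu ** 2 - 4.0 * gamma ** 2 * K ** 2)
x_minus = (mu - disc) / (2.0 * gamma)
x_plus = (mu + disc) / (2.0 * gamma)
for x in (0.0, x_minus, x_plus):
    print(f"x* = {x:.3f}:  f'(x*) = {dfdx(x):+.3f}  ->  {'stable' if dfdx(x) < 0 else 'unstable'}")
# x = 0 (off) and x_plus (on) are stable; x_minus is the unstable threshold between them.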

8.3 Pitchfork Bifurcation

When a system has a symmetry (such as \(x \to -x\)), saddle-node bifurcations occur in symmetric pairs and the resulting bifurcation is the pitchfork. This is the archetypal bifurcation for systems with a symmetry that can be spontaneously broken.

Definition (Pitchfork Bifurcation): The supercritical pitchfork bifurcation has normal form \[ \dot{x} = \mu x - x^3. \] For \(\mu \leq 0\): the only fixed point is \(x^* = 0\), which is stable for \(\mu < 0\). For \(\mu > 0\): \(x^* = 0\) becomes unstable, and two new stable fixed points appear at \(x^* = \pm\sqrt{\mu}\). The bifurcation occurs at \(\mu = 0\).

The subcritical pitchfork bifurcation has normal form \(\dot{x} = \mu x + x^3\). For \(\mu < 0\): three fixed points — \(x^* = 0\) (stable) and \(x^* = \pm\sqrt{-\mu}\) (unstable). At \(\mu = 0\), the unstable pair collides with the stable origin and the origin becomes unstable for \(\mu > 0\).

The supercritical pitchfork models spontaneous symmetry breaking: for \(\mu < 0\), the system rests at the symmetric state \(x = 0\); as \(\mu\) increases through 0, the symmetric state becomes unstable and the system settles into one of the two symmetry-broken states \(\pm\sqrt{\mu}\). Which one it chooses is determined by noise or initial conditions.

Example — Supercritical pitchfork: numerical bifurcation diagram: For \(\dot{x} = \mu x - x^3\), compute the fixed points and their stability for several values of \(\mu\): \[ \begin{array}{c|c|c} \mu & \text{Fixed points} & \text{Stability} \\ \hline -2 & x^* = 0 & \text{Stable (eigenvalue } -2\text{)} \\ -1 & x^* = 0 & \text{Stable (eigenvalue } -1\text{)} \\ 0 & x^* = 0 & \text{Marginally stable} \\ 1 & x^* = 0,\; x^* = \pm 1 & x=0 \text{ unstable; } \pm 1 \text{ stable} \\ 4 & x^* = 0,\; x^* = \pm 2 & x=0 \text{ unstable; } \pm 2 \text{ stable} \end{array} \]

Stability check at \(\mu = 1\): The Jacobian \(f_x = \mu - 3x^2\). At \(x=0\): \(f_x = 1 > 0\) (unstable). At \(x = \pm 1\): \(f_x = 1 - 3 = -2 < 0\) (stable). At \(\mu = 4\): at \(x = \pm 2\): \(f_x = 4 - 3(4) = -8 < 0\) (stable).

The bifurcation diagram traces a pitchfork shape in the \((\mu, x^*)\) plane: a single branch \(x^* = 0\) for all \(\mu\) (solid for \(\mu < 0\), dashed for \(\mu > 0\)) flanked by two branches \(x^* = \pm\sqrt{\mu}\) (solid, appearing for \(\mu > 0\)). This Y-shape (the pitchfork) is the characteristic signature of symmetry breaking.

Biological context: The pitchfork bifurcation models symmetry-breaking in development. A cell initially in a symmetric state (\(x = 0\)) can spontaneously break symmetry as a morphogen concentration (playing the role of \(\mu\)) crosses a threshold, committing to one of two differentiation fates (\(x = +\sqrt{\mu}\) or \(x = -\sqrt{\mu}\)). In the Turing model of pattern formation, symmetry breaking of a spatially homogeneous state into a spatially patterned one follows a similar pitchfork structure in the Fourier mode amplitude.

8.4 Hopf Bifurcation — Normal Form and Worked Example

The Hopf bifurcation governs the birth of limit cycle oscillations from a fixed point. The normal form in polar coordinates \((r, \theta)\) for the supercritical case is:

\[ \dot{r} = \mu r - r^3, \qquad \dot{\theta} = \omega, \]

where \(r \geq 0\) is the amplitude of oscillation and \(\omega > 0\) is the angular frequency. For \(\mu < 0\), the only attractor is the fixed point \(r = 0\) (the origin, a stable spiral). For \(\mu > 0\), the fixed point \(r = 0\) is unstable and a stable limit cycle appears at \(r^* = \sqrt{\mu}\) with angular frequency \(\omega\). At \(\mu = 0\), the eigenvalues of the linearisation are exactly \(\pm i\omega\).

Example — Hopf bifurcation in the Brusselator: From the earlier Brusselator analysis, recall the steady state \((x^*, y^*) = (1, b/a)\) with Jacobian \[ J = \begin{pmatrix} b-1 & a \\ -b & -a \end{pmatrix}. \] The trace and determinant are \(\tau = b - 1 - a\) and \(\Delta = a > 0\). The eigenvalues are \(\lambda = (\tau \pm \sqrt{\tau^2 - 4\Delta})/2\). The Hopf bifurcation occurs when \(\tau = 0\), i.e., when \(b = b_c := a + 1\).

Numerical case \(a = 2\): The Hopf bifurcation occurs at \(b_c = 3\). For \(b < 3\): the steady state is stable (trajectories spiral in). For \(b > 3\): the steady state is unstable and a stable limit cycle surrounds it.

Frequency at the bifurcation point: At \(b = b_c\), \(\tau = 0\) and \(\Delta = a = 2\), so the eigenvalues are \(\lambda = \pm i\sqrt{\Delta} = \pm i\sqrt{2}\). The oscillation frequency is \(\omega = \sqrt{2}\), giving a period \(T = 2\pi/\sqrt{2} = \pi\sqrt{2} \approx 4.44\) (in dimensionless time units).

Limit cycle amplitude: For \(b\) slightly above \(b_c\), the normal form theory predicts that the limit cycle amplitude scales as \(r \propto \sqrt{b - b_c}\). At \(b = 3.5\) (i.e., \(b - b_c = 0.5\)): the amplitude is approximately \(r \approx \sqrt{0.5} \approx 0.71\) in the coordinates of the normal form. This square-root scaling is a universal signature of a supercritical Hopf bifurcation.
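A quick eigenvalue scan of the quoted Jacobian (Python; the three values of \(b\) are illustrative) shows the real parts crossing zero at \(b_c = 3\) with imaginary part \(\sqrt{2}\):

import numpy as np

# Eigenvalues of the Brusselator Jacobian quoted above, scanned across b for a = 2.
a = 2.0
for b in (2.5, 3.0, 3.5):
    J = np.array([[b - 1.0, a], [-b, -a]])
    lam = np.linalg.eigvals(J)
    print(f"b = {b}:  Re = {lam[0].real:+.3f}, Im = +/-{abs(lam[0].imag):.3f}")
# b < 3: damped oscillations toward the steady state; b = 3: eigenvalues +/- i*sqrt(2);
# b > 3: the steady state is an unstable focus surrounded by a stable limit cycle.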

The Brusselator limit cycle corresponds biologically to a sustained oscillation in the concentrations of the two chemical species — a chemical clock. In biological systems, the Hopf mechanism similarly underlies the emergence of oscillatory gene expression (circadian clocks, the NF-\(\kappa\)B oscillator, cell cycle protein oscillations).

Remark (Subcritical vs. supercritical Hopf bifurcations in biology): In a supercritical Hopf bifurcation, the limit cycle amplitude grows continuously from zero as the parameter crosses the critical value — the onset of oscillations is gradual and reversible. In a subcritical Hopf bifurcation, a large-amplitude unstable limit cycle already exists for \(\mu < \mu_c\), and at \(\mu = \mu_c\) this unstable limit cycle collides with the stable fixed point. The result is an abrupt, potentially hysteretic onset of oscillations. Many biological oscillators (including some cardiac pacemaker cells and certain neural systems) are subcritical: a small perturbation can trigger a large-amplitude oscillation even when the steady state is nominally stable, a phenomenon called excitability. Distinguishing supercritical from subcritical Hopf bifurcations experimentally requires perturbing the system near the bifurcation and observing whether oscillations grow continuously or jump discontinuously.

Chapter 9: Delay Differential Equations

9.1 Motivation: Time Delays in Biology

Many biological processes involve inherent time delays that cannot be adequately captured by ordinary differential equations. Gene regulation is a canonical example: after a transcription factor binds a promoter, a cascade of events must occur — RNA polymerase recruitment, transcription, mRNA processing, nuclear export, ribosome assembly, and translation — before the encoded protein appears in the cytoplasm. This total delay \(\tau\) can be tens of minutes. If the protein product then feeds back to repress its own gene, the time-delayed negative feedback can generate oscillations that a corresponding ODE model (without delay) would not predict.

Other examples include: immune response delays (cytokine production after pathogen recognition takes hours), population dynamics (gestation periods in predator-prey models), and pharmacokinetics (drug absorption and distribution times). The mathematical framework for these systems is the delay differential equation (DDE).

Definition (Delay Differential Equation): A delay differential equation of retarded type with constant delay \(\tau > 0\) is an equation of the form \[ \dot{x}(t) = f(x(t),\, x(t - \tau)), \] where the rate of change at time \(t\) depends on the state at the earlier time \(t - \tau\). Unlike ODEs, DDEs require specification of the initial history function \(\phi: [-\tau, 0] \to \mathbb{R}\) rather than a single initial condition. The solution is then determined for \(t > 0\) by integrating forward using the known history. The state space of a DDE is infinite-dimensional (a function space), even for a scalar equation.

9.2 The Simplest DDE: Linear Stability Analysis

The canonical scalar DDE with negative feedback is:

\[ \dot{x}(t) = -k\, x(t - \tau), \]

where \(k > 0\) is the feedback gain and \(\tau > 0\) is the time delay. The unique steady state is \(x^* = 0\). To analyse stability, we seek solutions of the form \(x(t) = e^{\lambda t}\) (a characteristic solution). Substituting:

\[ \lambda e^{\lambda t} = -k e^{\lambda(t-\tau)} = -k e^{-\lambda\tau} e^{\lambda t}. \]

Dividing by \(e^{\lambda t} \neq 0\) gives the characteristic equation:

\[ \lambda = -k e^{-\lambda\tau}. \]

This transcendental equation has infinitely many complex roots \(\lambda_n\), in contrast to a polynomial characteristic equation from an ODE. For stability, we need all roots to satisfy \(\text{Re}(\lambda) < 0\).

Theorem (Stability of \(\dot{x} = -kx(t-\tau)\)): The zero steady state of \(\dot{x}(t) = -k\,x(t-\tau)\) with \(k > 0\) is:
  • Asymptotically stable if \(k\tau < \pi/2\).
  • Unstable (oscillatory) if \(k\tau > \pi/2\).
  • At the stability boundary \(k\tau = \pi/2\), the characteristic equation has purely imaginary roots \(\lambda = \pm ik\), and the solution is neutrally stable with oscillation frequency \(\omega = k\).
Proof (Sketch): Seek the stability boundary where \(\lambda = i\omega\) (purely imaginary). Substituting into the characteristic equation: \(i\omega = -ke^{-i\omega\tau} = -k(\cos\omega\tau - i\sin\omega\tau)\). Separating real and imaginary parts: real part gives \(0 = -k\cos\omega\tau\), so \(\omega\tau = \pi/2\) (taking the smallest positive solution). Imaginary part gives \(\omega = k\sin(\pi/2) = k\). Therefore, the critical delay is \(\tau_c = \pi/(2k)\), or equivalently \(k\tau_c = \pi/2\).
Example — DDE oscillations from time delay in a gene circuit: Consider a model of a self-repressing gene where protein \(p(t)\) represses its own production after a delay \(\tau\) (representing transcription, splicing, and translation time): \[ \dot{p}(t) = \frac{\beta}{1 + (p(t-\tau)/K)^n} - \gamma p(t). \] This is the delayed Goodwin oscillator. The unique positive steady state satisfies \(p^* = \beta/(\gamma(1 + (p^*/K)^n))\). Linearising about \(p^*\) by writing \(p(t) = p^* + \delta(t)\), and letting \(A = \beta n (p^*/K)^{n-1}/\big(K\,(1 + (p^*/K)^n)^2\big) > 0\) denote the magnitude of the slope of the repression term at the steady state, gives \[ \dot{\delta}(t) = -A\,\delta(t - \tau) - \gamma\,\delta(t). \] Seeking purely imaginary roots \(\lambda = i\omega\) of the characteristic equation yields \[ i\omega + \gamma + A(\cos\omega\tau - i\sin\omega\tau) = 0. \]

Real part: \(\gamma + A\cos\omega\tau = 0 \Rightarrow \cos\omega\tau = -\gamma/A\). Imaginary part: \(\omega - A\sin\omega\tau = 0 \Rightarrow \sin\omega\tau = \omega/A\). Adding squares: \(\gamma^2 + \omega^2 = A^2\), so \(\omega_c = \sqrt{A^2 - \gamma^2}\) (real only if \(A > \gamma\)).

Numerical case: Let \(\beta = 10\,\mu\text{M/min}\), \(K = 2\,\mu\text{M}\), \(n = 4\), \(\gamma = 0.5\,\text{min}^{-1}\). At the steady state \(p^* \approx 3.1\,\mu\text{M}\) (found numerically from \(\gamma p^*(1 + (p^*/K)^n) = \beta\)), we compute \(A \approx 1.7\,\text{min}^{-1}\). Since \(A > \gamma\), the critical frequency is \(\omega_c = \sqrt{1.7^2 - 0.5^2} = \sqrt{2.89 - 0.25} = \sqrt{2.64} \approx 1.62\,\text{min}^{-1}\). The critical delay is \(\tau_c = (1/\omega_c)\arccos(-\gamma/A) = (1/1.62)\arccos(-0.29) \approx (1/1.62)(1.87) \approx 1.15\,\text{min}\). For \(\tau > \tau_c \approx 1.15\,\text{min}\), sustained oscillations emerge. The oscillation period at the bifurcation is \(T = 2\pi/\omega_c = 2\pi/1.62 \approx 3.9\,\text{min}\).
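These numbers can be reproduced with a short root-finding sketch (Python; bisection is used for the steady state, and the expression for \(A\) is the slope of the repression term defined above):

import numpy as np

# Delayed-feedback example: beta = 10, K = 2, n = 4, gamma = 0.5.
beta, K, n, gamma = 10.0, 2.0, 4, 0.5

lo, hi = 1e-6, 20.0                     # bisection for gamma*p*(1 + (p/K)^n) = beta
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if gamma * mid * (1.0 + (mid / K) ** n) < beta:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)

A = beta * n * (p_star / K) ** (n - 1) / (K * (1.0 + (p_star / K) ** n) ** 2)   # slope at p*
omega_c = np.sqrt(A ** 2 - gamma ** 2)              # real only because A > gamma here
tau_c = np.arccos(-gamma / A) / omega_c
print(f"p* = {p_star:.2f} uM, A = {A:.2f} /min")
print(f"omega_c = {omega_c:.2f} /min, tau_c = {tau_c:.2f} min, period = {2 * np.pi / omega_c:.2f} min")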

Biological interpretation: In mammalian cells, the Hes1 transcription factor represses its own transcription with a time delay (due to nuclear export, translation, and reimport) of approximately 10–30 minutes, generating oscillations with period ~2 hours. The DDE framework correctly captures this mechanism: the oscillation period is set by the delay, not by any intrinsic biochemical rate constant, explaining why the period is robust to parameter variation and primarily determined by the transport time.

Remark (DDEs vs. ODEs with extra variables): A DDE with delay \(\tau\) can sometimes be approximated by a chain of \(n\) ODEs using the "linear chain trick": replace the delay by \(n\) sequential first-order intermediates, each with time constant \(\tau/n\). As \(n \to \infty\), the chain approaches a perfect delay. This is useful numerically and also explains why the Goodwin oscillator with many intermediate steps behaves like a delayed negative feedback loop. However, for a fixed finite \(n\), the chain approximation has different stability properties than the true DDE: the true DDE can oscillate even as a single scalar equation, while the corresponding chain of ODEs without delay requires at least three variables for oscillations. The DDE is therefore the more parsimonious model when a genuine time delay is known to exist.

Chapter 10: Stochastic Models and Intrinsic Noise

10.1 Why Stochasticity Matters in Biology

The deterministic ODE framework assumes that molecular concentrations are continuous variables evolving smoothly. In reality, molecules are discrete entities, and at the low copy numbers typical of gene regulatory proteins (tens to hundreds of molecules per cell), individual molecular events — a single transcription factor binding its promoter, a single mRNA being translated — can cause significant fluctuations. These fluctuations are intrinsic noise: they arise from the stochastic nature of chemical reactions even in a perfectly controlled environment.

Extrinsic noise, by contrast, arises from cell-to-cell variability in the concentrations of upstream regulators (RNA polymerases, ribosomes, signalling proteins). Both sources of noise are observable in single-cell experiments (fluorescence microscopy of individual cells) and contribute to the heterogeneity of cell populations — even genetically identical cells in identical conditions exhibit different protein levels.

Stochastic models are essential when: (1) copy numbers are low, (2) the system is near a bifurcation point (where noise can trigger transitions between attractors), (3) the timing of rare events (e.g., cell division, viral integration) matters, or (4) the qualitative behaviour of a deterministic model is noise-sensitive (e.g., bistability with a low barrier between states).

10.2 The Langevin Equation

The Langevin equation provides a bridge between deterministic and stochastic dynamics by adding a noise term to the ODE:

Definition (Langevin Equation): The Langevin equation for a scalar state \(x(t)\) is \[ \dot{x}(t) = f(x(t)) + g(x(t))\,\xi(t), \] where \(f(x)\) is the deterministic drift, \(g(x)\) is the noise amplitude (which may depend on state), and \(\xi(t)\) is Gaussian white noise with mean zero and correlation \(\langle \xi(t)\xi(t') \rangle = \delta(t-t')\). The term \(g(x)\,\xi(t)\) represents the random fluctuations due to molecular discreteness. When \(g(x) = \sqrt{D}\) is constant (state-independent noise), this is additive noise; when \(g(x)\) depends on \(x\), this is multiplicative noise.
For the constitutive gene expression model (production at constant rate \(\beta\), degradation at rate \(\gamma\) per molecule), the corresponding chemical Langevin equation is \[ \dot{x} = \beta - \gamma x + \sqrt{\beta + \gamma x}\,\xi(t). \]

The noise amplitude \(\sqrt{\beta + \gamma x}\) is the square root of the total propensity (a consequence of the Poisson statistics of individual reaction events in the linear noise approximation).
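An Euler-Maruyama sketch of this equation (Python; \(\beta = 5\) molecules/min and \(\gamma = 0.1\,\text{min}^{-1}\) are illustrative values, and the trajectory is reflected at zero since copy numbers cannot be negative) recovers the expected mean and variance of \(\beta/\gamma = 50\):

import numpy as np

# Euler-Maruyama integration of the chemical Langevin equation above.
rng = np.random.default_rng(1)
beta, gamma = 5.0, 0.1
x, dt = beta / gamma, 0.01
total = total_sq = count = 0
for step in range(500_000):
    drift = beta - gamma * x
    noise_amp = np.sqrt(max(beta + gamma * x, 0.0))     # square root of the total propensity
    x += drift * dt + noise_amp * np.sqrt(dt) * rng.normal()
    x = max(x, 0.0)                                     # reflect at zero
    if step > 10_000:                                   # discard the transient
        total += x
        total_sq += x * x
        count += 1
mean = total / count
var = total_sq / count - mean ** 2
print(f"mean = {mean:.1f}, variance = {var:.1f}")       # both close to beta/gamma = 50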

10.3 The Fokker-Planck Equation

Rather than tracking individual stochastic trajectories, one can describe the evolution of the probability distribution \(P(x, t)\) — the probability density of finding the system at state \(x\) at time \(t\).

Definition (Fokker-Planck Equation): For a Langevin process \(\dot{x} = f(x) + g(x)\xi(t)\), the time evolution of the probability density \(P(x, t)\) is governed by the Fokker-Planck equation: \[ \frac{\partial P}{\partial t} = -\frac{\partial}{\partial x}\left[f(x)\,P\right] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\left[D(x)\,P\right], \] where \(D(x) = g(x)^2\) is the local diffusion coefficient. The first term is the drift term (deterministic flow of probability), and the second is the diffusion term (spreading of probability due to noise). The stationary distribution \(P_{stat}(x)\) satisfies \(\partial P_{stat}/\partial t = 0\).
For additive noise of constant strength \(D\) and a drift derived from a potential, \(f(x) = -U'(x)\), the stationary solution is \[ P_{stat}(x) \propto \exp\!\left(-\frac{2U(x)}{D}\right). \]

This is a Boltzmann-like distribution: the system is most likely to be found near the minima of the potential \(U(x)\) (stable fixed points), with probability exponentially decreasing toward the maxima (unstable fixed points). The ratio of the noise strength \(D\) to the potential barrier height determines the rate of noise-driven transitions between stable states.

10.4 Intrinsic vs. Extrinsic Noise in Gene Expression

Definition (Fano Factor): For a random variable \(X\) (such as protein copy number), the Fano factor is \[ F = \frac{\text{Var}(X)}{\langle X \rangle}. \] For a Poisson distribution, \(F = 1\) (since \(\text{Var} = \langle X \rangle\) for Poisson). A Fano factor \(F > 1\) indicates super-Poissonian or bursty statistics; \(F < 1\) indicates sub-Poissonian statistics (less variable than Poisson, achievable in biochemical systems only through active noise-suppression mechanisms such as negative feedback).
Example — Fano factor for a simple birth-death process: Consider a protein produced at constant rate \(\beta\) (molecules/min) and degraded at rate \(\gamma\) per molecule. The stochastic dynamics are a birth-death process with birth propensity \(\beta\) and death propensity \(\gamma n\) when \(n\) molecules are present. The master equation for the probability \(P(n, t)\) is: \[ \frac{dP(n,t)}{dt} = \beta P(n-1,t) - \beta P(n,t) + \gamma(n+1)P(n+1,t) - \gamma n P(n,t). \] At steady state, this has an exact Poisson solution: \(P_{stat}(n) = e^{-\mu}\mu^n/n!\) with mean \(\mu = \beta/\gamma\). Therefore: \[ \langle n \rangle = \frac{\beta}{\gamma}, \quad \text{Var}(n) = \frac{\beta}{\gamma} = \langle n\rangle, \quad F = \frac{\text{Var}(n)}{\langle n\rangle} = 1. \]

Numerical example: If \(\beta = 5\) molecules/min and \(\gamma = 0.1\,\text{min}^{-1}\), then the mean protein level is \(\langle n \rangle = 5/0.1 = 50\) molecules, the variance is also 50, and the Fano factor is \(F = 1\). The coefficient of variation is \(CV = \sigma/\langle n\rangle = \sqrt{50}/50 = 1/\sqrt{50} \approx 0.141\), or 14.1% noise.

Comparison with bursty expression: If instead the protein is produced in bursts of mean size \(b = 5\) molecules per burst, with burst frequency \(k_b = 1\,\text{min}^{-1}\) and \(\gamma = 0.1\,\text{min}^{-1}\), the mean is unchanged: \(\langle n \rangle = k_b b / \gamma = 1\times 5/0.1 = 50\). But the variance is now \(\text{Var}(n) = \langle n\rangle(1 + b) = 50 \times 6 = 300\), giving a Fano factor \(F = 300/50 = 6\). The bursty gene is 6 times noisier (in units of mean protein) than the Poisson baseline. This super-Poissonian noise (\(F = b + 1\)) is experimentally distinguishable from Poisson noise by measuring both mean and variance across single cells.

10.5 The Two-State Promoter Model

A widely used stochastic model of gene expression is the two-state promoter (also called the telegraph model). The promoter switches between an active state (ON) and an inactive state (OFF):

\[ \text{OFF} \underset{k_{off}}{\stackrel{k_{on}}{\rightleftharpoons}} \text{ON} \xrightarrow{\beta} \text{mRNA} \xrightarrow{\gamma} \emptyset. \]

When the promoter is in the ON state, mRNA is produced at rate \(\beta\); in the OFF state, production is zero. mRNA degrades at rate \(\gamma\). This model accounts for bursty transcription: when \(k_{off} \gg k_{on}\) (the promoter spends most time OFF but occasionally fires), mRNA is produced in bursts, each burst lasting an exponentially distributed ON period of mean duration \(1/k_{off}\) and containing a geometrically distributed number of transcripts with mean \(\beta/k_{off}\).

Remark (Experimental measurement of noise in gene expression): The two-state promoter model was decisively validated by single-molecule fluorescence experiments in bacteria (Golding et al., Cell 2005), which showed that individual mRNA molecules appear in discrete bursts with inter-burst times that are approximately exponentially distributed. The burst size distribution was found to be approximately geometric, consistent with the model. These experiments established that transcriptional bursting is a fundamental feature of gene expression in bacteria and, subsequently, in eukaryotes. The Fano factor of mRNA copy number provides a direct experimental readout of the effective burst size \(b = \beta/k_{off}\): a Fano factor of 10 implies that mRNA is produced in bursts of average size 10. This connection between the measurable statistical quantity (Fano factor) and the molecular mechanism (burst size) is one of the key results of stochastic gene expression theory and motivates the precise single-cell measurements now routine in systems biology.

Chapter 11: Cell Cycle Models

11.1 The Cell Cycle as a Biochemical Switch

The eukaryotic cell cycle — the ordered sequence of events by which a cell duplicates its genome (S phase) and divides (M phase), interspersed with gap phases (G1 and G2) — is controlled by a network of cyclin-dependent kinases (CDKs) and their regulatory partners. A central question for mathematical biology is: what ensures that the cell cycle is unidirectional and irreversible? Why does a cell committed to mitosis not spontaneously revert to interphase?

The answer lies in bistability. The transition from G1 to S phase (the “Start” transition in yeast) and from G2 to M phase (the “mitotic entry” transition) are both governed by bistable switches with hysteresis. Once the switch has been thrown in one direction, the cell remains committed even if the triggering signal is reduced — a property essential for robust cell cycle progression.

11.2 The Tyson–Novak Model: MPF Activation

The Tyson–Novak model (Tyson, 1991; Novak & Tyson, 1993) describes the mutual inhibition network controlling MPF (Maturation Promoting Factor), the complex of cyclin B and CDK1 that drives mitotic entry. The core circuit involves:

  • Cyclin B (CycB) is synthesised constitutively and degraded in an MPF-dependent manner (via the APC/C; see below).
  • CDK1 is always present but held inactive by phosphorylation on Tyr15 (by Wee1 kinase) and activated by dephosphorylation (by Cdc25 phosphatase).
  • MPF = CycB/CDK1 complex. Active MPF activates Cdc25 (positive feedback) and inhibits Wee1 (double-negative = positive feedback).
  • APC/C:Cdc20 (the Anaphase Promoting Complex) degrades cyclin B, providing negative feedback. Its activation requires MPF and additional signals.

The double-positive feedback loop (MPF activates Cdc25, which activates MPF; MPF inhibits Wee1, which normally inhibits MPF) creates a bistable switch.

Definition (Mutual Activation Switch): A mutual activation switch arises when two proteins A and B each activate the other through nonlinear (ultrasensitive) responses, creating a positive feedback loop with two stable states: (A high, B high) and (A low, B low). In the cell cycle context, MPF and Cdc25 form such a mutual activation system, while MPF and Wee1 form a mutual inhibition system (equivalent to mutual activation via double-negative logic).

11.3 Simplified Bistable MPF Model

A minimal model for the MPF switch uses a single variable \(m\) representing the fraction of CDK1 in the active (dephosphorylated) form, controlled by the balance of Cdc25 activation and Wee1 inactivation. The rate of CDK1 activation by Cdc25 is an increasing, ultrasensitive (sigmoidal) function of \(m\), while the rate of CDK1 inactivation by Wee1 is a decreasing sigmoidal function. The net equation:

\[ \frac{dm}{dt} = \frac{V_{Cdc25}\,m^n}{K_{Cdc25}^n + m^n} \cdot (1 - m) - \frac{V_{Wee1}}{1 + (m/K_{Wee1})^n} \cdot m, \]

where the first term is Cdc25-mediated activation of inactive CDK1 (fraction \(1-m\)) and the second term is Wee1-mediated inactivation of active CDK1 (fraction \(m\)).

Example — Bistability in the MPF switch: worked numerical example: Use parameters \(V_{Cdc25} = 1\,\text{min}^{-1}\), \(K_{Cdc25} = 0.5\), \(V_{Wee1} = 1\,\text{min}^{-1}\), \(K_{Wee1} = 0.5\), and Hill coefficient \(n = 4\). We look for fixed points of \[ \dot{m} = \frac{m^4}{0.0625 + m^4}(1-m) - \frac{m}{1+(2m)^4}. \] The trivial state \(m = 0\) is always a fixed point (both terms vanish there). For \(m > 0\), setting \(\dot{m} = 0\) and dividing through by \(m\) gives \[ \frac{m^3(1-m)}{0.0625 + m^4} = \frac{1}{1+(2m)^4}. \]

Numerically, this equation has two solutions for \(m > 0\), so (together with the trivial state) the full system has three fixed points for these parameters:

  • \(m_1^* = 0\) (stable — the G1 “off” state, CDK1 inactive)
  • \(m_2^* = 0.5\) (unstable — the threshold separating the two basins of attraction; exactly \(0.5\) here because \(V_{Cdc25} = V_{Wee1}\) and \(K_{Cdc25} = K_{Wee1}\))
  • \(m_3^* \approx 0.92\) (stable — the M “on” state, CDK1 mostly active)

Verification of stability at \(m_1^* = 0\): near \(m = 0\) the Cdc25 activation term is of order \(m^4/K_{Cdc25}^4\) and is negligible, while the Wee1 inactivation term is of order \(m\); hence \(dm/dt \approx -V_{Wee1}\,m < 0\) for small \(m > 0\). Any small pool of active CDK1 is re-inactivated by Wee1 and \(m\) returns to 0, confirming stability of the low state.
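The three fixed points and their stability can be checked numerically. The following is a minimal sketch (the function name dmdt and the bracketing-and-bisection approach are illustrative choices, not from the course notes):

```python
import numpy as np
from scipy.optimize import brentq

# Right-hand side of the one-variable MPF switch, with the worked-example
# parameters V_Cdc25 = V_Wee1 = 1 /min, K_Cdc25 = K_Wee1 = 0.5, n = 4.
def dmdt(m, V_c=1.0, K_c=0.5, V_w=1.0, K_w=0.5, n=4):
    activation = V_c * m**n / (K_c**n + m**n) * (1.0 - m)
    inactivation = V_w / (1.0 + (m / K_w)**n) * m
    return activation - inactivation

# Bracket sign changes of dm/dt on (0, 1] and refine each root with brentq.
grid = np.linspace(1e-3, 1.0, 2000)
vals = dmdt(grid)
roots = [0.0]  # m = 0 is always a fixed point of this model
for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:
        roots.append(brentq(dmdt, a, b))

# A fixed point is stable if d(dm/dt)/dm < 0 there (central difference).
for m_star in roots:
    eps = 1e-5
    slope = (dmdt(m_star + eps) - dmdt(m_star - eps)) / (2 * eps)
    print(f"m* = {m_star:.3f}  ->  {'stable' if slope < 0 else 'unstable'}")
```

Running this prints the low stable state at \(m = 0\), the unstable threshold at \(m = 0.5\), and the high stable state near \(m \approx 0.92\).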

Bistability mechanism: G1 cells (low CDK1 activity) are maintained at \(m_1^* = 0\). For mitotic entry, a triggering event (e.g., cyclin B accumulation or CDK-activating kinase) must push \(m\) above the unstable threshold \(m_2^* = 0.5\). Once past this threshold, the positive feedback from Cdc25 (and relief of Wee1 inhibition) drives \(m\) rapidly to the M-phase state \(m_3^* \approx 0.92\). The hysteresis means that a cell in M phase will not revert to G1 even if the triggering signal is reduced below the original threshold — it remains in M phase until the APC/C degrades cyclin B and the system undergoes a reverse saddle-node bifurcation back to the G1 state.

Two stable steady states correspond to two cell cycle phases: \(m \approx 0\) is the G1 phase (interphase), and \(m \approx 0.92\) is the M phase (mitosis). The cell cycle can be understood as a trajectory that visits these two attractors in sequence, driven by the accumulation and degradation of cyclin B.

Remark (Experimental evidence for cell cycle bistability): The bistability of the MPF switch was demonstrated experimentally by Sha et al. (2003) using cell-free Xenopus egg extracts. By titrating in cyclin B at different concentrations and observing CDK1 activity, they observed hysteresis: at low cyclin B concentrations, CDK1 activity was low (G1-like state); as cyclin B was increased, CDK1 activity jumped discontinuously to a high level at a threshold concentration. Upon subsequently reducing cyclin B, CDK1 activity did not return to the low state until cyclin B fell below a different (lower) threshold — confirming the hysteresis that is the hallmark of bistability. This elegant experiment directly validated the Tyson–Novak theoretical prediction made a decade earlier.

Chapter 12: Spatial Models and Turing Instability

12.1 Reaction-Diffusion Systems

The models developed so far assume a well-mixed intracellular environment — all molecules are uniformly distributed in space and interact equally. This is a reasonable approximation for small cells (bacteria) or rapidly diffusing small molecules, but breaks down for larger cells (eukaryotes), tissue-level phenomena, and any situation where spatial gradients are the biological signal (morphogen gradients in development, calcium waves, neural pattern formation).

Reaction-diffusion (RD) systems couple chemical kinetics with molecular diffusion:

Definition (Reaction-Diffusion PDE): For species with concentrations \(u(\mathbf{x}, t)\) and \(v(\mathbf{x}, t)\) in a spatial domain \(\Omega \subseteq \mathbb{R}^d\), a two-component reaction-diffusion system is: \[ \frac{\partial u}{\partial t} = D_u \nabla^2 u + f(u,v), \] \[ \frac{\partial v}{\partial t} = D_v \nabla^2 v + g(u,v), \] where \(D_u, D_v > 0\) are diffusion coefficients and \(f, g\) are the reaction kinetics (the same nonlinearities as in the ODE model). The Laplacian \(\nabla^2 u = \partial^2 u/\partial x^2\) (in 1D) captures Fickian diffusion.
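This definition translates directly into a simple numerical scheme. Below is a minimal sketch (the function simulate_rd_1d, its defaults, and the time-step heuristic are illustrative choices, not from the course notes) of explicit finite-difference integration of a two-species system on a 1D domain with zero-flux boundaries; the kinetics \(f\) and \(g\) are passed in as functions, so any model from the course (e.g., the Brusselator used in the example below) can be plugged in.

```python
import numpy as np

def simulate_rd_1d(f, g, Du, Dv, u0, v0, L=50.0, T=100.0, dt=None):
    """Explicit (forward Euler) integration of a two-species reaction-diffusion
    system on [0, L] with zero-flux (Neumann) boundary conditions."""
    u, v = u0.copy(), v0.copy()
    dx = L / (u.size - 1)
    # Stability of the explicit scheme requires roughly dt < dx^2 / (2*max(Du, Dv)).
    if dt is None:
        dt = 0.2 * dx**2 / max(Du, Dv)
    for _ in range(int(T / dt)):
        # Discrete Laplacian; edge padding (ghost node = boundary value) enforces zero flux.
        up, vp = np.pad(u, 1, mode="edge"), np.pad(v, 1, mode="edge")
        lap_u = (up[2:] - 2 * u + up[:-2]) / dx**2
        lap_v = (vp[2:] - 2 * v + vp[:-2]) / dx**2
        u = u + dt * (Du * lap_u + f(u, v))
        v = v + dt * (Dv * lap_v + g(u, v))
    return u, v
```

Seeding \(u_0, v_0\) with the homogeneous steady state plus small random noise and using Turing-unstable kinetics should produce a stationary spatial pattern whose wavelength is close to the linear prediction \(2\pi/k_{max}\) derived below.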

The remarkable discovery of Alan Turing (1952) was that diffusion — naively expected only to smooth out concentration differences — can actually drive pattern formation from a uniform state. A homogeneous steady state that is stable in the ODE (no diffusion) can become unstable when diffusion is included, if the two species diffuse at very different rates. This is diffusion-driven instability or Turing instability.

12.2 Turing’s Diffusion-Driven Instability

Suppose \((u^*, v^*)\) is a stable steady state of the well-mixed (ODE) system. To analyse the RD system, linearise about \((u^*, v^*)\) and write the perturbation as a Fourier mode \(\delta u = U e^{\sigma t + ikx}\), \(\delta v = V e^{\sigma t + ikx}\), where \(k\) is the spatial wavenumber. Substituting into the linearised PDE gives the dispersion relation — the growth rate \(\sigma(k)\) as a function of wavenumber \(k\).

Theorem (Turing Instability Conditions): Let \((u^*, v^*)\) be a stable homogeneous steady state of the ODE system, so that the Jacobian \(J\) satisfies \(\text{tr}(J) < 0\) and \(\det(J) > 0\). The homogeneous steady state of the reaction-diffusion system can become unstable (Turing unstable) only if the following conditions hold simultaneously:
  1. The species play asymmetric roles: one is a self-activator (the activator, with \(J_{11} > 0\)) and the other is a self-inhibitor or cross-inhibitor (the inhibitor, with \(J_{22} < 0\)).
  2. The inhibitor diffuses faster than the activator: \(D_v \gg D_u\) (the local activation–long-range inhibition condition).
Under these conditions, there exists a range of wavenumbers \(k \in (k_1, k_2)\) for which \(\sigma(k) > 0\) — the Fourier modes in this band grow exponentially, selecting a preferred spatial wavelength \(\lambda = 2\pi/k_{max}\) where \(k_{max}\) is the wavenumber of maximum growth rate.
The dispersion relation of the linearised reaction-diffusion system is \[ \sigma(k) = \frac{1}{2}\left[(J_{11} - D_u k^2) + (J_{22} - D_v k^2) \pm \sqrt{\left((J_{11} - D_u k^2) - (J_{22} - D_v k^2)\right)^2 + 4J_{12}J_{21}}\right]. \] Because the trace \((J_{11} - D_u k^2) + (J_{22} - D_v k^2) \le \text{tr}(J) < 0\) for every \(k\), instability can arise only through the determinant of the linearised matrix becoming negative at some wavenumber: \[ h(k^2) := D_u D_v k^4 - (D_v J_{11} + D_u J_{22})k^2 + \det(J) < 0 \quad \text{for some } k^2 > 0. \] Since \(\det(J) > 0\), this requires \(D_v J_{11} + D_u J_{22} > 0\) (hence condition 1, with \(D_v\) sufficiently larger than \(D_u\)) together with \[ (D_v J_{11} + D_u J_{22})^2 > 4 D_u D_v \det(J), \] which guarantees that the minimum of \(h\) is negative.
Example — Turing instability in the Brusselator with diffusion: Consider the Brusselator with diffusion: \[ \frac{\partial u}{\partial t} = D_u u_{xx} + 1 - (b+1)u + au^2v, \] \[ \frac{\partial v}{\partial t} = D_v v_{xx} + bu - au^2v. \] The homogeneous steady state is \((u^*, v^*) = (1, b/a)\) with Jacobian \[ J = \begin{pmatrix} b-1 & a \\ -b & -a \end{pmatrix}. \] So \(J_{11} = b-1\), \(J_{12} = a\), \(J_{21} = -b\), \(J_{22} = -a\), \(\text{tr}(J) = b-1-a\), \(\det(J) = a\).

For the ODE to be stable, we need \(\text{tr}(J) < 0\), i.e., \(b < a + 1\).

Specific parameters: Set \(a = 2\), \(b = 4\), \(D_u = 1\), \(D_v = 8\). Check: \(\text{tr}(J) = 4 - 1 - 2 = 1 > 0\), so the ODE is actually unstable — the Brusselator is already oscillating for these \(a\), \(b\) values. Let us instead use \(a = 2\), \(b = 2\) (so \(\text{tr}(J) = 2-1-2 = -1 < 0\), stable ODE) and \(D_u = 0.5\), \(D_v = 8\).

Then: \(J_{11} = 1\), \(J_{12} = 2\), \(J_{21} = -2\), \(J_{22} = -2\), \(\det(J) = (1)(-2) - (2)(-2) = -2 + 4 = 2 > 0\). The ODE is stable.

Check the Turing conditions: first, \(D_v J_{11} + D_u J_{22} = 8 \times 1 + 0.5 \times (-2) = 7 > 0\); second, compare \((D_v J_{11} + D_u J_{22})^2\) against \(4 D_u D_v \det(J)\): \[ (8 \times 1 + 0.5 \times (-2))^2 = 7^2 = 49, \qquad 4 \times 0.5 \times 8 \times 2 = 32. \]

Since \(49 > 32\), the Turing instability criterion is satisfied.

The fastest-growing wavenumber (where \(h(k^2)\) is most negative) is \[ k_{max}^2 = \frac{D_v J_{11} + D_u J_{22}}{2 D_u D_v} = \frac{8(1) + 0.5(-2)}{2 \times 0.5 \times 8} = \frac{7}{8} = 0.875, \qquad k_{max} = \sqrt{0.875} \approx 0.935. \]

The preferred spatial wavelength is \(\lambda = 2\pi/k_{max} \approx 2\pi/0.935 \approx 6.72\) (in units of the diffusion length scale \(\sqrt{D_u/\text{rate}}\)). Spatial patterns with this wavelength will grow fastest and emerge preferentially in numerical simulations.

The corresponding growth rate is \[ \sigma(k_{max}) = \frac{1}{2}\left[\tau_k + \sqrt{\tau_k^2 - 4h(k_{max}^2)}\right], \]

where \(\tau_k = (J_{11} - D_u k_{max}^2) + (J_{22} - D_v k_{max}^2) = (1 - 0.5\times 0.875) + (-2 - 8\times 0.875) = (1 - 0.4375) + (-2 - 7) = 0.5625 - 9 = -8.44\), and \(h(k_{max}^2) = D_u D_v k_{max}^4 - (D_v J_{11} + D_u J_{22})k_{max}^2 + \det(J) = 0.5\times 8 \times 0.875^2 - 7 \times 0.875 + 2 = 4 \times 0.766 - 6.125 + 2 = 3.062 - 6.125 + 2 = -1.063\). Since \(h < 0\), we have \(\tau_k^2 - 4h = 71.2 + 4.25 = 75.5\), giving \(\sigma = (-8.44 + \sqrt{75.5})/2 = (-8.44 + 8.69)/2 = 0.13 > 0\). Turing instability is confirmed.
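These hand calculations can be checked by evaluating the dispersion relation numerically across a range of wavenumbers. A minimal sketch (variable and function names are illustrative choices):

```python
import numpy as np

# Linear stability of the Brusselator steady state (a = 2, b = 2) with diffusion
# (Du = 0.5, Dv = 8): growth rate sigma(k) of the Fourier mode exp(sigma*t + i*k*x).
a, b = 2.0, 2.0
Du, Dv = 0.5, 8.0
J = np.array([[b - 1.0, a], [-b, -a]])          # [[1, 2], [-2, -2]]

def sigma(k):
    """Largest real part of the eigenvalues of J - diag(Du, Dv) * k^2."""
    A = J - np.diag([Du, Dv]) * k**2
    return np.linalg.eigvals(A).real.max()

k = np.linspace(0.0, 2.0, 401)
growth = np.array([sigma(kk) for kk in k])
band = k[growth > 0]
print(f"unstable band: {band.min():.2f} < k < {band.max():.2f}")   # roughly 0.60 < k < 1.18
print(f"sigma at k = sqrt(7/8): {sigma(np.sqrt(7/8)):.3f}")        # roughly 0.12, cf. 0.13 above
```

In this example the wavenumber that maximises \(\sigma(k)\) lies slightly below the minimiser of \(h(k^2)\) used above; both sit well inside the unstable band.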

Biological interpretation: In biological systems, Turing patterns are proposed to underlie many spatial structures: the spacing of hair follicles, the stripe patterns on zebrafish, the branching morphogenesis of the lung, and digit patterning in the vertebrate limb. The mathematical requirement — activator diffuses slowly, inhibitor diffuses fast — corresponds to a short-range activating signal (perhaps a membrane-bound protein or a protein with limited diffusivity) paired with a long-range inhibitory signal (a secreted small molecule or rapidly diffusing protein).

Remark (Turing patterns require specific diffusion ratio): A key quantitative prediction of Turing's analysis is that the inhibitor must diffuse substantially faster than the activator — often the ratio \(D_v/D_u\) must exceed 10 or more, depending on the kinetics. This seems hard to achieve for proteins (which diffuse at similar rates in the cytoplasm). In tissues, however, the effective diffusivities of morphogens depend on their binding to extracellular matrix, on active transport, and on cell-mediated degradation. Evidence for Turing-like mechanisms has strengthened with the identification of specific molecular circuits with the required asymmetry in effective diffusivity. The WNT/DKK system (long-range inhibition by Dickkopf, short-range activation by WNT) and the ACTIVIN/NODAL system in early development are candidates. However, definitive proof remains an active area of research, and alternative mechanisms (mechanical instabilities, cell sorting) can produce similar patterns.

Chapter 13: Calcium Oscillations

13.1 The Biology of Calcium Signalling

Calcium ions (Ca\(^{2+}\)) are ubiquitous second messengers: cytosolic Ca\(^{2+}\) concentration rises transiently in response to extracellular signals (hormones, neurotransmitters, growth factors) and triggers diverse cellular responses including muscle contraction, secretion, gene expression, and cell division. The resting cytosolic Ca\(^{2+}\) concentration is approximately \(100\,\text{nM}\), while the concentration in the endoplasmic reticulum (ER) lumen is approximately \(500\,\mu\text{M}\) — a 5000-fold gradient maintained by SERCA pumps. Stimulation causes a brief spike (to \(\sim 1\,\mu\text{M}\) cytosolic Ca\(^{2+}\)), sometimes followed by sustained oscillations with periods ranging from seconds to minutes.

The oscillation mechanism involves calcium-induced calcium release (CICR): Ca\(^{2+}\) released from the ER through IP3 receptors (IP3R) opens neighbouring IP3Rs, amplifying the release. This is a positive feedback loop. The subsequent negative feedbacks — Ca\(^{2+}\)-dependent inactivation of IP3Rs and reuptake by SERCA pumps — allow the system to reset, enabling oscillations.

13.2 The IP3 Receptor and CICR

The IP3 receptor is a ligand-gated Ca\(^{2+}\) channel on the ER membrane. It is activated by the second messenger inositol 1,4,5-trisphosphate (IP3, produced by phospholipase C in response to GPCR or receptor tyrosine kinase signalling) and by Ca\(^{2+}\) itself at low concentrations. At high Ca\(^{2+}\) concentrations, the receptor is inactivated — the bell-shaped dependence of IP3R open probability on cytosolic Ca\(^{2+}\) is the key biophysical feature enabling CICR-driven oscillations.

Definition (CICR — Calcium-Induced Calcium Release): Calcium-induced calcium release is the positive feedback process by which cytosolic Ca\(^{2+}\) directly opens IP3 receptors (and ryanodine receptors), causing further Ca\(^{2+}\) release from the ER. CICR is a key amplification mechanism in Ca\(^{2+}\) signalling. The combination of CICR (positive feedback at low Ca\(^{2+}\)) with Ca\(^{2+}\)-dependent IP3R inactivation (negative feedback at high Ca\(^{2+}\)) creates the conditions for sustained oscillations.

13.3 The Li–Rinzel Model

The Li–Rinzel model (Li and Rinzel, 1994) is a minimal two-variable reduction of the De Young–Keizer model of IP3R dynamics. The two variables are:

  • \(c(t)\): cytosolic Ca\(^{2+}\) concentration (in \(\mu\)M)
  • \(h(t)\): fraction of IP3 receptors not inactivated by Ca\(^{2+}\)

The equations are:

\[ \frac{dc}{dt} = J_{IP3R}(c, h) - J_{pump}(c) + J_{leak}, \qquad \frac{dh}{dt} = \frac{h_\infty(c) - h}{\tau_h(c)}, \]
with fluxes
\[ J_{IP3R} = v_1\, m_\infty^3(p)\, n_\infty^3(c)\, h^3\, (c_{ER} - c), \qquad J_{pump} = \frac{v_3 c^2}{k_3^2 + c^2}, \qquad J_{leak} = v_2(c_{ER} - c), \]
gating functions
\[ m_\infty(p) = \frac{p}{p + d_1}, \qquad n_\infty(c) = \frac{c}{c + d_5}, \qquad h_\infty(c) = \frac{d_2}{c + d_2}, \]

and inactivation time constant \(\tau_h(c) = 1/(a_2(c + d_2))\). Here \(p\) is the IP3 concentration (treated as a fixed parameter), \(c_{ER}\) is the (approximately constant) ER Ca\(^{2+}\) concentration, and \(v_1, v_2, v_3, d_1, d_2, d_5, k_3, a_2\) are kinetic parameters.

Example — Li–Rinzel model: oscillation mechanism and parameter example: Using standard Li–Rinzel parameters: \(v_1 = 7.3\,\mu\text{M/s}\), \(v_2 = 0.11\,\mu\text{M/s}\), \(v_3 = 0.9\,\mu\text{M/s}\), \(d_1 = 0.13\,\mu\text{M}\), \(d_2 = 1.049\,\mu\text{M}\), \(d_5 = 0.08234\,\mu\text{M}\), \(k_3 = 0.1\,\mu\text{M}\), \(a_2 = 0.2\,\mu\text{M}^{-1}\text{s}^{-1}\), \(c_{ER} = 2.0\,\mu\text{M}\), and IP3 concentration \(p = 0.5\,\mu\text{M}\). Evaluating the gating functions at the resting cytosolic concentration \(c = 0.1\,\mu\text{M}\) (with \(h\) taken at its quasi-steady value \(h_\infty(c)\)): \[ m_\infty(0.5) = \frac{0.5}{0.5 + 0.13} \approx 0.794, \quad n_\infty(0.1) = \frac{0.1}{0.1 + 0.082} \approx 0.549, \quad h_\infty(0.1) = \frac{1.049}{0.1 + 1.049} \approx 0.913. \]

At rest, IP3R flux: \(J_{IP3R} \approx 7.3 \times (0.794)^3 \times (0.549)^3 \times (0.913)^3 \times (2.0 - 0.1)\). Computing: \((0.794)^3 \approx 0.501\), \((0.549)^3 \approx 0.166\), \((0.913)^3 \approx 0.761\), so \(J_{IP3R} \approx 7.3 \times 0.501 \times 0.166 \times 0.761 \times 1.9 \approx 7.3 \times 0.120 \approx 0.878\,\mu\text{M/s}\).

SERCA pump flux at rest: \(J_{pump} = 0.9 \times (0.1)^2/(0.01 + (0.1)^2) = 0.9 \times 0.01/0.02 = 0.45\,\mu\text{M/s}\).

Leak flux: \(J_{leak} = 0.11 \times (2.0 - 0.1) = 0.209\,\mu\text{M/s}\).

Net \(dc/dt \approx 0.878 - 0.45 + 0.209 = 0.637\,\mu\text{M/s} > 0\) — the resting state is not a steady state at these parameters; \(c\) will rise, triggering a Ca\(^{2+}\) spike.
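The full time course is easy to obtain numerically. A minimal sketch using scipy.integrate.solve_ivp with the parameter values above (the function name, initial condition, and integration window are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def li_rinzel(t, y, p=0.5):
    """Li-Rinzel equations as written above; p is the (fixed) IP3 concentration."""
    c, h = y
    v1, v2, v3 = 7.3, 0.11, 0.9                   # uM/s
    d1, d2, d5, k3 = 0.13, 1.049, 0.08234, 0.1    # uM
    a2, cER = 0.2, 2.0                            # 1/(uM s), uM
    m_inf = p / (p + d1)
    n_inf = c / (c + d5)
    h_inf = d2 / (c + d2)
    tau_h = 1.0 / (a2 * (c + d2))
    J_ip3r = v1 * (m_inf * n_inf * h)**3 * (cER - c)
    J_pump = v3 * c**2 / (k3**2 + c**2)
    J_leak = v2 * (cER - c)
    return [J_ip3r - J_pump + J_leak, (h_inf - h) / tau_h]

# Start near rest (c = 0.1 uM, h = 0.9) and integrate for 200 s.
sol = solve_ivp(li_rinzel, (0, 200), [0.1, 0.9], max_step=0.05)
# Inspect sol.t and sol.y[0] for Ca2+ spikes; repeating the run while scanning p
# maps out the oscillatory window bounded by the two Hopf bifurcations discussed below.
```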

Oscillation mechanism (qualitative): As \(c\) rises, the positive CICR feedback (increasing \(n_\infty(c)\)) amplifies Ca\(^{2+}\) release. Simultaneously, the slow variable \(h\) decreases (IP3Rs inactivate at high \(c\)). As \(h\) falls, IP3R flux decreases, the SERCA pump dominates, and \(c\) falls back to baseline. At low \(c\), \(h\) recovers (IP3Rs de-inactivate) and the cycle repeats. This is a relaxation oscillation driven by the interplay of fast Ca\(^{2+}\) dynamics and slow IP3R gating.

Period estimation: For IP3 concentrations \(p\) in the physiological range \(0.3\text{–}0.7\,\mu\text{M}\), the model generates oscillations with periods in the range 5–60 seconds, consistent with experimentally observed Ca\(^{2+}\) oscillations in hepatocytes, oocytes, and T lymphocytes.

Bifurcation structure: As IP3 concentration \(p\) increases, the model undergoes a Hopf bifurcation: at low \(p\), there is a stable steady state (no oscillations); at a critical \(p_c\), a stable limit cycle is born via supercritical Hopf bifurcation and oscillations appear. At very high \(p\), the limit cycle disappears through another Hopf bifurcation and \(c\) settles at a high steady state (saturated Ca\(^{2+}\)). This behaviour — oscillations occurring only in an intermediate range of stimulus — is called a “window of oscillation” and is a characteristic feature of Ca\(^{2+}\) signalling models.

Remark (Frequency encoding in calcium oscillations): A key biological property of Ca\(^{2+}\) oscillations is that the oscillation frequency encodes signal intensity. Stronger receptor stimulation (more IP3 produced, higher \(p\)) generates more frequent Ca\(^{2+}\) spikes, not larger amplitude spikes. Downstream Ca\(^{2+}\)-sensitive enzymes (calmodulin, calcineurin, CaM kinase II) act as frequency decoders: they integrate Ca\(^{2+}\) spikes over time, so higher frequency gives stronger time-averaged activation. This frequency encoding strategy preserves signal fidelity over a wide dynamic range of stimulus intensity and is analogous to the pulse-frequency modulation used in electronic signal processing. The Li–Rinzel model correctly predicts the frequency–IP3 relationship observed experimentally.

Chapter 14: Metabolic Flux Analysis — Stoichiometry, Null Space, and Flux Modes

14.1 The Stoichiometric Framework Revisited

In Chapter 1 we introduced the stoichiometric matrix \(S\) as a compact representation of a reaction network. Here we extend this framework to metabolic flux analysis (MFA), a set of mathematical tools for understanding the distribution of reaction fluxes in a metabolic network at steady state. MFA is the quantitative foundation of metabolic engineering — the rational redesign of cellular metabolism for industrial biotechnology — and of the analysis of disease-associated metabolic reprogramming.

The steady-state constraint \(S\mathbf{v} = \mathbf{0}\) defines a linear subspace of flux space, the null space (or kernel) of \(S\). The dimension of this null space, \(\dim(\ker(S)) = n - \text{rank}(S)\), equals the number of degrees of freedom in the steady-state flux distribution — that is, the number of free flux variables that must be specified (by experiment or optimisation) to determine the complete flux distribution.

Definition (Degrees of Freedom in a Metabolic Network): For a metabolic network with \(m\) internal metabolites and \(n\) reactions, and assuming all \(m\) metabolites are at steady state:
  • The stoichiometric rank is \(\rho = \text{rank}(S)\).
  • The number of independent constraints on steady-state fluxes is \(\rho\).
  • The degrees of freedom (number of free fluxes) is \(n - \rho\).
If the network has \(n_{ext}\) exchange fluxes (fluxes with the environment), the internal degrees of freedom are \(n - n_{ext} - \rho_{int}\), where \(\rho_{int}\) is the rank of the submatrix of \(S\) corresponding to internal reactions. Experimentally measuring \(n - \rho\) independent fluxes fully determines the remaining fluxes.

14.2 Null Space and Flux Cone

The set of all feasible steady-state flux vectors forms a flux cone: it is the intersection of the null space of \(S\) (the steady-state constraint) with the positive orthant (irreversibility constraints \(v_i \geq 0\) for irreversible reactions). The flux cone is a polyhedral cone, and its geometric structure encodes all possible metabolic behaviours.

The extreme rays of the flux cone — the generating vectors from which all feasible flux distributions can be composed as non-negative linear combinations — are the elementary flux modes (EFMs). Each EFM represents a minimal set of reactions that can carry a steady-state flux while satisfying stoichiometric and irreversibility constraints.

Definition (Elementary Flux Mode): An elementary flux mode is a flux vector \(\mathbf{e} \in \ker(S)\), \(\mathbf{e} \geq 0\) (for irreversible reactions), such that there is no other feasible flux vector \(\mathbf{e}'\) with a strictly smaller support (i.e., no sub-network of \(\mathbf{e}\) is itself a feasible steady-state flux mode). Equivalently, EFMs are the extreme rays of the flux cone. Any steady-state flux distribution \(\mathbf{v}\) can be decomposed as a non-negative linear combination of EFMs: \(\mathbf{v} = \sum_i \lambda_i \mathbf{e}_i\) with \(\lambda_i \geq 0\).

EFMs have a direct biological interpretation: each EFM corresponds to a distinct metabolic pathway through the network, carrying flux from specific inputs to specific outputs without violating mass balance at any internal metabolite.

14.3 Worked Example: A Four-Reaction Network

Example — Null space, degrees of freedom, and elementary flux modes for a 4-reaction network: Consider a network with 3 internal metabolites (\(A\), \(B\), \(C\)) and 4 reactions: \[ r_1: \emptyset \to A, \quad r_2: A \to B, \quad r_3: A \to C, \quad r_4: B + C \to \emptyset. \] The stoichiometric matrix (rows \(A\), \(B\), \(C\); columns \(r_1\)–\(r_4\)) is \[ S = \begin{pmatrix} 1 & -1 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{pmatrix}. \]

Rank computation: Row reduce \(S\): Row 1: \((1, -1, -1, 0)\); Row 2: \((0, 1, 0, -1)\); Row 3: \((0, 0, 1, -1)\). This is already in row echelon form with 3 non-zero rows, so \(\text{rank}(S) = 3\).

Degrees of freedom: \(n - \rho = 4 - 3 = 1\). There is one free flux parameter — specifying any single flux (e.g., the uptake flux \(v_1\)) determines all other fluxes at steady state.

Null space: at steady state, \(S\mathbf{v} = \mathbf{0}\) reads \[ v_1 - v_2 - v_3 = 0, \quad v_2 - v_4 = 0, \quad v_3 - v_4 = 0. \]

From rows 2 and 3: \(v_2 = v_4\) and \(v_3 = v_4\). From row 1: \(v_1 = v_2 + v_3 = 2v_4\). Setting \(v_4 = 1\) as the free parameter, the null space basis vector is \(\mathbf{e} = (2, 1, 1, 1)^T\).

Elementary flux modes: Since \(\dim(\ker S) = 1\), there is exactly one EFM (up to scaling): \(\mathbf{e} = (2, 1, 1, 1)^T\). Every feasible steady-state flux distribution is a positive multiple of this: \(\mathbf{v} = \lambda(2, 1, 1, 1)^T\) for \(\lambda \geq 0\). The unique EFM says: for every 2 units of \(A\) produced, 1 unit passes through \(r_2\) to \(B\), 1 unit passes through \(r_3\) to \(C\), and then 1 unit of the combined \(B + C\) pair is consumed by \(r_4\).

Biological interpretation: This network has a balanced branch point at \(A\): half the flux goes through \(r_2\) and half through \(r_3\), and the two branches must recombine at \(r_4\) stoichiometrically. There is no flexibility in the flux split — the stoichiometry of \(r_4\) (consuming one \(B\) and one \(C\) simultaneously) forces equal fluxes through \(r_2\) and \(r_3\). If a second reaction were added (e.g., \(B \to \emptyset\) independently), the null space dimension would increase to 2 and two EFMs would exist: one using \(r_4\) and one bypassing it.

Numerical check: choose, for example, \(\lambda = 2\), giving fluxes \[ v_1 = 4, \quad v_2 = 2, \quad v_3 = 2, \quad v_4 = 2\,\mu\text{M/min}. \]

Verify steady state: \(\dot{A} = v_1 - v_2 - v_3 = 4 - 2 - 2 = 0\). \(\dot{B} = v_2 - v_4 = 2 - 2 = 0\). \(\dot{C} = v_3 - v_4 = 2 - 2 = 0\). All metabolites are at steady state.
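The rank, degrees of freedom, and the single EFM can also be verified numerically. A minimal sketch (scipy.linalg.null_space returns an orthonormal basis, rescaled here so that \(v_4 = 1\)):

```python
import numpy as np
from scipy.linalg import null_space

# Stoichiometric matrix of the 4-reaction example (rows: A, B, C).
S = np.array([[1, -1, -1,  0],
              [0,  1,  0, -1],
              [0,  0,  1, -1]], dtype=float)

rank = np.linalg.matrix_rank(S)
print("rank(S) =", rank, " degrees of freedom =", S.shape[1] - rank)   # 3, 1

# One-dimensional null space: normalise so that v4 = 1.
e = null_space(S)[:, 0]
e = e / e[3]
print("EFM (normalised to v4 = 1):", np.round(e, 6))   # [2. 1. 1. 1.]

# Check that the specific flux distribution v = (4, 2, 2, 2) is at steady state.
v = np.array([4.0, 2.0, 2.0, 2.0])
print("S @ v =", S @ v)   # [0. 0. 0.]
```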

14.4 Connection to Flux Balance Analysis

Elementary flux modes provide the geometric foundation for flux balance analysis (FBA, Chapter 4). In FBA, one maximises a linear objective function \(c^T \mathbf{v}\) over the flux cone. By the theory of linear programming, an optimal solution can always be found at a vertex of the feasible polytope (the intersection of the flux cone with capacity constraints \(v_i \leq v_i^{max}\)). Each vertex of this polytope is a non-negative linear combination of at most \(\rho\) EFMs — specifically, it is a basic feasible solution of the LP, corresponding to a minimal set of active reactions.
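As an illustration of this connection, a toy FBA problem on the four-reaction network of Section 14.3 can be solved with scipy.optimize.linprog; the capacity bound \(v_1 \le 10\) is an assumption added purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximise the output flux v4 subject to S v = 0, v >= 0, and v1 <= 10.
S = np.array([[1, -1, -1,  0],
              [0,  1,  0, -1],
              [0,  0,  1, -1]], dtype=float)
c = np.array([0.0, 0.0, 0.0, -1.0])          # linprog minimises, so use -v4
bounds = [(0, 10), (0, None), (0, None), (0, None)]

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)   # [10.  5.  5.  5.]
print("maximal v4:", -res.fun)               # 5.0
```

The optimum \(\mathbf{v} = (10, 5, 5, 5)\) is simply the unique EFM scaled up to the capacity bound, as expected for a network whose flux cone is a single ray.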

Remark (Number of EFMs grows combinatorially): For small networks (fewer than 20 reactions), enumeration of all EFMs is tractable and provides a complete picture of network capabilities. For genome-scale networks (\(n \approx 1000\) reactions), the number of EFMs can reach \(10^{15}\) or more — far too many to enumerate explicitly. In this regime, FBA (optimisation over the flux cone) and sampling methods (e.g., Monte Carlo sampling of the polytope) replace explicit EFM enumeration. Nevertheless, the EFM concept remains conceptually central: it decomposes network function into irreducible metabolic pathways, provides a basis for comparing network structures across organisms, and identifies which combinations of reactions are essential for a given metabolic function.
Example — Degrees of freedom in the central metabolic network: The central metabolic network of E. coli (glycolysis + pentose phosphate pathway + TCA cycle + anaplerotic reactions) involves approximately 72 reactions and 54 internal metabolites. The stoichiometric rank is approximately 54 (assuming all internal metabolites are at steady state), giving degrees of freedom \(= 72 - 54 = 18\). This means that 18 independent flux measurements (e.g., by \(^{13}\)C isotope labelling and mass spectrometry) are required to fully characterise the steady-state flux distribution. In practice, several additional constraints (known reaction reversibilities, measured exchange fluxes for glucose uptake and CO\(_2\) production, measured growth rate) reduce the effective degrees of freedom to 3–5, making the flux distribution experimentally determinable. This calculation — null space dimension = number of measurements needed — is the starting point for designing \(^{13}\)C metabolic flux analysis experiments in biotechnology and clinical metabolomics.

Appendix A: Computational Tools

Python / Google Colab

Numerical integration of ODE systems uses scipy.integrate.solve_ivp (or the older odeint). Phase portraits are generated by plotting nullclines (solving \(f = 0\) and \(g = 0\) using numpy) and overlaying vector fields with matplotlib.pyplot.quiver. Stochastic simulations use custom Gillespie algorithm implementations or the gillespy2 library.
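For example, a minimal sketch of this workflow, using the Brusselator of Chapter 12 as a stand-in model (all variable names and plotting ranges are illustrative choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Kinetics: Brusselator with a = 2, b = 2 (stable spiral at (1, 1)).
a, b = 2.0, 2.0
f = lambda u, v: 1 - (b + 1) * u + a * u**2 * v
g = lambda u, v: b * u - a * u**2 * v

# Integrate a trajectory from an arbitrary initial condition.
sol = solve_ivp(lambda t, y: [f(*y), g(*y)], (0, 40), [0.5, 0.5], max_step=0.05)

# Vector field, nullclines, and trajectory in the (u, v) phase plane.
u, v = np.meshgrid(np.linspace(0.1, 2.5, 20), np.linspace(0.1, 2.5, 20))
plt.quiver(u, v, f(u, v), g(u, v), angles="xy")
uu = np.linspace(0.4, 2.5, 200)
plt.plot(uu, (b + 1) / (a * uu) - 1 / (a * uu**2), label="f = 0")   # u-nullcline
plt.plot(uu, b / (a * uu), label="g = 0")                            # v-nullcline
plt.plot(sol.y[0], sol.y[1], "k", label="trajectory")
plt.xlabel("u"); plt.ylabel("v"); plt.ylim(0, 2.5); plt.legend(); plt.show()
```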

MATLAB

MATLAB’s ode45 (explicit Runge–Kutta 4/5 adaptive step) is the workhorse solver for non-stiff systems; ode15s is preferred for stiff systems (e.g., fast-slow systems or systems with widely varying time scales). The pplane package provides an interactive phase-plane graphical interface.

XPPAUT

XPPAUT (X-Windows Phase Plane plus AUTO) is the standard tool for bifurcation analysis in mathematical biology. AUTO, embedded in XPPAUT, performs numerical continuation: it follows a branch of steady states or limit cycles as a parameter varies, detecting bifurcation points automatically. XPPAUT .ode files specify the model symbolically, and the software handles numerical continuation, stability computation, and phase portrait generation interactively.


Appendix B: Summary of Key Formulas

Stoichiometric Matrix Framework

For a network with \(m\) species and \(n\) reactions, the ODE system is \(\dot{\mathbf{X}} = S\mathbf{v}(\mathbf{X})\) where \(S\) is the \(m \times n\) stoichiometric matrix. The key structural quantities are:

  • Null space \(\ker(S)\): all flux vectors \(\mathbf{v}\) with \(S\mathbf{v} = \mathbf{0}\). At steady state, \(\mathbf{v}^*\) lies here.
  • Left null space \(\ker(S^T)\): all conservation law vectors \(\mathbf{c}\) with \(\mathbf{c}^T S = \mathbf{0}\). Dimension = \(m - \text{rank}(S)\).
  • Column space \(\text{im}(S)\): all achievable rates of change \(d\mathbf{X}/dt\).
  • Row space: the space of stoichiometrically distinct combinations of species.
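As a minimal numerical illustration (the two-reaction interconversion \(A \rightleftharpoons B\) below is a hypothetical toy network, not one from the notes), both null spaces can be computed with scipy.linalg.null_space:

```python
import numpy as np
from scipy.linalg import null_space

# Toy closed interconversion A <-> B (columns: r1 is A -> B, r2 is B -> A).
S = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])

# Conservation laws live in the left null space ker(S^T): c @ S = 0.
C = null_space(S.T)[:, 0]
print("dim ker(S^T) =", null_space(S.T).shape[1])          # 1 = m - rank(S)
print("conservation law c =", np.round(C / C.max(), 6))    # [1. 1.]  ->  [A] + [B] constant

# Steady-state fluxes live in the (right) null space ker(S): S @ v = 0.
e = null_space(S)[:, 0]
print("steady-state flux basis:", np.round(e / e.max(), 6))  # [1. 1.]  ->  v1 = v2
```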

Enzyme Kinetics Summary

  • Michaelis-Menten: \(v = V_{max}[S]/(K_M + [S])\); key parameters \(K_M\), \(V_{max}\)
  • Competitive inhibition: \(v = V_{max}[S]/(K_M(1+[I]/K_I) + [S])\); \(K_I\) increases the apparent \(K_M\)
  • Non-competitive inhibition: \(v = V_{max}[S]/((K_M+[S])(1+[I]/K_I))\); \(K_I\) decreases the apparent \(V_{max}\)
  • Hill equation: \(v = V_{max}[S]^n/(K_{half}^n + [S]^n)\); \(n\) is the cooperativity, sigmoidal for \(n>1\)

Stability Classification at a 2D Fixed Point

Given Jacobian \(J\) with \(\tau = \text{tr}(J)\) and \(\Delta = \det(J)\):

\[ \begin{array}{ll} \Delta < 0: & \text{saddle (unstable)} \\ \Delta > 0,\; \tau < 0,\; \tau^2 > 4\Delta: & \text{stable node} \\ \Delta > 0,\; \tau < 0,\; \tau^2 < 4\Delta: & \text{stable spiral} \\ \Delta > 0,\; \tau > 0,\; \tau^2 > 4\Delta: & \text{unstable node} \\ \Delta > 0,\; \tau > 0,\; \tau^2 < 4\Delta: & \text{unstable spiral} \\ \Delta > 0,\; \tau = 0: & \text{centre (linear analysis; need nonlinear correction)} \end{array} \]
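A small helper that applies this classification (the function name classify_fixed_point is an illustrative choice):

```python
import numpy as np

def classify_fixed_point(J, tol=1e-9):
    """Classify a 2D fixed point from the trace and determinant of its Jacobian."""
    tau, delta = np.trace(J), np.linalg.det(J)
    if delta < -tol:
        return "saddle (unstable)"
    if abs(tau) <= tol:
        return "centre (linear analysis; need nonlinear correction)"
    kind = "stable" if tau < 0 else "unstable"
    shape = "node" if tau**2 > 4 * delta else "spiral"
    return f"{kind} {shape}"

# Example: the Brusselator Jacobian at a = 2, b = 2 from Chapter 12.
print(classify_fixed_point(np.array([[1.0, 2.0], [-2.0, -2.0]])))  # stable spiral
```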

Metabolic Control Analysis

  • Summation theorem: \(\sum_i C^J_{E_i} = 1\) (flux control coefficients sum to 1).
  • Connectivity theorem: \(\sum_i C^J_{E_i}\,\varepsilon^{v_i}_S = 0\) for any intermediate metabolite \(S\).
  • Response coefficient: \(R^J_p = C^J_E \cdot \varepsilon^v_p\) (how flux responds to parameter \(p\) via enzyme \(E\)).

Goldbeter-Koshland Function

\[ GK(u,v,J,K) = \frac{2uJ}{v-u+vJ+uK+\sqrt{(v-u+vJ+uK)^2 - 4(v-u)uJ}}. \]

When \(J, K \to 0\) (enzyme saturation), \(GK \to 0\) for \(u < v\) and \(GK \to 1\) for \(u > v\) — a perfect switch.
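A direct implementation (the function name is an illustrative choice) showing the switch-like behaviour for small \(J, K\):

```python
import numpy as np

def goldbeter_koshland(u, v, J, K):
    """Goldbeter-Koshland function as defined above (zero-order ultrasensitivity)."""
    B = v - u + v * J + u * K
    return 2 * u * J / (B + np.sqrt(B**2 - 4 * (v - u) * u * J))

# For small J, K the output jumps from near 0 to near 1 as u crosses v = 1.
for u in [0.8, 0.95, 1.0, 1.05, 1.2]:
    print(u, round(goldbeter_koshland(u, 1.0, 0.01, 0.01), 3))
```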
