ECE 207: Signals and Systems

Oleg Michailovich

Estimated study time: 1 hr 47 min

Sources

  • A. V. Oppenheim and A. S. Willsky, Signals and Systems, 2nd ed. (Prentice Hall, 1997). The canonical reference for the subject; Chapters 1–11 span essentially this entire course.
  • B. P. Lathi and R. Green, Linear Systems and Signals, 3rd ed. (Oxford University Press, 2018). The primary problem-set reference for many ECE 207 offerings; highly computational, excellent examples.
  • S. Haykin and B. Van Veen, Signals and Systems, 2nd ed. (Wiley, 2003). Clear prose; strong on physical interpretation.
  • MIT OpenCourseWare 6.003: Signals and Systems (open access). Full lecture notes, problem sets, and exams freely available.
  • Stanford EE102A lecture notes, B. Hajek (open access). Particularly strong treatment of Laplace theory and ROC analysis.

Chapter 1: Introduction to Signals and Systems

1.1 What is a Signal?

A signal is a function of one or more independent variables that conveys information about the state or behaviour of a physical phenomenon. In most engineering contexts the independent variable is time, though signals can also depend on spatial coordinates, frequency, or abstract index sets. ECE 207 focuses primarily on signals of a single real variable — time — and develops separate but parallel theories for the cases where that variable is continuous and where it is discrete.

Formally, a continuous-time (CT) signal is a mapping \( x : \mathbb{R} \to \mathbb{C} \) (or \(\mathbb{R}\)), meaning a value is defined for every real instant \(t\). A discrete-time (DT) signal is a mapping \( x : \mathbb{Z} \to \mathbb{C} \), meaning values are defined only at integer indices \(n\). We write \(x(t)\) for CT and \(x[n]\) for DT, using parentheses and square brackets respectively to signal the distinction.

1.2 Classification of Signals

1.2.1 Even and Odd Decomposition

A CT signal \(x(t)\) is even if \(x(-t) = x(t)\) for all \(t\), and odd if \(x(-t) = -x(t)\) for all \(t\).

Every signal can be uniquely decomposed as \(x(t) = x_e(t) + x_o(t)\), where

\[ x_e(t) = \frac{x(t) + x(-t)}{2}, \qquad x_o(t) = \frac{x(t) - x(-t)}{2}. \]

This decomposition is orthogonal in the sense that \(\int_{-\infty}^{\infty} x_e(t)\,x_o(t)\,dt = 0\) whenever the individual integrals converge. The same definitions hold in discrete time with \(n\) replacing \(t\).
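A quick numerical check of the decomposition and its orthogonality (a sketch in NumPy; the grid, the example signal, and the Riemann-sum tolerance are illustrative choices, not part of the course material):

```python
import numpy as np

# Even/odd decomposition of a sampled signal on a grid symmetric about t = 0,
# so that reversing the sample array gives samples of x(-t).
t = np.linspace(-5, 5, 1001)
x = np.exp(-t) * (t >= 0)            # right-sided decaying exponential

x_rev = x[::-1]                      # samples of x(-t)
x_e = 0.5 * (x + x_rev)              # even part  (x(t) + x(-t))/2
x_o = 0.5 * (x - x_rev)              # odd part   (x(t) - x(-t))/2

dt = t[1] - t[0]
inner = np.sum(x_e * x_o) * dt       # Riemann sum for ∫ x_e(t) x_o(t) dt

print(np.allclose(x_e + x_o, x))     # True: the decomposition is exact
print(abs(inner) < 1e-12)            # True: the parts are orthogonal
```

The product \(x_e x_o\) is odd, so on a symmetric grid its Riemann sum cancels pairwise, which is why the orthogonality holds to machine precision here.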

1.2.2 Periodic Signals

A CT signal \(x(t)\) is periodic with period \(T > 0\) if \(x(t+T) = x(t)\) for all \(t \in \mathbb{R}\). The fundamental period \(T_0\) is the smallest such positive \(T\). The fundamental frequency is \(f_0 = 1/T_0\) in hertz, or \(\omega_0 = 2\pi/T_0\) in radians per second.

A DT signal \(x[n]\) is periodic with period \(N \in \mathbb{Z}_{>0}\) if \(x[n+N] = x[n]\) for all \(n \in \mathbb{Z}\). An important subtlety: the sinusoidal signal \(x[n] = \cos(\omega_0 n)\) is periodic in discrete time only if \(\omega_0 / (2\pi)\) is a rational number.
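The rationality test can be made concrete in code (a sketch; `dt_fundamental_period` is a hypothetical helper written for this note, and the tolerance and denominator bound are heuristic choices):

```python
import numpy as np
from fractions import Fraction

def dt_fundamental_period(omega0, max_den=10**4, tol=1e-9):
    """Fundamental period of cos(omega0 * n), or None if omega0/(2*pi)
    is not (numerically) rational, i.e. the sequence is aperiodic."""
    ratio = float(omega0) / (2 * np.pi)
    r = Fraction(ratio).limit_denominator(max_den)
    if abs(float(r) - ratio) > tol:
        return None                  # no small rational fit: treat as irrational
    return r.denominator             # omega0 = 2*pi*p/q in lowest terms => period q

# omega0 = 3*pi/4 = 2*pi*(3/8): periodic with fundamental period 8.
print(dt_fundamental_period(3 * np.pi / 4))
# omega0 = 1 rad/sample: 1/(2*pi) is irrational, so cos(n) is not periodic.
print(dt_fundamental_period(1.0))
```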

For continuous-time complex exponentials \(x_k(t) = e^{jk\omega_0 t}\), \(k \in \mathbb{Z}\), each function with \(k \neq 0\) has fundamental period \(T_k = T_0 / |k|\) (so it is also periodic with period \(T_0\)); the \(k = 0\) term is constant. These harmonically related exponentials are the building blocks of Fourier series.

The sum of two CT periodic signals with periods \(T_1\) and \(T_2\) is periodic if and only if \(T_1/T_2\) is rational, in which case the composite period is \(\text{lcm}(T_1, T_2)\) (the least common multiple in the rational-ratio sense).

1.2.3 Energy and Power Signals

The instantaneous power dissipated in a unit resistor by a signal \(x(t)\) is \(|x(t)|^2\). This motivates two classifications.

The total energy of a CT signal \(x(t)\) is \[ E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt. \]

The average power is

\[ P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt. \]

A signal is an energy signal if \(0 < E < \infty\) (and necessarily \(P = 0\)). It is a power signal if \(0 < P < \infty\) (and necessarily \(E = \infty\)).

Aperiodic signals that decay to zero as \(|t| \to \infty\) are typically energy signals. Periodic signals and signals like \(\cos(\omega_0 t)\) that persist forever are power signals. The constant signal \(x(t) = 1\) has \(P = 1\) and \(E = \infty\), so it is a power signal. The decaying exponential \(x(t) = e^{-at}u(t)\) for \(a > 0\) has \(E = 1/(2a)\), so it is an energy signal.

In discrete time, the energy is \(E = \sum_{n=-\infty}^{\infty} |x[n]|^2\) and average power is \(P = \lim_{N\to\infty} \frac{1}{2N+1}\sum_{n=-N}^{N}|x[n]|^2\).
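These classifications can be illustrated numerically (a sketch; truncated Riemann sums stand in for the infinite integrals, so agreement is approximate):

```python
import numpy as np

a = 2.0
dt = 1e-4
t = np.arange(0.0, 20.0, dt)             # e^{-at} is negligible beyond t = 20
E = np.sum(np.exp(-a * t) ** 2) * dt     # ∫ |e^{-at}u(t)|^2 dt
print(E)                                 # ≈ 0.25 = 1/(2a): an energy signal

T = 500.0
t2 = np.arange(-T, T, 1e-3)
P = np.mean(np.cos(5.0 * t2) ** 2)       # (1/2T) ∫_{-T}^{T} cos^2(5t) dt
print(P)                                 # ≈ 0.5: a power signal (E = ∞)
```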

1.2.4 Transformations of the Independent Variable

Three fundamental transformations act on the independent variable:

  • Time shift: \(y(t) = x(t - t_0)\). Positive \(t_0\) delays the signal; negative \(t_0\) advances it.
  • Time reversal: \(y(t) = x(-t)\). Reflects the signal about \(t = 0\).
  • Time scaling: \(y(t) = x(at)\). For \(|a| > 1\) the signal is compressed in time (stretched in frequency); for \(|a| < 1\) it is expanded in time.

These can be combined. The general linear transformation \(y(t) = x(at + b)\) represents a scaling by \(a\) followed by a shift of \(-b/a\). When performing such manipulations, it is safest to factor as \(x(a(t + b/a))\) and apply scaling first, then shifting.

1.3 Elementary Signals

1.3.1 The Unit Step Function

The unit step function \(u(t)\) is defined by \[ u(t) = \begin{cases} 1 & t \geq 0 \\ 0 & t < 0 \end{cases}. \]

The step function serves as a convenient switch: multiplying any signal \(x(t)\) by \(u(t)\) zeros out the signal for \(t < 0\), “turning it on” at \(t = 0\). The delayed step \(u(t - t_0)\) turns the signal on at \(t = t_0\).

In discrete time, the unit step sequence is defined by \(u[n] = 1\) for \(n \geq 0\) and \(u[n] = 0\) for \(n < 0\).

1.3.2 The Unit Impulse Function

The Dirac delta \(\delta(t)\) cannot be defined as an ordinary function; it is a distribution — a continuous linear functional on a suitable space of test functions.

The unit impulse (Dirac delta) \(\delta(t)\) is characterised by the following two properties: \[ \delta(t) = 0 \quad \text{for } t \neq 0, \]\[ \int_{-\infty}^{\infty} \delta(t)\, dt = 1. \]

One rigorous construction: consider a rectangular pulse of height \(1/\varepsilon\) on the interval \([0,\varepsilon)\). As \(\varepsilon \to 0^+\) the pulse converges (in the distributional sense) to \(\delta(t)\). Alternatively, \(\delta(t) = \lim_{\sigma \to 0} \frac{1}{\sigma\sqrt{2\pi}} e^{-t^2/(2\sigma^2)}\).

The most important property is the sifting property:

Sifting Property. For any continuous function \(\phi(t)\) and any \(t_0 \in \mathbb{R}\), \[ \int_{-\infty}^{\infty} \phi(t)\,\delta(t - t_0)\,dt = \phi(t_0). \]

This is essentially the defining property of \(\delta\) from the distributional viewpoint: \(\delta(t - t_0)\) is the evaluation functional that extracts the value of \(\phi\) at \(t_0\).
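Both the nascent-delta construction and the sifting property can be checked numerically (a sketch; the Gaussian family from above stands in for \(\delta\), and the grid and test function are arbitrary choices):

```python
import numpy as np

def sift(phi, t0, sigma, t):
    """Riemann sum for ∫ phi(t) g_sigma(t - t0) dt, g_sigma a unit-area Gaussian."""
    g = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(phi(t) * g) * (t[1] - t[0])

t = np.linspace(-10.0, 10.0, 200001)
for sigma in (1.0, 0.1, 0.01):
    print(sift(np.cos, 2.0, sigma, t))   # -> cos(2) ≈ -0.4161 as sigma -> 0
```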

Additional properties follow from this characterisation:

  • Scaling: \(\delta(at) = \frac{1}{|a|}\delta(t)\) for \(a \neq 0\).
  • Product: \(x(t)\,\delta(t - t_0) = x(t_0)\,\delta(t - t_0)\).
  • Relationship to step: \(u(t) = \int_{-\infty}^{t} \delta(\tau)\,d\tau\), or equivalently \(\delta(t) = \frac{d}{dt}u(t)\) in the distributional sense.

The unit impulse in discrete time \(\delta[n]\) is far simpler — it is just the Kronecker delta:

\[ \delta[n] = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases}. \]

The sifting property in discrete time is \(\sum_{n=-\infty}^{\infty} x[n]\,\delta[n - n_0] = x[n_0]\).

1.3.3 Higher-Order Impulse Functions

The doublet \(\delta'(t) = d\delta/dt\) satisfies

\[ \int_{-\infty}^{\infty} \phi(t)\,\delta'(t - t_0)\,dt = -\phi'(t_0). \]

More generally, the \(k\)-th derivative \(\delta^{(k)}(t)\) satisfies \(\int \phi(t)\,\delta^{(k)}(t)\,dt = (-1)^k \phi^{(k)}(0)\). These higher-order impulses arise when differentiating signals with jump discontinuities, for example when applying the Laplace differentiation property to systems with initial conditions.

1.4 Systems and their Properties

A system is a mapping \(\mathcal{H}\) from an input signal to an output signal: \(y = \mathcal{H}\{x\}\).

1.4.1 Linearity

A system \(\mathcal{H}\) is linear if it satisfies the superposition principle: for all inputs \(x_1, x_2\) and scalars \(\alpha, \beta\), \[ \mathcal{H}\{\alpha x_1 + \beta x_2\} = \alpha\,\mathcal{H}\{x_1\} + \beta\,\mathcal{H}\{x_2\}. \]

This splits into two conditions: additivity (\(\alpha = \beta = 1\)) and homogeneity (\(x_2 = 0\)). Both must hold simultaneously. The system \(y(t) = 2x(t)\) is linear, but \(y(t) = x(t) + 3\) is not (it fails additivity — the constant offset 3 violates superposition).

1.4.2 Time-Invariance

A system \(\mathcal{H}\) is time-invariant if a time shift in the input produces an identical time shift in the output: if \(y(t) = \mathcal{H}\{x(t)\}\), then \[ \mathcal{H}\{x(t - t_0)\} = y(t - t_0) \quad \text{for all } t_0. \]

A system whose rule explicitly depends on \(t\) (e.g., \(y(t) = t\,x(t)\)) is time-varying. A system like \(y(t) = x(t-2)\) (a fixed delay) is time-invariant.

1.4.3 LTI Systems

A system that is both linear and time-invariant is called an LTI system.

LTI systems are the workhorse of signal processing. Their power stems from the fact that they are completely characterised by a single function — the impulse response \(h(t)\) — defined as the output when the input is \(\delta(t)\):

\[ h(t) = \mathcal{H}\{\delta(t)\}. \]

Given \(h(t)\), the output for any input \(x(t)\) is computed via convolution, as developed in Chapter 2.

1.4.4 Causality

A system is causal if the output at any time \(t_0\) depends only on the input at times \(t \leq t_0\). Equivalently, for a causal LTI system, \(h(t) = 0\) for all \(t < 0\).

Physical systems are generally causal because the present cannot depend on the future. Anti-causal and non-causal systems arise in off-line processing (e.g., applying a filter to a stored audio file).

1.4.5 Stability

A system is BIBO stable (bounded-input, bounded-output) if every bounded input produces a bounded output: if \(|x(t)| \leq M_x < \infty\) for all \(t\), then \(|y(t)| \leq M_y < \infty\) for all \(t\).

BIBO Stability Criterion for LTI Systems. An LTI system with impulse response \(h(t)\) is BIBO stable if and only if \[ \int_{-\infty}^{\infty} |h(t)|\,dt < \infty, \]

i.e., if and only if \(h \in L^1(\mathbb{R})\).

(Sufficiency.) Suppose \(\int|h(\tau)|d\tau = M < \infty\) and \(|x(t)| \leq B\) for all \(t\). Then \[ |y(t)| = \left|\int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau\right| \leq \int_{-\infty}^{\infty}|h(\tau)|\,|x(t-\tau)|\,d\tau \leq B\,M. \]

So \(y\) is bounded.

(Necessity.) If \(\int|h(\tau)|d\tau = \infty\), one can construct a bounded input (e.g., \(x(t) = \text{sgn}(h(-t))\)) for which the integral defining \(y(0)\) diverges. Hence the system is not BIBO stable.

1.4.6 Memory and Invertibility

A system is memoryless if the output at time \(t\) depends only on the input at that same instant \(t\). For LTI systems this means \(h(t) = c\,\delta(t)\) for some constant \(c\). A system is invertible if there exists another system \(\mathcal{H}^{-1}\) such that \(\mathcal{H}^{-1}\{\mathcal{H}\{x\}\} = x\) for all inputs.


Chapter 2: Time-Domain Analysis of Continuous-Time LTI Systems

2.1 The Convolution Integral

The central result of LTI system theory in the time domain is that the output is the convolution of the input with the impulse response. The derivation proceeds from first principles using linearity and time-invariance.

Any signal \(x(t)\) can be represented as a superposition of impulses via the sifting property:

\[ x(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t - \tau)\,d\tau. \]

Applying the linear system \(\mathcal{H}\) to both sides, and using linearity (the integral is a sum), time-invariance (the response to \(\delta(t-\tau)\) is \(h(t-\tau)\)):

\[ y(t) = \mathcal{H}\{x(t)\} = \int_{-\infty}^{\infty} x(\tau)\,\mathcal{H}\{\delta(t-\tau)\}\,d\tau = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau. \]
The convolution of two CT signals \(f\) and \(g\) is \[ (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\,g(t - \tau)\,d\tau. \]

For an LTI system: \(y(t) = (x * h)(t)\).

2.1.1 Properties of Convolution

Convolution satisfies the following algebraic identities:

1. Commutativity: \(f * g = g * f\).

2. Associativity: \(f * (g * h) = (f * g) * h\).

3. Distributivity over addition: \(f * (g + h) = (f * g) + (f * h)\).

4. Shift: If \(y = f * g\), then \(f(t - t_1) * g(t - t_2) = y(t - t_1 - t_2)\).

5. Identity: \(f * \delta = f\) (the impulse is the identity for convolution).

6. Duration: If \(f\) has finite support \([a_1, b_1]\) and \(g\) has finite support \([a_2, b_2]\), then \(f * g\) has support contained in \([a_1 + a_2,\, b_1 + b_2]\).

Commutativity is verified by the substitution \(\sigma = t - \tau\):

\[ (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\,g(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(t-\sigma)\,g(\sigma)\,d\sigma = (g * f)(t). \]

Associativity and distributivity follow from Fubini’s theorem (interchangeability of the order of integration), which applies whenever the signals are sufficiently well-behaved (e.g., both in \(L^1\) or one in \(L^1\) and the other bounded).

2.1.2 Graphical Convolution

Computing convolution graphically requires:

  1. Choose one signal (say \(h\)) and reverse it: form \(h(-\tau)\).
  2. Slide the reversed signal by \(t\): form \(h(t - \tau)\) as a function of \(\tau\).
  3. For each value of \(t\), multiply the two signals pointwise and integrate the area under the product.

The key is tracking which portions of the two signals overlap as \(t\) varies, identifying the breakpoints in \(t\) where the overlap configuration changes. In each configuration the integral is an elementary calculation.

Example: Rectangular pulse convolved with itself. Let \(f(t) = g(t) = \text{rect}(t/\tau_0) = u(t + \tau_0/2) - u(t - \tau_0/2)\), a pulse of height 1 and width \(\tau_0\). The overlap between \(f(\sigma)\) and \(g(t-\sigma)\) grows linearly from \(t = -\tau_0\) to \(t = 0\), then decreases linearly from \(t = 0\) to \(t = \tau_0\), giving a triangular pulse of height \(\tau_0\) and base width \(2\tau_0\): \[ (f * g)(t) = \begin{cases} \tau_0 + t & -\tau_0 \leq t \leq 0 \\ \tau_0 - t & 0 < t \leq \tau_0 \\ 0 & \text{otherwise} \end{cases}. \]
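The same example can be reproduced with a discretised convolution (a sketch: `np.convolve` scaled by the grid step approximates the convolution integral):

```python
import numpy as np

tau0 = 1.0
dt = 1e-3
n = int(tau0 / dt)
f = np.ones(n)                      # sampled rect of width tau0, height 1
y = np.convolve(f, f) * dt          # ≈ samples of (f * f)(t), step dt

print(y.max())                      # peak ≈ tau0 (triangle height)
print(len(y) * dt)                  # support ≈ 2*tau0 (triangle base)
```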

2.2 Systems Described by Linear Differential Equations

The most important class of CT LTI systems encountered in practice are those described by linear ordinary differential equations with constant coefficients:

\[ \sum_{k=0}^{N} a_k \frac{d^k y}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x}{dt^k}, \]

where \(a_N \neq 0\) and \(a_0, \ldots, a_N, b_0, \ldots, b_M\) are real constants.

2.2.1 Zero-Input Response

The zero-input response \(y_{zi}(t)\) is the solution to the homogeneous equation (with \(x(t) = 0\)) subject to initial conditions. It arises from energy stored in the system (e.g., charge on capacitors, current through inductors). The homogeneous solution has the form

\[ y_{zi}(t) = \sum_{k=1}^{N} C_k e^{\lambda_k t}, \]

where \(\lambda_1, \ldots, \lambda_N\) are the roots of the characteristic polynomial

\[ p(\lambda) = a_N \lambda^N + a_{N-1}\lambda^{N-1} + \cdots + a_1 \lambda + a_0 = 0, \]

and the constants \(C_k\) are determined by the initial conditions. For repeated roots \(\lambda_k\) of multiplicity \(m\), the corresponding terms are \((C_{k,0} + C_{k,1}t + \cdots + C_{k,m-1}t^{m-1})e^{\lambda_k t}\).
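A small numerical illustration (the second-order system \(y'' + 3y' + 2y = 0\) with \(y(0) = 1\), \(y'(0) = 0\) is an example chosen for this note, not taken from the text):

```python
import numpy as np

# Characteristic roots of lambda^2 + 3*lambda + 2 = 0 are -1 and -2.
lam = np.roots([1.0, 3.0, 2.0])

# Initial conditions give the linear system
#   sum_k C_k        = y(0)
#   sum_k lam_k C_k  = y'(0)
A = np.vstack([np.ones_like(lam), lam])
C = np.linalg.solve(A, np.array([1.0, 0.0]))

t = np.linspace(0.0, 5.0, 501)
y_zi = (C[:, None] * np.exp(np.outer(lam, t))).sum(axis=0)
print(y_zi[0])                            # y(0) = 1
print((C * lam).sum())                    # y'(0) = 0
```

Here the solver returns \(y_{zi}(t) = 2e^{-t} - e^{-2t}\), which indeed satisfies both initial conditions and decays to zero.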

2.2.2 Zero-State Response

The zero-state response \(y_{zs}(t)\) is the response to the input \(x(t)\) with all initial conditions set to zero. For a causal LTI system this is the convolution

\[ y_{zs}(t) = \int_0^{\infty} h(\tau)\,x(t - \tau)\,d\tau = (h * x)(t), \]

where \(h(t)\) is the causal impulse response. The complete response is \(y(t) = y_{zi}(t) + y_{zs}(t)\).

2.2.3 Computing the Impulse Response

For the system \(\sum a_k y^{(k)} = \sum b_k x^{(k)}\), the impulse response \(h(t)\) solves the differential equation with \(x(t) = \delta(t)\) and zero initial conditions for \(t < 0\). For a strictly proper system (\(M < N\)) with distinct characteristic roots:

\[ h(t) = \left(\sum_{k=1}^{N} A_k e^{\lambda_k t}\right) u(t) \]

where the \(A_k\) are found by matching coefficients (equivalently, by applying the initial conditions induced by the delta input). When \(M \geq N\), additional terms involving \(\delta(t)\) and its derivatives appear in \(h(t)\).
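As an illustration (the system \(H(s) = 1/((s+1)(s+2))\) is an example chosen for this note), SciPy can compute the impulse response directly, and it matches the analytic partial-fraction answer \(h(t) = (e^{-t} - e^{-2t})u(t)\):

```python
import numpy as np
from scipy import signal

# Impulse response of the strictly proper system y'' + 3y' + 2y = x,
# i.e. H(s) = 1/(s^2 + 3s + 2) = 1/(s+1) - 1/(s+2).
t = np.linspace(0.0, 10.0, 1001)
t, h = signal.impulse(([1.0], [1.0, 3.0, 2.0]), T=t)

h_exact = np.exp(-t) - np.exp(-2.0 * t)
print(np.max(np.abs(h - h_exact)))        # small numerical error
```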

2.3 Stability in the Time Domain

For a system described by the differential equation above, the zero-input response decays to zero as \(t \to \infty\) (the system is asymptotically stable) if and only if all characteristic roots \(\lambda_k\) satisfy \(\text{Re}(\lambda_k) < 0\).

The system is BIBO stable (as established by the \(L^1\) criterion on \(h\)) if and only if all poles of its transfer function lie in the open left half of the complex plane — precisely the condition for asymptotic stability when the system has no pole-zero cancellations.


Chapter 3: Frequency-Domain Analysis of Continuous-Time LTI Systems

3.1 The Laplace Transform

The Laplace transform provides a systematic algebraic method for solving differential equations with initial conditions and for analysing LTI systems in the frequency domain.

The bilateral (two-sided) Laplace transform of a signal \(x(t)\) is \[ X(s) = \mathcal{L}\{x(t)\} = \int_{-\infty}^{\infty} x(t)\,e^{-st}\,dt, \]

where \(s = \sigma + j\omega \in \mathbb{C}\). The transform converges for those values of \(s\) in the region of convergence (ROC).

The unilateral (one-sided) Laplace transform integrates from \(0^-\):

\[ X(s) = \int_{0^-}^{\infty} x(t)\,e^{-st}\,dt. \]

The unilateral transform is the appropriate tool when dealing with initial-condition problems, because it automatically incorporates \(y(0^-), y'(0^-)\), etc., into the transformed equation.

3.1.1 Region of Convergence

The ROC is determined by the decay properties of \(x(t)\). For signals of the form \(e^{at}u(t)\) (right-sided), the ROC is the half-plane \(\text{Re}(s) > \text{Re}(a)\). For \(-e^{at}u(-t)\) (left-sided), the ROC is \(\text{Re}(s) < \text{Re}(a)\). For two-sided signals the ROC is a vertical strip \(\sigma_1 < \text{Re}(s) < \sigma_2\), possibly empty.

Key ROC rules:

  • The ROC is always a connected region of the form \(\{\text{Re}(s) > \sigma_-\}\), \(\{\text{Re}(s) < \sigma_+\}\), or a vertical strip.
  • The ROC contains no poles of \(X(s)\).
  • If \(x(t)\) is right-sided and \(X(s)\) converges for some \(s_0\), then it converges for all \(\text{Re}(s) > \text{Re}(s_0)\).

3.1.2 Properties of the Laplace Transform

Let \(X(s) = \mathcal{L}\{x(t)\}\) with ROC \(R\), and \(Y(s) = \mathcal{L}\{y(t)\}\) with ROC \(R'\).

Laplace Transform Properties. \[ \mathcal{L}\{ax(t) + by(t)\} = aX(s) + bY(s), \quad \text{ROC} \supseteq R \cap R'. \]\[ \mathcal{L}\{x(t - t_0)\} = e^{-st_0}X(s), \quad \text{ROC} = R. \]\[ \mathcal{L}\{e^{s_0 t}x(t)\} = X(s - s_0), \quad \text{ROC} = R + \text{Re}(s_0). \]\[ \mathcal{L}\{x(at)\} = \frac{1}{|a|}X\!\left(\frac{s}{a}\right), \quad \text{ROC scaled by } |a|. \]\[ \mathcal{L}\left\{\frac{d^n x}{dt^n}\right\} = s^n X(s) - s^{n-1}x(0^-) - s^{n-2}x'(0^-) - \cdots - x^{(n-1)}(0^-). \]

For the bilateral transform (or with zero initial conditions) this simplifies to \(s^n X(s)\).

\[ \mathcal{L}\left\{\int_{-\infty}^{t} x(\tau)\,d\tau\right\} = \frac{1}{s}X(s). \]\[ \mathcal{L}\{(-t)^n x(t)\} = \frac{d^n X}{ds^n}. \]\[ \mathcal{L}\{x * y\} = X(s)\,Y(s), \quad \text{ROC} \supseteq R \cap R'. \]\[ \lim_{t \to 0^+} x(t) = \lim_{s \to \infty} sX(s). \]\[ \lim_{t \to \infty} x(t) = \lim_{s \to 0} sX(s). \]

The last two are the initial- and final-value theorems. The initial-value theorem requires \(x(t)\) to contain no impulses at \(t = 0\); the final-value theorem is valid only when all poles of \(sX(s)\) lie in the open left half-plane, so that the time-domain limit actually exists.

3.1.3 Common Laplace Transform Pairs

  \(x(t)\) | \(X(s)\) | ROC
  \(\delta(t)\) | \(1\) | all \(s\)
  \(u(t)\) | \(1/s\) | \(\text{Re}(s) > 0\)
  \(t^n u(t)\) | \(n!/s^{n+1}\) | \(\text{Re}(s) > 0\)
  \(e^{-at}u(t)\) | \(1/(s+a)\) | \(\text{Re}(s) > -a\)
  \(t^n e^{-at}u(t)\) | \(n!/(s+a)^{n+1}\) | \(\text{Re}(s) > -a\)
  \(\cos(\omega_0 t)u(t)\) | \(s/(s^2+\omega_0^2)\) | \(\text{Re}(s) > 0\)
  \(\sin(\omega_0 t)u(t)\) | \(\omega_0/(s^2+\omega_0^2)\) | \(\text{Re}(s) > 0\)
  \(e^{-at}\cos(\omega_0 t)u(t)\) | \((s+a)/((s+a)^2+\omega_0^2)\) | \(\text{Re}(s) > -a\)
  \(e^{-at}\sin(\omega_0 t)u(t)\) | \(\omega_0/((s+a)^2+\omega_0^2)\) | \(\text{Re}(s) > -a\)

3.2 Inverse Laplace Transform via Partial Fractions

Most rational Laplace transforms encountered in practice take the form

\[ X(s) = \frac{B(s)}{A(s)} = \frac{b_M s^M + \cdots + b_1 s + b_0}{a_N s^N + \cdots + a_1 s + a_0}. \]

If \(M \geq N\) (improper), perform polynomial long division first to write \(X(s) = Q(s) + R(s)/A(s)\) with \(\deg R < N\). Then expand \(R(s)/A(s)\) in partial fractions.

3.2.1 Partial Fraction Expansion

Factor \(A(s) = a_N (s - \lambda_1)(s - \lambda_2)\cdots(s - \lambda_N)\). The partial fraction expansion depends on whether the poles are simple or repeated.

Simple poles:

\[ \frac{R(s)}{A(s)} = \sum_{k=1}^{N} \frac{C_k}{s - \lambda_k}. \]

The Heaviside cover-up method gives the residues:

\[ C_k = \lim_{s \to \lambda_k} (s - \lambda_k)\frac{R(s)}{A(s)} = \frac{R(\lambda_k)}{A'(\lambda_k)}. \]

Repeated pole of multiplicity \(m\) at \(\lambda_k\):

\[ \frac{C_{k,1}}{s - \lambda_k} + \frac{C_{k,2}}{(s-\lambda_k)^2} + \cdots + \frac{C_{k,m}}{(s-\lambda_k)^m}, \]

where

\[ C_{k,j} = \frac{1}{(m-j)!}\lim_{s \to \lambda_k} \frac{d^{m-j}}{ds^{m-j}}\left[(s-\lambda_k)^m \frac{R(s)}{A(s)}\right]. \]

Once the partial fraction expansion is found, each term inverts via the table: \(\mathcal{L}^{-1}\{1/(s-\lambda)^k\} = \frac{t^{k-1}}{(k-1)!}e^{\lambda t}u(t)\) for a right-sided inverse.
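The Heaviside computation can be automated with `scipy.signal.residue` (the rational function below is an illustrative example, not one from the text):

```python
import numpy as np
from scipy import signal

# Partial fractions of X(s) = (s + 3)/(s^2 + 3s + 2):
#   X(s) = 2/(s + 1) - 1/(s + 2),
# so the right-sided inverse is x(t) = (2 e^{-t} - e^{-2t}) u(t).
r, p, k = signal.residue([1.0, 3.0], [1.0, 3.0, 2.0])
pairs = sorted((float(pk.real), float(rk.real)) for pk, rk in zip(p, r))
print(pairs)                           # poles -2, -1 with residues -1, 2

t = np.linspace(0.0, 5.0, 101)
x = sum(rk * np.exp(pk * t) for rk, pk in zip(r, p)).real
print(x[0])                            # x(0+) = 1, matching lim_{s->inf} sX(s)
```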

3.3 The Transfer Function

The transfer function \(H(s)\) of a causal LTI system described by the differential equation \(\sum_{k=0}^N a_k y^{(k)} = \sum_{k=0}^M b_k x^{(k)}\) is the Laplace transform of the impulse response: \[ H(s) = \frac{Y(s)}{X(s)}\bigg|_{\text{zero initial conditions}} = \frac{\sum_{k=0}^{M} b_k s^k}{\sum_{k=0}^{N} a_k s^k}. \]

Taking the Laplace transform of both sides of the differential equation (with zero initial conditions) and applying the differentiation property gives \(A(s)Y(s) = B(s)X(s)\), so \(H(s) = B(s)/A(s)\) directly. The poles of \(H(s)\) are the roots of \(A(s)\); the zeros are the roots of \(B(s)\).

3.3.1 Solving the Complete Response via Laplace

Taking the Laplace transform of the differential equation with non-zero initial conditions:

\[ A(s)Y(s) - \underbrace{P(s)}_{\text{initial conditions}} = B(s)X(s), \]

where \(P(s)\) collects all terms involving \(y(0^-), y'(0^-), \ldots\). Solving:

\[ Y(s) = \underbrace{\frac{P(s)}{A(s)}}_{Y_{zi}(s)} + \underbrace{\frac{B(s)}{A(s)}X(s)}_{Y_{zs}(s)}. \]

The zero-input response \(y_{zi}(t)\) and zero-state response \(y_{zs}(t)\) are obtained by inverting \(Y_{zi}(s)\) and \(Y_{zs}(s) = H(s)X(s)\) separately.

3.4 Interconnection of Systems

Cascaded (series) LTI systems with transfer functions \(H_1(s)\) and \(H_2(s)\) combine as \(H(s) = H_1(s)H_2(s)\). Parallel systems combine as \(H(s) = H_1(s) + H_2(s)\). A negative-feedback closed-loop system gives

\[ H_{\text{cl}}(s) = \frac{H_1(s)}{1 + H_1(s)H_2(s)}, \]

where \(H_1(s)\) is the forward path and \(H_2(s)\) is the feedback path (the unity-feedback case is \(H_2(s) = 1\)). The series and parallel identities follow directly from the convolution theorem; the feedback formula follows from solving the loop equation \(Y(s) = H_1(s)\left[X(s) - H_2(s)Y(s)\right]\) for \(Y(s)\).

3.5 Frequency Response

For a BIBO-stable LTI system with transfer function \(H(s)\), setting \(s = j\omega\) (i.e., evaluating on the imaginary axis, which lies within the ROC) gives the frequency response:

\[ H(j\omega) = |H(j\omega)|\,e^{j\angle H(j\omega)}. \]

If the input is \(x(t) = e^{j\omega_0 t}\) (a complex sinusoid), the steady-state output of the stable LTI system is

\[ y(t) = H(j\omega_0)\,e^{j\omega_0 t}. \]

For the real-valued input \(x(t) = A\cos(\omega_0 t + \phi)\), the output is

\[ y(t) = A\,|H(j\omega_0)|\,\cos(\omega_0 t + \phi + \angle H(j\omega_0)). \]

This is the sinusoidal steady-state result: the amplitude is scaled by \(|H(j\omega_0)|\) and the phase is shifted by \(\angle H(j\omega_0)\). The frequency response fully characterises how the system processes each frequency component.

Example: First-order lowpass filter. Consider \(H(s) = \frac{1}{s + a}\) with \(a > 0\). Then \(H(j\omega) = \frac{1}{j\omega + a}\), so \[ |H(j\omega)| = \frac{1}{\sqrt{\omega^2 + a^2}}, \qquad \angle H(j\omega) = -\arctan\!\left(\frac{\omega}{a}\right). \]

At \(\omega = 0\), \(|H| = 1/a\). As \(\omega \to \infty\), \(|H| \to 0\). The 3 dB bandwidth (where \(|H| = 1/(a\sqrt{2})\)) occurs at \(\omega = a\).
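The same numbers can be read off numerically with `scipy.signal.freqs` (a sketch; the value \(a = 10\) is an arbitrary choice):

```python
import numpy as np
from scipy import signal

# Frequency response of the first-order lowpass H(s) = 1/(s + a).
a = 10.0
w = np.array([0.0, a, 100.0 * a])        # DC, the 3 dB point, far stopband
w, H = signal.freqs([1.0], [1.0, a], worN=w)

print(np.abs(H) * a)        # ≈ [1, 1/sqrt(2), ~0.01]: -3 dB exactly at w = a
print(np.angle(H))          # ≈ [0, -pi/4, approaching -pi/2]
```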

3.5.1 BIBO Stability and Poles

A causal LTI system is BIBO stable if and only if all poles of \(H(s)\) lie in the open left half-plane \(\{\text{Re}(s) < 0\}\). This is because the partial-fraction inverse of a pole at \(\lambda\) contributes \(e^{\lambda t}u(t)\), which is in \(L^1\) only when \(\text{Re}(\lambda) < 0\).


Chapter 4: Time and Frequency Domain Analysis of Discrete-Time LTI Systems

4.1 Discrete-Time Convolution

The discrete-time counterpart of the convolution integral is the convolution sum. For a DT LTI system with impulse response \(h[n]\):

\[ y[n] = (x * h)[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n - k]. \]

All the properties — commutativity, associativity, distributivity, shift — carry over from continuous time, with sums replacing integrals and the Kronecker delta replacing the Dirac delta.

For finite-support signals (FIR filters): if \(x[n]\) has \(L_x\) nonzero samples and \(h[n]\) has \(L_h\) nonzero samples, then \(y[n] = (x * h)[n]\) has at most \(L_x + L_h - 1\) nonzero samples.

Example: Moving average filter. The 3-point moving average \(h[n] = \frac{1}{3}(\delta[n] + \delta[n-1] + \delta[n-2])\) computes \[ y[n] = \frac{1}{3}(x[n] + x[n-1] + x[n-2]). \]

Its frequency response is \(H(e^{j\omega}) = \frac{1}{3}(1 + e^{-j\omega} + e^{-j2\omega}) = \frac{1}{3}e^{-j\omega}\frac{\sin(3\omega/2)}{\sin(\omega/2)}\), a lowpass characteristic.
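The closed-form frequency response can be verified against a direct DTFT evaluation (a sketch using `scipy.signal.freqz`; \(\omega = 0\) is excluded because the closed form is 0/0 there, although its limit is 1):

```python
import numpy as np
from scipy import signal

# Verify H(e^{jw}) = (1/3) e^{-jw} sin(3w/2)/sin(w/2) for the 3-point average.
b = np.ones(3) / 3.0
w = np.linspace(0.01, np.pi - 0.01, 500)
w, H = signal.freqz(b, [1.0], worN=w)

H_formula = (1 / 3) * np.exp(-1j * w) * np.sin(1.5 * w) / np.sin(0.5 * w)
print(np.max(np.abs(H - H_formula)))     # agreement to machine precision
```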

4.2 Linear Difference Equations

DT LTI systems are described by linear constant-coefficient difference equations:

\[ \sum_{k=0}^{N} a_k\,y[n - k] = \sum_{k=0}^{M} b_k\,x[n - k]. \]

Causally, given \(y[n-1], y[n-2], \ldots\) and the input, we can solve for \(y[n]\) recursively:

\[ y[n] = \frac{1}{a_0}\left(\sum_{k=0}^{M} b_k x[n-k] - \sum_{k=1}^{N} a_k y[n-k]\right). \]
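The recursion can be run directly (a sketch; the first-order system below is an illustrative choice, and `scipy.signal.lfilter` implements exactly this recursion):

```python
import numpy as np
from scipy import signal

# Recursive solution of y[n] - 0.5 y[n-1] = x[n]  (a0 = 1, a1 = -0.5, b0 = 1)
# for a delta input, so the output is the impulse response h[n] = 0.5^n u[n].
x = np.zeros(10); x[0] = 1.0
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = x[n] + 0.5 * (y[n - 1] if n > 0 else 0.0)

print(y[:4])                                    # 1, 0.5, 0.25, 0.125
print(np.allclose(y, signal.lfilter([1.0], [1.0, -0.5], x)))   # True
```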

4.2.1 Zero-Input and Zero-State Responses

The structure mirrors the CT case. The characteristic roots \(\gamma_1, \ldots, \gamma_N\) satisfy

\[ a_0 \gamma^N + a_1 \gamma^{N-1} + \cdots + a_N = 0. \]

The zero-input response takes the form \(y_{zi}[n] = \sum_{k=1}^N C_k \gamma_k^n\), and the zero-state response is \(y_{zs}[n] = (h * x)[n]\).

For a causal DT LTI system with distinct characteristic roots, each mode of the impulse response grows or decays geometrically as \(\gamma_k^n\). Asymptotic stability requires \(|\gamma_k| < 1\) for all \(k\).

4.3 The Z-Transform

The z-transform is the discrete-time analogue of the Laplace transform.

The bilateral z-transform of a sequence \(x[n]\) is \[ X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n]\,z^{-n}, \quad z \in \mathbb{C}. \]

The transform converges for those \(z\) in the region of convergence (ROC), which is always an annulus \(r_1 < |z| < r_2\) (possibly \(r_1 = 0\) or \(r_2 = \infty\)).

The relationship between the z-transform and the DTFT is \(X(e^{j\omega}) = X(z)\big|_{z = e^{j\omega}}\); the DTFT corresponds to evaluating the z-transform on the unit circle \(|z| = 1\), provided the ROC contains the unit circle.

4.3.1 Common Z-Transform Pairs

  \(x[n]\) | \(X(z)\) | ROC
  \(\delta[n]\) | \(1\) | all \(z\)
  \(u[n]\) | \(z/(z-1)\) | \(\lvert z\rvert > 1\)
  \(a^n u[n]\) | \(z/(z-a)\) | \(\lvert z\rvert > \lvert a\rvert\)
  \(-a^n u[-n-1]\) | \(z/(z-a)\) | \(\lvert z\rvert < \lvert a\rvert\)
  \(n\,a^n u[n]\) | \(az/(z-a)^2\) | \(\lvert z\rvert > \lvert a\rvert\)
  \(r^n\cos(\omega_0 n)u[n]\) | \(z(z-r\cos\omega_0)/(z^2-2r\cos\omega_0 z + r^2)\) | \(\lvert z\rvert>r\)
  \(r^n\sin(\omega_0 n)u[n]\) | \(rz\sin\omega_0/(z^2-2r\cos\omega_0 z + r^2)\) | \(\lvert z\rvert>r\)

4.3.2 Properties of the Z-Transform

Z-Transform Properties. \[ \mathcal{Z}\{\alpha x[n] + \beta y[n]\} = \alpha X(z) + \beta Y(z). \]\[ \mathcal{Z}\{x[n - n_0]\} = z^{-n_0} X(z). \]\[ \mathcal{Z}\{x[-n]\} = X(1/z), \quad \text{ROC inverted}. \]\[ \mathcal{Z}\{n\,x[n]\} = -z\frac{dX}{dz}. \]\[ \mathcal{Z}\{a^n x[n]\} = X(z/a), \quad \text{ROC scaled by } |a|. \]\[ \mathcal{Z}\{x * y\} = X(z)\,Y(z). \]

Initial value theorem (causal \(x\)): \(x[0] = \lim_{z \to \infty} X(z)\).

Final value theorem (causal \(x\), stable): \(\lim_{n\to\infty} x[n] = \lim_{z \to 1}(z-1)X(z)\).

4.3.3 Inverse Z-Transform via Partial Fractions

If \(X(z) = B(z)/A(z)\) is rational, expand \(X(z)/z\) (or directly \(X(z)\) in some conventions) in partial fractions:

\[ X(z) = \sum_{k} \frac{C_k z}{z - \gamma_k}, \]

then invert using the table (with the ROC specifying whether the inverse is right-sided or left-sided). For the right-sided inverse, poles inside the ROC contribute causal terms \(C_k \gamma_k^n u[n]\).
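A worked example (the transform below, with poles at 0.5 and 0.25, is an illustrative choice for this note; `scipy.signal.residuez` performs the expansion in powers of \(z^{-1}\)):

```python
import numpy as np
from scipy import signal

# Inverse z-transform of X(z) = 1/(1 - 0.75 z^{-1} + 0.125 z^{-2}):
#   X(z) = 2/(1 - 0.5 z^{-1}) - 1/(1 - 0.25 z^{-1}),
# so the right-sided inverse is x[n] = (2 (0.5)^n - (0.25)^n) u[n].
r, p, k = signal.residuez([1.0], [1.0, -0.75, 0.125])
n = np.arange(8)
x = sum(rk * pk ** n for rk, pk in zip(r, p)).real
print(x[:3])                                   # 1, 0.75, 0.4375

# Cross-check: x[n] is the impulse response of the matching difference equation.
d = np.zeros(8); d[0] = 1.0
print(np.allclose(x, signal.lfilter([1.0], [1.0, -0.75, 0.125], d)))   # True
```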

4.4 The Discrete-Time Transfer Function and Frequency Response

For a causal DT LTI system described by the difference equation, taking the z-transform (with zero initial conditions):

\[ H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^M b_k z^{-k}}{\sum_{k=0}^N a_k z^{-k}} = z^{N-M}\frac{\sum_{k=0}^M b_k z^{M-k}}{\sum_{k=0}^N a_k z^{N-k}}. \]

BIBO stability in discrete time: a causal DT LTI system is BIBO stable if and only if all poles of \(H(z)\) lie strictly inside the unit circle \(|z| < 1\).

The frequency response is obtained by evaluating on the unit circle: \(H(e^{j\omega}) = H(z)|_{z=e^{j\omega}}\), which gives the DTFT of the impulse response \(h[n]\).


Chapter 5: Continuous-Time Periodic Signals — Fourier Series

5.1 Motivation and Statement

Fourier’s fundamental insight is that any “reasonable” periodic signal can be represented as a sum (possibly infinite) of sinusoids at harmonically related frequencies. The mathematical content of this claim involves both the definition of the Fourier series coefficients and conditions under which the series converges to the signal.

5.2 Complex Exponential Fourier Series

Let \(x(t)\) be a CT signal periodic with fundamental period \(T_0\) and fundamental angular frequency \(\omega_0 = 2\pi/T_0\).

The complex exponential Fourier series of \(x(t)\) is \[ x(t) = \sum_{k=-\infty}^{\infty} c_k\,e^{jk\omega_0 t}, \]

where the Fourier coefficients are

\[ c_k = \frac{1}{T_0}\int_{T_0} x(t)\,e^{-jk\omega_0 t}\,dt. \]

The integral is taken over any interval of length \(T_0\).

The formula for \(c_k\) follows from the orthogonality of the complex exponentials:

\[ \frac{1}{T_0}\int_{T_0} e^{jk\omega_0 t}\,e^{-j\ell\omega_0 t}\,dt = \delta[k - \ell]. \]

Multiplying both sides of the series by \(e^{-j\ell\omega_0 t}\) and integrating over one period kills all terms except \(k = \ell\), yielding the coefficient formula.
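The analysis integral can be evaluated numerically (a sketch: a Riemann sum over one period of a 50% duty-cycle square wave, an example chosen for this note):

```python
import numpy as np

# Square wave: T0 = 2, x = 1 on [0,1), x = 0 on [1,2).
# Analytically c_0 = 1/2, c_k = 1/(j k pi) for odd k, and 0 for even k != 0.
T0 = 2.0
w0 = 2.0 * np.pi / T0
t = np.arange(0.0, T0, 1e-4)
x = (t < 1.0).astype(float)

def c(k):
    # (1/T0) ∫_{T0} x(t) e^{-jk w0 t} dt as a Riemann sum over one period
    return np.mean(x * np.exp(-1j * k * w0 * t))

print(c(0).real)            # 0.5 (the DC value)
print(abs(c(1)))            # ≈ 1/pi ≈ 0.3183
print(abs(c(2)))            # ≈ 0: even harmonics vanish
```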

5.3 Trigonometric Fourier Series

For real \(x(t)\), we have \(c_{-k} = c_k^*\), and the series can be reorganised into a trigonometric form:

\[ x(t) = a_0 + \sum_{k=1}^{\infty}\left[a_k\cos(k\omega_0 t) + b_k\sin(k\omega_0 t)\right], \]

where \(a_0 = c_0\), \(a_k = 2\,\text{Re}(c_k)\), \(b_k = -2\,\text{Im}(c_k)\). Equivalently,

\[ x(t) = a_0 + \sum_{k=1}^{\infty} A_k\cos(k\omega_0 t + \phi_k), \]

with \(A_k = 2|c_k|\) and \(\phi_k = \angle c_k\).

5.4 Dirichlet Conditions

Dirichlet Conditions. If \(x(t)\) is periodic with period \(T_0\) and satisfies:

  1. \(x(t)\) is absolutely integrable over one period: \(\int_{T_0}|x(t)|\,dt < \infty\),
  2. \(x(t)\) has at most finitely many maxima and minima in one period,
  3. \(x(t)\) has at most finitely many finite discontinuities in one period,

then the Fourier series converges to \(x(t)\) at every point of continuity, and to the average \(\frac{1}{2}[x(t^-) + x(t^+)]\) at every point of discontinuity.

The Dirichlet conditions are satisfied by virtually all signals of engineering interest (piecewise smooth signals). The Gibbs phenomenon — an approximately 9% overshoot near jump discontinuities that does not vanish as more harmonics are included — is a notable feature of Fourier series convergence at discontinuities.

5.5 Properties of Fourier Series

Denote \(x(t) \xleftrightarrow{\text{FS}} c_k\) and \(y(t) \xleftrightarrow{\text{FS}} d_k\), both with period \(T_0\).

Fourier Series Properties.

Linearity: \(\alpha x + \beta y \xleftrightarrow{\text{FS}} \alpha c_k + \beta d_k\).

Time shift: \(x(t - t_0) \xleftrightarrow{\text{FS}} e^{-jk\omega_0 t_0}c_k\). The magnitude spectrum is unchanged; only the phase is shifted.

Time reversal: \(x(-t) \xleftrightarrow{\text{FS}} c_{-k}\). For real even signals, \(c_k\) is real and even.

Time scaling: for \(a > 0\), \(x(at)\) has period \(T_0/a\); the coefficients \(c_k\) are unchanged, but harmonic \(k\) now sits at the rescaled frequency \(ka\omega_0\).

Differentiation: \(\frac{d}{dt}x(t) \xleftrightarrow{\text{FS}} jk\omega_0\,c_k\). Differentiation amplifies high frequencies.

Integration: \(\int_{-\infty}^{t} x(\tau)d\tau \xleftrightarrow{\text{FS}} \frac{c_k}{jk\omega_0}\) for \(k \neq 0\) (and \(c_0 = 0\) required for periodicity).

Multiplication (periodic convolution): \(x(t)y(t) \xleftrightarrow{\text{FS}} \sum_{\ell} c_\ell d_{k-\ell}\).

Conjugate symmetry (real \(x\)): \(c_{-k} = c_k^*\), so \(|c_{-k}| = |c_k|\) and \(\angle c_{-k} = -\angle c_k\).

5.5.1 Parseval’s Theorem for Periodic Signals

Parseval's Theorem (Fourier Series). The average power of a periodic signal equals the sum of squared magnitudes of its Fourier coefficients: \[ P = \frac{1}{T_0}\int_{T_0}|x(t)|^2\,dt = \sum_{k=-\infty}^{\infty}|c_k|^2. \]
Proof. Substituting the synthesis series for \(x(t)\) and its conjugate:

\[ \frac{1}{T_0}\int_{T_0}|x(t)|^2\,dt = \frac{1}{T_0}\int_{T_0} x(t)\,\overline{x(t)}\,dt = \frac{1}{T_0}\int_{T_0} \left(\sum_k c_k e^{jk\omega_0 t}\right)\overline{\left(\sum_\ell c_\ell e^{j\ell\omega_0 t}\right)}dt. \]

Expanding and using orthogonality \(\frac{1}{T_0}\int_{T_0}e^{j(k-\ell)\omega_0 t}dt = \delta[k-\ell]\):

\[ = \sum_{k,\ell} c_k \overline{c_\ell}\,\delta[k-\ell] = \sum_k |c_k|^2. \]

Parseval’s theorem has an energy-conservation interpretation: the total power is distributed across harmonics, each contributing \(|c_k|^2\).
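Parseval's relation can be confirmed for the 0/1 square wave of the example in Section 5.6, whose time-domain average power is \(1/2\). The sketch below sums \(|c_k|^2\) with \(c_0 = 1/2\) and \(c_k = 1/(jk\pi)\) for odd \(k\) (both signs of \(k\)), truncating the tail.

```python
import math

# Time-domain side: the 0/1 square wave spends half the period at 1,
# so its average power is exactly 1/2.
power_time = 0.5

# Spectral side: |c_0|^2 plus the odd harmonics at positive and negative k.
K = 200001
power_freq = 0.25 + 2 * sum((1 / (k * math.pi)) ** 2 for k in range(1, K, 2))
```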

5.6 Frequency Spectrum and System Response to Periodic Inputs

The frequency spectrum of a periodic signal consists of a magnitude spectrum \(|c_k|\) vs. \(k\) and a phase spectrum \(\angle c_k\) vs. \(k\), both defined at the discrete frequencies \(k\omega_0\). These are sometimes called line spectra because they are nonzero only at isolated frequencies.

For a stable LTI system with frequency response \(H(j\omega)\), the response to the periodic input

\[ x(t) = \sum_k c_k e^{jk\omega_0 t} \]

is

\[ y(t) = \sum_k c_k H(jk\omega_0)\,e^{jk\omega_0 t}. \]

The Fourier coefficients of the output are \(d_k = c_k H(jk\omega_0)\). The system modifies the magnitude and phase of each harmonic independently.

Example: Square wave through an RC lowpass. The square wave with amplitude 1 and period \(T_0\) has \(c_k = \frac{1}{jk\pi}\) for odd \(k\) and \(c_k = 0\) for even \(k \neq 0\), with \(c_0 = 1/2\). An RC lowpass with \(H(j\omega) = 1/(1 + j\omega RC)\) attenuates each harmonic by \(|H(jk\omega_0)|\), rounding off the corners of the output waveform. For \(RC \gg T_0\), only the DC component passes; for \(RC \ll T_0\), most harmonics pass and the output remains nearly square.
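A short sketch of this example (the 1 kHz fundamental and \(RC = 1\) ms are arbitrary illustrative choices): each output coefficient is just the corresponding input coefficient scaled by \(H(jk\omega_0)\), with the DC term passing unchanged.

```python
import math

T0 = 1e-3                       # 1 kHz square wave
w0 = 2 * math.pi / T0
RC = 1e-3                       # RC comparable to T0: strong harmonic attenuation

def H(w):                       # first-order RC lowpass frequency response
    return 1 / (1 + 1j * w * RC)

def c(k):                       # 0/1 square-wave Fourier coefficients
    return 0.5 if k == 0 else (1 / (1j * k * math.pi) if k % 2 else 0.0)

# Output coefficients d_k = c_k * H(j k w0).
d = {k: c(k) * H(k * w0) for k in range(0, 10)}
```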

Chapter 6: Continuous-Time Non-Periodic Signals — The Fourier Transform

6.1 From Fourier Series to Fourier Transform

The Fourier transform extends the Fourier series to non-periodic (aperiodic) signals by allowing the period \(T_0 \to \infty\). As the period grows, the fundamental frequency \(\omega_0 = 2\pi/T_0 \to 0\), the harmonic frequencies become dense in \(\mathbb{R}\), and the discrete sum over \(k\) becomes a continuous integral over \(\omega\).

Formally, take a periodic extension of a finite-duration signal and send \(T_0 \to \infty\). Writing \(c_k = \frac{1}{T_0}X(k\omega_0)\), where \(X(\omega)\) denotes the transform (defined below) of a single period, and letting \(\omega_0 \to d\omega\), the synthesis equation

\[ x(t) = \sum_k c_k e^{jk\omega_0 t} = \sum_k \frac{X(k\omega_0)}{T_0}e^{jk\omega_0 t} \]

becomes the Fourier integral:

\[ x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\,e^{j\omega t}\,d\omega. \]
The Fourier transform of a signal \(x(t)\) is \[ X(\omega) = \mathcal{F}\{x(t)\} = \int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt. \]

The inverse Fourier transform is

\[ x(t) = \mathcal{F}^{-1}\{X(\omega)\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\,e^{j\omega t}\,d\omega. \]

We write \(x(t) \xleftrightarrow{\mathcal{F}} X(\omega)\).

6.1.1 Conditions for Existence

A sufficient condition for the existence of the Fourier transform is \(x \in L^1(\mathbb{R})\): \(\int|x(t)|dt < \infty\). This ensures \(X(\omega)\) is bounded and uniformly continuous. The Fourier transform also exists in the \(L^2\) sense for square-integrable signals via the Plancherel theorem, and extends to distributions (allowing transforms of \(\delta(t)\), \(u(t)\), periodic signals, etc.).

6.2 Common Fourier Transform Pairs

Selected transform pairs, \(x(t) \xleftrightarrow{\mathcal{F}} X(\omega)\):

  • \(\delta(t) \xleftrightarrow{\mathcal{F}} 1\)
  • \(1 \xleftrightarrow{\mathcal{F}} 2\pi\delta(\omega)\)
  • \(e^{-at}u(t) \xleftrightarrow{\mathcal{F}} \dfrac{1}{a+j\omega}\) (\(a>0\))
  • \(e^{-a\lvert t\rvert} \xleftrightarrow{\mathcal{F}} \dfrac{2a}{a^2+\omega^2}\) (\(a>0\))
  • \(\text{rect}(t/\tau) \xleftrightarrow{\mathcal{F}} \tau\,\dfrac{\sin(\omega\tau/2)}{\omega\tau/2}\)
  • \(\dfrac{\sin(\omega_c t)}{\pi t} \xleftrightarrow{\mathcal{F}} \text{rect}\!\left(\dfrac{\omega}{2\omega_c}\right)\)
  • \(e^{j\omega_0 t} \xleftrightarrow{\mathcal{F}} 2\pi\delta(\omega - \omega_0)\)
  • \(\cos(\omega_0 t) \xleftrightarrow{\mathcal{F}} \pi[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)]\)
  • \(\sin(\omega_0 t) \xleftrightarrow{\mathcal{F}} \dfrac{\pi}{j}[\delta(\omega-\omega_0)-\delta(\omega+\omega_0)]\)
  • \(u(t) \xleftrightarrow{\mathcal{F}} \pi\delta(\omega) + \dfrac{1}{j\omega}\)
  • \(\delta'(t) \xleftrightarrow{\mathcal{F}} j\omega\)
  • \(t^n e^{-at}u(t) \xleftrightarrow{\mathcal{F}} \dfrac{n!}{(a+j\omega)^{n+1}}\) (\(a>0\))
  • \(e^{-at}\cos(\omega_0 t)u(t) \xleftrightarrow{\mathcal{F}} \dfrac{a+j\omega}{(a+j\omega)^2+\omega_0^2}\) (\(a>0\))

The key duality pair — \(\text{rect}\) and \(\text{sinc}\) — is fundamental: an ideal lowpass filter in the frequency domain has a sinc impulse response in the time domain.
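One table entry can be spot-checked numerically. The sketch below approximates the Fourier integral of \(e^{-at}u(t)\) by a truncated midpoint sum (the truncation length and grid are arbitrary choices) and compares it with \(1/(a+j\omega)\).

```python
import cmath

a = 2.0

def X_numeric(w, T=40.0, num=100000):
    """Midpoint-rule approximation of the Fourier integral of e^{-a t} u(t)
    truncated to [0, T]; the neglected tail is of order e^{-a T}."""
    dt = T / num
    return sum(cmath.exp(-(a + 1j * w) * (n + 0.5) * dt) for n in range(num)) * dt

w = 3.0
Xn = X_numeric(w)
X_closed = 1 / (a + 1j * w)
```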

6.3 Properties of the Fourier Transform

Fourier Transform Properties.

Linearity: \[ \mathcal{F}\{\alpha x(t) + \beta y(t)\} = \alpha X(\omega) + \beta Y(\omega). \]

Time shift: \[ \mathcal{F}\{x(t - t_0)\} = e^{-j\omega t_0}X(\omega). \]

A time shift does not change the magnitude spectrum but introduces a linear phase.

Frequency shift (modulation): \[ \mathcal{F}\{e^{j\omega_0 t}x(t)\} = X(\omega - \omega_0). \]

Multiplication by a complex exponential shifts the spectrum by \(\omega_0\).

Time scaling: \[ \mathcal{F}\{x(at)\} = \frac{1}{|a|}X\!\left(\frac{\omega}{a}\right). \]

Compression in time produces expansion in frequency, and vice versa: the time-bandwidth product is constant.

Time reversal: \[ \mathcal{F}\{x(-t)\} = X(-\omega) = X^*(\omega) \quad \text{(if } x \text{ is real)}. \]

Differentiation: \[ \mathcal{F}\left\{\frac{d^n x}{dt^n}\right\} = (j\omega)^n X(\omega). \]

Integration: \[ \mathcal{F}\left\{\int_{-\infty}^{t} x(\tau)d\tau\right\} = \frac{1}{j\omega}X(\omega) + \pi X(0)\delta(\omega). \]

Duality: \[ \text{If } x(t) \xleftrightarrow{\mathcal{F}} X(\omega), \text{ then } X(t) \xleftrightarrow{\mathcal{F}} 2\pi\,x(-\omega). \]

This symmetry means every transform pair generates a second pair by swapping the time and frequency roles.

Convolution: \[ \mathcal{F}\{x * y\} = X(\omega)\,Y(\omega). \]

Multiplication: \[ \mathcal{F}\{x(t)\,y(t)\} = \frac{1}{2\pi}(X * Y)(\omega). \]

Multiplication in one domain corresponds to convolution (scaled by \(1/(2\pi)\)) in the other.

Conjugate symmetry (real \(x\)): \(X(-\omega) = X^*(\omega)\), so \(|X(-\omega)| = |X(\omega)|\) (even magnitude) and \(\angle X(-\omega) = -\angle X(\omega)\) (odd phase).

6.3.1 The Convolution Theorem and its Implications

The convolution theorem \(\mathcal{F}\{x * h\} = X(\omega)H(\omega)\) is the cornerstone of frequency-domain analysis. It says that filtering — which in the time domain is a convolution integral — becomes simple pointwise multiplication in the frequency domain:

\[ Y(\omega) = H(\omega)\,X(\omega). \]

This enables efficient filter design: specify \(H(\omega)\) to pass or attenuate desired frequency bands, then find \(h(t) = \mathcal{F}^{-1}\{H(\omega)\}\) to implement the filter.
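The theorem has an exact finite analogue: convolution via the DFT. The sketch below (a naive \(O(N^2)\) DFT, zero-padding both sequences to the full linear-convolution length so circular and linear convolution coincide) checks that multiplying DFTs reproduces direct convolution.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 3.0, 0.0, -1.0]
h = [0.5, 0.5, 0.25]

# Direct linear convolution.
y_direct = [sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))
            for n in range(len(x) + len(h) - 1)]

# Frequency-domain route: zero-pad to the output length, multiply the DFTs.
N = len(x) + len(h) - 1
X = dft(x + [0.0] * (N - len(x)))
H = dft(h + [0.0] * (N - len(h)))
y_freq = [v.real for v in idft([X[k] * H[k] for k in range(N)])]
```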

6.4 Parseval’s Theorem for Aperiodic Signals

Parseval's Theorem (Fourier Transform). The total energy of a signal \(x(t)\) satisfies: \[ E = \int_{-\infty}^{\infty}|x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega. \]

The function \(\mathcal{S}(\omega) = |X(\omega)|^2/(2\pi)\) is the energy spectral density: the energy per unit bandwidth at frequency \(\omega\). Parseval’s theorem states that total energy is preserved whether computed in the time or frequency domain.

Proof. Writing \(\overline{x(t)}\) via the conjugated inverse transform:

\[ \int_{-\infty}^{\infty}|x(t)|^2\,dt = \int_{-\infty}^{\infty} x(t)\overline{x(t)}\,dt = \int_{-\infty}^{\infty} x(t)\left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{X(\omega)}\,e^{-j\omega t}\,d\omega\right)dt. \]

Switching the order of integration (justified by \(x, X \in L^2\)):

\[ = \frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{X(\omega)}\left(\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt\right)d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{X(\omega)}\,X(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega. \]

6.5 Ideal Filters and Distortionless Systems

6.5.1 Distortionless Transmission

A system transmits a signal without distortion if the output is a scaled and possibly delayed version of the input: \(y(t) = K\,x(t - t_d)\). In the frequency domain this requires

\[ H(\omega) = K\,e^{-j\omega t_d}, \]

which means constant magnitude \(|H(\omega)| = K\) and linear phase \(\angle H(\omega) = -\omega t_d\). Deviations from constant magnitude cause amplitude distortion; deviations from linear phase cause phase distortion. A system with linear phase introduces equal delay across all frequencies, preserving the shape of the signal envelope.

6.5.2 Ideal Lowpass Filter

The ideal lowpass filter (LPF) with cutoff frequency \(\omega_c\) and passband gain \(K\) has frequency response \[ H_{LP}(\omega) = K\,e^{-j\omega t_d}\,\text{rect}\!\left(\frac{\omega}{2\omega_c}\right) = \begin{cases} K\,e^{-j\omega t_d} & |\omega| \leq \omega_c \\ 0 & |\omega| > \omega_c \end{cases}. \]

The impulse response of the ideal LPF is

\[ h_{LP}(t) = \mathcal{F}^{-1}\{H_{LP}(\omega)\} = \frac{K\omega_c}{\pi}\,\text{sinc}\!\left(\frac{\omega_c(t - t_d)}{\pi}\right) = \frac{K\sin(\omega_c(t - t_d))}{\pi(t - t_d)}, \]

where \(\text{sinc}(u) = \sin(\pi u)/(\pi u)\). Note that \(h_{LP}(t) \neq 0\) for \(t < 0\) no matter how large the delay \(t_d\) is chosen, meaning the ideal LPF is non-causal and hence physically unrealisable. This is the fundamental trade-off in filter design: ideal frequency selectivity requires a non-causal impulse response.

Analogous definitions apply for the ideal highpass (\(H_{HP}(\omega) = K e^{-j\omega t_d} - H_{LP}(\omega)\), i.e., allpass minus lowpass), bandpass, and bandstop filters.

6.6 Amplitude Modulation and Communication

Amplitude modulation (AM) illustrates the multiplication property of the Fourier transform. A message signal \(m(t)\) with spectrum \(M(\omega)\) concentrated near DC is multiplied by a carrier \(\cos(\omega_c t)\):

\[ x_{AM}(t) = m(t)\cos(\omega_c t). \]

By the modulation property:

\[ X_{AM}(\omega) = \frac{1}{2}[M(\omega - \omega_c) + M(\omega + \omega_c)]. \]

The baseband message spectrum is shifted up to the carrier frequency \(\omega_c\). Demodulation (recovery of \(m(t)\)) is accomplished by multiplying again by \(\cos(\omega_c t)\) and applying a lowpass filter to eliminate the double-frequency term.
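The whole chain can be simulated. In the sketch below the tone frequencies are chosen to sit exactly on DFT bins (500 Hz sampling, 5 Hz message, 100 Hz carrier, all arbitrary illustrative values), so a brick-wall DFT mask plays the role of the lowpass filter and recovery is essentially exact.

```python
import cmath, math

fs, fm, fc = 500, 5.0, 100.0          # sample rate, message and carrier (Hz)
N = fs                                 # one second: every tone lands on a DFT bin
t = [n / fs for n in range(N)]

m = [math.cos(2 * math.pi * fm * tn) for tn in t]                      # message
x_am = [m[n] * math.cos(2 * math.pi * fc * t[n]) for n in range(N)]    # AM signal
v = [x_am[n] * math.cos(2 * math.pi * fc * t[n]) for n in range(N)]    # demod product

# v = m/2 + (m/2)cos(2*2*pi*fc*t); remove the 2fc image with a DFT lowpass mask.
V = [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]
keep = 50                              # keep only |f| < 50 Hz
Vf = [V[k] if (k < keep or k > N - keep) else 0.0 for k in range(N)]
m_hat = [2 * (sum(Vf[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N).real
         for n in range(N)]            # factor 2 undoes the 1/2 from demodulation
```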

6.7 The Discrete-Time Fourier Transform (DTFT)

The DTFT is the natural frequency-domain tool for DT signals.

The DTFT of a sequence \(x[n]\) is \[ X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]\,e^{-j\omega n}, \quad \omega \in \mathbb{R}. \]

The inverse DTFT is

\[ x[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j\omega})\,e^{j\omega n}\,d\omega. \]

Key features of the DTFT:

  • \(X(e^{j\omega})\) is always periodic in \(\omega\) with period \(2\pi\), reflecting the periodicity of complex exponentials in discrete time.
  • The DTFT exists (converges absolutely) whenever \(\sum_n |x[n]| < \infty\).
  • Setting \(z = e^{j\omega}\) recovers the z-transform evaluated on the unit circle.

All the properties (linearity, shift, convolution, etc.) carry over from the Fourier transform with sums replacing integrals and \(2\pi\)-periodic spectra.
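A sketch checking the geometric-series pair \(a^n u[n] \leftrightarrow 1/(1 - ae^{-j\omega})\) and the \(2\pi\)-periodicity, with the truncation length chosen so the neglected tail \(a^N\) is negligible.

```python
import cmath

a = 0.8

def dtft_geom(omega, N=2000):
    """DTFT of a^n u[n], truncated at N terms (tail ~ a^N is negligible here)."""
    return sum((a ** n) * cmath.exp(-1j * omega * n) for n in range(N))

w = 1.3
closed = 1 / (1 - a * cmath.exp(-1j * w))       # closed-form DTFT
shifted = dtft_geom(w + 2 * cmath.pi)           # 2*pi-periodicity check
```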


Chapter 7: Sampling and Reconstruction

7.1 The Sampling Theorem

Sampling is the process of converting a CT signal \(x(t)\) into a DT sequence \(x[n] = x(nT_s)\) by evaluating the signal at uniform instants spaced \(T_s\) seconds apart (the sampling period); the reciprocal \(f_s = 1/T_s\) is the sampling frequency (or sampling rate).

Ideal sampling can be modelled as multiplication by an impulse train:

\[ x_s(t) = x(t)\sum_{n=-\infty}^{\infty}\delta(t - nT_s) = \sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s). \]

Taking the Fourier transform and using the fact that the Fourier transform of the impulse train \(\sum_n \delta(t - nT_s)\) is \(\frac{2\pi}{T_s}\sum_k \delta(\omega - k\omega_s)\) with \(\omega_s = 2\pi/T_s\):

\[ X_s(\omega) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} X(\omega - k\omega_s). \]

The spectrum \(X_s(\omega)\) consists of shifted copies of \(X(\omega)\) centred at multiples of \(\omega_s\).

Nyquist-Shannon Sampling Theorem. Let \(x(t)\) be a bandlimited signal with \(X(\omega) = 0\) for \(|\omega| > \omega_{\max}\). Then \(x(t)\) is completely determined by its samples \(x[n] = x(nT_s)\) provided \[ \omega_s = \frac{2\pi}{T_s} > 2\omega_{\max}, \]

i.e., the sampling frequency exceeds twice the highest frequency in the signal. The critical rate \(f_s = 2f_{\max}\) is called the Nyquist rate. (Strict inequality matters when \(X(\omega)\) contains impulses at \(\pm\omega_{\max}\): \(x(t) = \sin(\omega_{\max}t)\) sampled at exactly \(2f_{\max}\) yields all-zero samples.)

(Sketch.) If \(\omega_s > 2\omega_{\max}\), the shifted copies \(X(\omega - k\omega_s)\) do not overlap (adjacent copies are separated by a guard gap of width \(\omega_s - 2\omega_{\max} > 0\)). Applying an ideal LPF with cutoff \(\omega_c = \omega_s/2\) and gain \(T_s\) to \(X_s(\omega)\) recovers \(X(\omega)\) exactly: \[ X(\omega) = X_s(\omega)\,H_{LP}(\omega) \quad \text{for } \omega_s > 2\omega_{\max}. \]

If \(\omega_s < 2\omega_{\max}\), the shifted copies overlap, and it is impossible to recover \(X(\omega)\) from \(X_s(\omega)\) alone.

7.2 Aliasing

When the sampling rate is below the Nyquist rate, the shifted copies of \(X(\omega)\) in \(X_s(\omega)\) overlap — a phenomenon called aliasing. High-frequency components “fold” into the baseband and cannot be distinguished from legitimate low-frequency components.

Example: Aliased sinusoid. Let \(x(t) = \cos(2\pi f_0 t)\) with \(f_0 = 900\) Hz, sampled at \(f_s = 1000\) Hz (Nyquist rate \(= 1800\) Hz, so sampling is below Nyquist). The sampled sequence is \[ x[n] = \cos(2\pi \cdot 900 \cdot n/1000) = \cos(1.8\pi n). \]

Since \(\cos(1.8\pi n) = \cos((2\pi - 1.8\pi)n) = \cos(0.2\pi n)\), the sequence is identical to samples of a 100 Hz sinusoid. The 900 Hz component appears as 100 Hz — it has been aliased.
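The identity can be confirmed in a few lines: the two sample sequences agree to machine precision.

```python
import math

fs = 1000.0                     # sampling rate (Hz), below the 1800 Hz Nyquist rate
n_range = range(64)

# Samples of the 900 Hz tone and of a 100 Hz tone at the same rate.
x_900 = [math.cos(2 * math.pi * 900 * n / fs) for n in n_range]
x_100 = [math.cos(2 * math.pi * 100 * n / fs) for n in n_range]

max_diff = max(abs(a - b) for a, b in zip(x_900, x_100))
```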

Anti-aliasing filters (lowpass filters applied before sampling) prevent aliasing by removing signal energy above \(\omega_s/2\) before the sampling operation.

7.3 Ideal Reconstruction

Given samples \(x[n] = x(nT_s)\) of a bandlimited signal sampled above the Nyquist rate, the original CT signal is reconstructed by passing the impulse train \(x_s(t) = \sum_n x[n]\delta(t - nT_s)\) through an ideal lowpass filter with gain \(T_s\) and cutoff \(\omega_s/2\):

\[ x(t) = \sum_{n=-\infty}^{\infty} x[n]\,h_{LP}(t - nT_s) = \sum_{n=-\infty}^{\infty} x[n]\,\text{sinc}\!\left(\frac{t - nT_s}{T_s}\right), \]

where \(\text{sinc}(u) = \sin(\pi u)/(\pi u)\). This sinc interpolation formula (Whittaker-Shannon reconstruction) expresses each sample as a weighted sinc function centred at its sampling instant, and the sum reconstructs the continuous waveform exactly.

In practice, ideal sinc interpolation is replaced by approximate methods (sample-and-hold, linear interpolation, spline interpolation) that introduce some distortion, compensated by a post-reconstruction equalisation filter.
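Sinc interpolation can be sketched directly from the formula. Below, a 100 Hz cosine sampled at 1 kHz is reconstructed at an off-grid instant from a finite window of samples; truncating the slowly decaying sinc tails leaves a small residual error, so only loose agreement is asserted.

```python
import math

def sinc(u):                          # normalised sinc, as in the text
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

fs, f0 = 1000.0, 100.0                # sampling rate and tone frequency (Hz)
Ts = 1 / fs
half = 2000                           # use samples n = -2000 .. 2000

def x(t):
    return math.cos(2 * math.pi * f0 * t)

def reconstruct(t):
    """Truncated Whittaker-Shannon interpolation from samples x(n Ts)."""
    return sum(x(n * Ts) * sinc((t - n * Ts) / Ts) for n in range(-half, half + 1))

t_test = 0.2503                       # off-grid instant well inside the window
err = abs(reconstruct(t_test) - x(t_test))
```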


Chapter 8: The Laplace Transform — Deeper Analysis

8.1 Bilateral vs. Unilateral Transform

The bilateral Laplace transform \(\mathcal{L}_B\{x\}(s) = \int_{-\infty}^{\infty}x(t)e^{-st}dt\) handles two-sided signals naturally. The ROC for a rational bilateral transform is always a vertical strip. Two different signals can share the same algebraic form for \(X(s)\) but have different ROCs, giving different time-domain inverses. For example, \(X(s) = 1/(s+1)\) with ROC \(\text{Re}(s) > -1\) corresponds to \(e^{-t}u(t)\), while with ROC \(\text{Re}(s) < -1\) it corresponds to \(-e^{-t}u(-t)\).

The unilateral transform \(\mathcal{L}_U\{x\}(s) = \int_{0^-}^{\infty}x(t)e^{-st}dt\) is used for causal signals and initial-condition problems. Here the ROC is always a right half-plane. The differentiation property for the unilateral transform automatically incorporates initial conditions:

\[ \mathcal{L}_U\left\{\frac{d^n x}{dt^n}\right\} = s^n X(s) - s^{n-1}x(0^-) - s^{n-2}x'(0^-) - \cdots - x^{(n-1)}(0^-). \]

This is what makes the unilateral transform powerful for solving differential equations with non-zero initial conditions: applying \(\mathcal{L}_U\) to both sides of the ODE immediately converts it to an algebraic equation in \(s\).

8.2 Poles, Zeros, and System Behaviour

The transfer function

\[ H(s) = K\frac{\prod_{k=1}^{M}(s - z_k)}{\prod_{k=1}^{N}(s - p_k)} \]

is characterised by its zeros \(z_1, \ldots, z_M\) and poles \(p_1, \ldots, p_N\). The pole-zero plot (a plot in the complex \(s\)-plane marking poles with \(\times\) and zeros with \(\circ\)) gives immediate qualitative insight:

  • Poles in the open left half-plane (\(\text{Re}(p_k) < 0\)) produce decaying exponential modes in \(h(t)\): stable.
  • Poles on the imaginary axis produce undamped sinusoidal modes: marginally stable (output bounded but not decaying).
  • Poles in the right half-plane (\(\text{Re}(p_k) > 0\)) produce growing exponentials: unstable.
  • Complex conjugate poles at \(s = -\alpha \pm j\beta\) produce damped oscillatory modes \(e^{-\alpha t}\cos(\beta t)\) in \(h(t)\).

The distance from a pole to the imaginary axis \(\sigma = |\text{Re}(p)|\) determines the decay rate; the imaginary part \(\beta = |\text{Im}(p)|\) is the oscillation frequency.

8.3 Second-Order Systems

The prototype second-order system has transfer function

\[ H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \]

parameterised by the natural frequency \(\omega_n > 0\) and damping ratio \(\zeta \geq 0\). The characteristic polynomial \(s^2 + 2\zeta\omega_n s + \omega_n^2\) has roots

\[ p_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}. \]
  • Overdamped (\(\zeta > 1\)): two distinct real negative poles; step response is monotone.
  • Critically damped (\(\zeta = 1\)): repeated real pole at \(-\omega_n\); fastest monotone response.
  • Underdamped (\(0 < \zeta < 1\)): complex conjugate poles at \(-\zeta\omega_n \pm j\omega_d\) with \(\omega_d = \omega_n\sqrt{1-\zeta^2}\) (damped natural frequency); oscillatory step response.
  • Undamped (\(\zeta = 0\)): poles on imaginary axis; sustained oscillations.

The step response of the underdamped second-order system is

\[ y(t) = \left[1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\sin(\omega_d t + \phi)\right]u(t), \quad \phi = \arccos(\zeta). \]
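The closed form can be checked against a direct simulation of the ODE \(\ddot y + 2\zeta\omega_n\dot y + \omega_n^2 y = \omega_n^2 u(t)\) from zero initial state. The sketch below uses a small-step semi-implicit Euler integrator (the step size and the parameter values are arbitrary illustrative choices).

```python
import math

zeta, wn = 0.3, 5.0                     # underdamped case: 0 < zeta < 1
wd = wn * math.sqrt(1 - zeta ** 2)      # damped natural frequency
phi = math.acos(zeta)

def y_formula(t):
    return 1 - math.exp(-zeta * wn * t) / math.sqrt(1 - zeta ** 2) * math.sin(wd * t + phi)

# Semi-implicit Euler integration of y'' + 2 zeta wn y' + wn^2 y = wn^2.
dt, steps = 1e-5, 300000                # 3 seconds of simulated time
y, yd, max_err = 0.0, 0.0, 0.0
for i in range(1, steps + 1):
    ydd = wn ** 2 * (1.0 - y) - 2 * zeta * wn * yd
    yd += ydd * dt
    y += yd * dt
    if i % 5000 == 0:                   # compare against the formula periodically
        max_err = max(max_err, abs(y - y_formula(i * dt)))
```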

Chapter 9: The Z-Transform — Deeper Analysis

9.1 Region of Convergence and Signal Type

The ROC of the z-transform carries information about whether the signal is right-sided, left-sided, or two-sided:

  • Right-sided signal (zero for \(n < n_0\)): ROC is \(|z| > r_-\) (exterior of a disk).
  • Left-sided signal (zero for \(n > n_0\)): ROC is \(|z| < r_+\) (interior of a disk).
  • Two-sided signal: ROC is an annulus \(r_- < |z| < r_+\) (may be empty if \(r_- \geq r_+\)).
  • Finite-duration signal: ROC is all of \(\mathbb{C}\) except possibly \(z = 0\) or \(z = \infty\).

The ROC cannot contain any poles of \(X(z)\). Different ROCs for the same \(X(z)\) lead to different time-domain sequences.

9.2 Inverse Z-Transform

Three methods:

  1. Partial fractions: Expand \(X(z)/z\) in partial fractions, multiply back by \(z\) so that \(X(z) = \sum_k C_k z/(z - \gamma_k)\), and invert term by term.

  2. Power series expansion: Long division of \(X(z)\) in powers of \(z^{-1}\) (for right-sided signals) or \(z\) (for left-sided): the coefficients of \(z^{-n}\) directly give \(x[n]\).

  3. Contour integral: The formal inversion integral \(x[n] = \frac{1}{2\pi j}\oint_C X(z)z^{n-1}dz\) (integral over a contour in the ROC) equals the sum of residues of \(X(z)z^{n-1}\) at poles inside \(C\).

Example: Power series method. Find \(\mathcal{Z}^{-1}\{X(z)\}\) for \(X(z) = \frac{1}{1 - 0.5z^{-1}}\), ROC: \(|z| > 0.5\).

Expand as a geometric series: \(X(z) = \sum_{n=0}^{\infty}(0.5)^n z^{-n}\). By the definition of the z-transform (as a power series in \(z^{-1}\)), we read off \(x[n] = (0.5)^n u[n]\). This is consistent with the table entry \(a^n u[n] \xleftrightarrow{\mathcal{Z}} z/(z-a)\), which rearranges to \(1/(1-az^{-1})\).
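The long-division method is easy to mechanise. The sketch below divides \(B\) by \(A\) in powers of \(z^{-1}\) (coefficient lists with \(a_0 = 1\) assumed; the function name is an illustrative choice), reproducing \(x[n] = (0.5)^n u[n]\) for the example above.

```python
def long_division_inverse(b, a, n_terms):
    """Divide B(z^{-1}) by A(z^{-1}) (coefficient lists in powers of z^{-1},
    a[0] nonzero) to read off x[0], x[1], ... of a right-sided sequence."""
    x = []
    rem = list(b) + [0.0] * n_terms     # running remainder of the division
    for n in range(n_terms):
        q = rem[n] / a[0]               # next quotient coefficient = x[n]
        x.append(q)
        for i, ai in enumerate(a):      # subtract q * A shifted by n
            if n + i < len(rem):
                rem[n + i] -= q * ai
    return x

# X(z) = 1 / (1 - 0.5 z^{-1}), ROC |z| > 0.5  ->  x[n] = (0.5)^n u[n]
x_seq = long_division_inverse([1.0], [1.0, -0.5], 8)
```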

9.3 Poles, Stability, and Frequency Response of DT Systems

For a causal DT LTI system:

  • Poles strictly inside the unit circle (\(|\gamma_k| < 1\)): decaying modes \(\gamma_k^n u[n]\) → BIBO stable.
  • Poles on the unit circle: undamped oscillations → marginally stable.
  • Poles outside the unit circle (\(|\gamma_k| > 1\)): growing modes → unstable.

The DTFT of the impulse response is \(H(e^{j\omega}) = H(z)|_{z=e^{j\omega}}\), valid when the unit circle is in the ROC (i.e., for BIBO-stable causal systems). The geometric interpretation: \(|H(e^{j\omega})|\) at a given \(\omega\) is proportional to the product of distances from \(e^{j\omega}\) to the zeros divided by the product of distances to the poles. Poles near the unit circle create peaks in the frequency response; zeros on the unit circle create nulls.

9.4 Block Diagrams and System Realization

A rational transfer function \(H(z) = B(z)/A(z)\) can be realized in hardware or software using delay elements (unit delays \(z^{-1}\)), multipliers, and adders. The canonical forms are:

  • Direct Form I: Implement numerator (zeros) and denominator (poles) as separate filter sections cascaded.
  • Direct Form II (canonic form): Share delay elements between the two sections, halving the memory requirements.
  • Cascade form: Factor \(H(z)\) into first- and second-order sections connected in series.
  • Parallel form: Partial-fraction-expand \(H(z)\) into a sum of first- and second-order sections.
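A Direct Form II update can be sketched in a few lines: one shared delay line \(d\) feeds both the recursive (pole) section and the feedforward (zero) taps. The function name and coefficient-list convention (\(a_0 = 1\)) are illustrative choices, not a standard API.

```python
def direct_form_ii(b, a, x):
    """Direct Form II realisation of H(z) = B(z)/A(z) with a[0] = 1:
    a single delay line d is shared by the pole and zero sections."""
    M = max(len(a), len(b))
    b = list(b) + [0.0] * (M - len(b))
    a = list(a) + [0.0] * (M - len(a))
    d = [0.0] * (M - 1)                 # shared delay elements
    y = []
    for xn in x:
        w0 = xn - sum(a[i] * d[i - 1] for i in range(1, M))              # poles
        y.append(b[0] * w0 + sum(b[i] * d[i - 1] for i in range(1, M)))  # zeros
        d = [w0] + d[:-1]               # advance the delay line
    return y

# Impulse response of H(z) = 1/(1 - 0.5 z^{-1}): expect h[n] = 0.5^n.
impulse = [1.0] + [0.0] * 7
h = direct_form_ii([1.0], [1.0, -0.5], impulse)
```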

Chapter 10: Additional Topics — Fourier Transform System Analysis

10.1 Interconnected Systems in the Frequency Domain

The convolution theorem makes frequency-domain analysis of cascaded systems trivial: \(Y(\omega) = H_1(\omega)H_2(\omega)X(\omega)\). Parallel systems give \(Y(\omega) = [H_1(\omega) + H_2(\omega)]X(\omega)\). Feedback systems (in the frequency domain, with all signals Fourier-transformable) yield

\[ Y(\omega) = \frac{H_1(\omega)}{1 + H_1(\omega)H_2(\omega)}X(\omega). \]

10.2 Practical Filters

Ideal filters (sharp cutoff, linear phase) are physically unrealizable due to their non-causal impulse responses. Practical filter design approximates the ideal characteristics. Major classical design approaches:

  • Butterworth filters: Maximally flat magnitude response in the passband; monotone rolloff; poles on a circle in the left half-plane.
  • Chebyshev filters (Type I): Equiripple in the passband, monotone in the stopband.
  • Chebyshev filters (Type II): Monotone in the passband, equiripple in the stopband.
  • Elliptic (Cauer) filters: Equiripple in both bands; sharpest possible rolloff for given specifications.

The order \(N\) of the filter determines the slope of the rolloff (approximately \(-20N\) dB/decade for Butterworth). Higher order gives sharper cutoff but more phase distortion and greater implementation complexity.

10.3 The Relationship Between Laplace and Fourier Transforms

The Fourier transform is a special case of the Laplace transform evaluated on the imaginary axis: \(X(\omega) = X_L(s)|_{s=j\omega}\), provided the imaginary axis lies within the ROC. For absolutely integrable signals (\(\int|x(t)|\,dt < \infty\)), the ROC of the bilateral Laplace transform includes the imaginary axis, and both transforms exist and are related by \(s = j\omega\).

This relationship is the bridge: Laplace theory provides the poles-and-ROC picture (useful for transient analysis and stability), while Fourier theory provides the spectral/frequency picture (useful for filtering and modulation).

10.4 Relationship Between Z-Transform and DTFT

The DTFT is the z-transform on the unit circle: \(X(e^{j\omega}) = X(z)|_{z = e^{j\omega}}\), valid when the unit circle is in the ROC. For stable causal sequences this is always the case. For unstable sequences the DTFT may not exist in the classical sense (though it may exist as a distribution).

The z-transform variable \(z = re^{j\omega}\) generalises the DTFT by introducing the radial variable \(r\): moving \(r\) away from 1 corresponds to windowing the sequence by a decaying exponential \(r^{-n}\) (analogous to how the Laplace \(\sigma\) introduces exponential convergence factors).


Appendix A: Mathematical Prerequisites

A.1 Complex Numbers and Exponentials

Euler’s formula \(e^{j\theta} = \cos\theta + j\sin\theta\) is the foundation for representing sinusoids as complex exponentials. Key identities:

\[ \cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2}, \qquad \sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{2j}. \]

For a complex number \(z = x + jy = re^{j\theta}\): modulus \(|z| = \sqrt{x^2+y^2}\), argument \(\angle z = \arctan(y/x)\) (adjusted to the correct quadrant of \((x, y)\)), conjugate \(z^* = x - jy = re^{-j\theta}\).

A.2 Partial Fraction Decomposition — Extended Examples

Example: Complex poles. Consider \(X(s) = \frac{s+3}{s^2+2s+5}\). The denominator factors as \((s+1)^2 + 4\), so poles are at \(s = -1 \pm 2j\). Rewrite the numerator: \[ s + 3 = (s+1) + 2. \]

Thus

\[ X(s) = \frac{s+1}{(s+1)^2+4} + \frac{2}{(s+1)^2+4}. \]

Using the table: \(\mathcal{L}^{-1}\{(s+1)/((s+1)^2+4)\} = e^{-t}\cos(2t)u(t)\) and \(\mathcal{L}^{-1}\{2/((s+1)^2+4)\} = e^{-t}\sin(2t)u(t)\). Hence

\[ x(t) = e^{-t}(\cos(2t) + \sin(2t))u(t) = \sqrt{2}\,e^{-t}\cos(2t - \pi/4)\,u(t). \]

A.3 Geometric Series

The finite geometric series \(\sum_{n=0}^{N-1} a^n = (1 - a^N)/(1-a)\) for \(a \neq 1\) and the infinite geometric series \(\sum_{n=0}^{\infty} a^n = 1/(1-a)\) for \(|a| < 1\) appear throughout z-transform computations. Every rational z-transform can be interpreted as a generating function of a geometric-type sequence.

A.4 Dirichlet Kernel and Fourier Series Convergence

The partial Fourier series sum \(S_K(t) = \sum_{k=-K}^{K} c_k e^{jk\omega_0 t}\) can be written as a convolution:

\[ S_K(t) = \frac{1}{T_0}\int_{T_0} x(\tau)\,D_K(t - \tau)\,d\tau, \]

where the Dirichlet kernel is

\[ D_K(t) = \sum_{k=-K}^{K} e^{jk\omega_0 t} = \frac{\sin((2K+1)\omega_0 t/2)}{\sin(\omega_0 t/2)}. \]

As \(K \to \infty\), \(D_K(t)/T_0 \to \delta(t)\) (in the distributional sense), and \(S_K(t) \to x(t)\) at continuity points. The Gibbs overshoot arises because the Dirichlet kernel has oscillatory sidelobes of fixed relative magnitude that do not diminish as \(K \to \infty\).
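The closed form of the kernel is a finite geometric sum and is easy to verify numerically (the parameter values below are arbitrary illustrative choices).

```python
import math

def dirichlet_sum(t, K, w0):
    """Direct sum: imaginary parts of e^{j k w0 t} cancel in pairs over -K..K."""
    return sum(math.cos(k * w0 * t) for k in range(-K, K + 1))

def dirichlet_closed(t, K, w0):
    """Closed form sin((2K+1) w0 t / 2) / sin(w0 t / 2), valid off the zeros
    of the denominator."""
    return math.sin((2 * K + 1) * w0 * t / 2) / math.sin(w0 * t / 2)

w0, K, t = 2 * math.pi, 7, 0.13
lhs = dirichlet_sum(t, K, w0)
rhs = dirichlet_closed(t, K, w0)
```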


Appendix B: Key Formula Summary

B.1 Continuous-Time Fourier Series

\[ x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}, \qquad c_k = \frac{1}{T_0}\int_{T_0} x(t)e^{-jk\omega_0 t}dt. \]

Parseval: \(\frac{1}{T_0}\int_{T_0}|x(t)|^2 dt = \sum_k|c_k|^2\).

B.2 Continuous-Time Fourier Transform

\[ X(\omega) = \int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt, \qquad x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}X(\omega)e^{j\omega t}d\omega. \]

Parseval: \(\int_{-\infty}^{\infty}|x(t)|^2 dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2 d\omega\).

Convolution: \(\mathcal{F}\{x*h\} = X(\omega)H(\omega)\).

B.3 Laplace Transform

\[ X(s) = \int_{-\infty}^{\infty}x(t)e^{-st}dt, \qquad s = \sigma + j\omega. \]

Differentiation (unilateral): \(\mathcal{L}\{x^{(n)}\} = s^n X(s) - \sum_{k=0}^{n-1}s^{n-1-k}x^{(k)}(0^-)\).

Transfer function: \(H(s) = Y(s)/X(s)\big|_{\text{zero ICs}} = B(s)/A(s)\).

BIBO stable \(\Leftrightarrow\) all poles of \(H(s)\) have \(\text{Re}(p) < 0\).

B.4 Z-Transform

\[ X(z) = \sum_{n=-\infty}^{\infty}x[n]z^{-n}. \]

Time shift: \(\mathcal{Z}\{x[n-k]\} = z^{-k}X(z)\).

Convolution: \(\mathcal{Z}\{x*h\} = X(z)H(z)\).

BIBO stable (causal) \(\Leftrightarrow\) all poles of \(H(z)\) satisfy \(|p| < 1\).

B.5 Sampling

Nyquist rate: \(f_s \geq 2f_{\max}\) (sampling at twice the highest frequency).

Aliased frequency: a sinusoid at \(f_0 > f_s/2\) appears at \(\left|f_0 - \lfloor f_0/f_s + 1/2\rfloor f_s\right|\) (e.g., 900 Hz sampled at 1000 Hz appears at 100 Hz).

Reconstruction: \(x(t) = \sum_n x[nT_s]\,\text{sinc}((t - nT_s)/T_s)\).

B.6 System Properties Summary

System properties in terms of the impulse response / transfer function:

  • Causal: CT \(h(t) = 0\) for \(t < 0\); DT \(h[n] = 0\) for \(n < 0\).
  • BIBO stable: CT \(\int\lvert h(t)\rvert\,dt < \infty\); DT \(\sum\lvert h[n]\rvert < \infty\).
  • Stable (via transfer function): CT all poles in the OLHP; DT all poles inside the unit circle.
  • Memoryless: CT \(h(t) = c\,\delta(t)\); DT \(h[n] = c\,\delta[n]\).
  • Invertible: CT \(H(s)H_{inv}(s) = 1\); DT \(H(z)H_{inv}(z) = 1\).