ECE 203: Probability Theory and Statistics 1
Shechem Kandeepan Sumanthiran
Estimated study time: 5 minutes
Sources and References
- Equivalent UW courses — STAT 230 (Probability), STAT 231 (Statistics), STAT 240 (Honours Probability)
- Primary textbook — Sheldon Ross, A First Course in Probability, 10th ed., Pearson, 2019.
- Supplementary references — Geoffrey Grimmett and David Stirzaker, Probability and Random Processes, 4th ed., Oxford, 2020; Alberto Leon-Garcia, Probability, Statistics, and Random Processes for Electrical Engineering, 3rd ed., Pearson, 2008.
Equivalent UW Courses
ECE 203 is the first half of the ECE probability / statistics sequence (ECE 307 is the statistics-heavy follow-up). Its content lines up almost exactly with STAT 230 — axioms, conditional probability, random variables, joint distributions, expectation, moments, and the CLT — and the honours analogue STAT 240 covers the same topics at a higher level of mathematical rigour. STAT 231 is the statistics course that follows STAT 230 and maps onto ECE 307 rather than ECE 203. A student who has taken STAT 230 (or STAT 240) can treat ECE 203 as essentially the same course re-presented from an engineering angle.
What This Course Adds Beyond the Equivalents
- Engineering-oriented examples. Random-variable and joint-distribution problems are motivated by noise in communication channels, reliability of components, and signal-plus-noise models. Jointly Gaussian random variables get explicit treatment because they dominate downstream ECE courses (detection, estimation, random signals).
- Preview of random processes (time-permitting). A short introduction at the end that STAT 230 does not include; full treatment is in STAT 333 / ECE 313.
- Omissions and differences. Relative to STAT 230: almost no statistics (no estimators, hypothesis testing, or likelihood) — those move to ECE 307. Relative to STAT 240: fewer proofs and less measure-theoretic framing; generating functions are used as computational tools rather than developed formally.
Topic Summary
Axioms, Sample Spaces, and Counting
The Kolmogorov axioms on a sample space \( \Omega \), events as subsets, and probability as a measure. Equally-likely outcomes and the combinatorial counting toolkit: permutations, combinations, inclusion-exclusion.
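The equally-likely-outcomes model and inclusion-exclusion can be sketched concretely; the dice events and card-hand count below are standard illustrations, not taken from the course notes:

```python
from fractions import Fraction
from math import comb

# Equally likely outcomes: the sample space of two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
P = lambda event: Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 6          # event: first die shows 6
B = lambda w: w[1] == 6          # event: second die shows 6

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
lhs = P(lambda w: A(w) or B(w))
rhs = P(A) + P(B) - P(lambda w: A(w) and B(w))
assert lhs == rhs == Fraction(11, 36)

# Counting toolkit: number of unordered 5-card hands from 52 cards
assert comb(52, 5) == 2_598_960
```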
Conditional Probability and Independence
Definition \( P(A \mid B) = P(A \cap B)/P(B) \), the multiplication rule, the law of total probability, and Bayes’ theorem. Independence of events and its failure modes (pairwise vs mutual independence).
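The chain from the law of total probability to Bayes' theorem can be traced numerically; the prevalence and test-accuracy figures below are made up purely for illustration:

```python
from fractions import Fraction

# Hypothetical diagnostic-test numbers (illustrative only).
prior = Fraction(1, 100)        # P(D): disease prevalence
sens  = Fraction(95, 100)       # P(+ | D): sensitivity
spec  = Fraction(90, 100)       # P(- | not D): specificity

# Law of total probability: P(+) over the partition {D, not D}
p_pos = sens * prior + (1 - spec) * (1 - prior)

# Bayes' theorem: P(D | +) = P(+ | D) P(D) / P(+)
posterior = sens * prior / p_pos
assert posterior == Fraction(19, 217)   # about 8.8%, despite a 95% sensitive test
```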
Discrete Random Variables
PMF, CDF, and expectation. Standard families: Bernoulli, binomial, geometric, negative binomial, hypergeometric, Poisson. Variance and higher moments; the Poisson approximation to the binomial.
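The Poisson approximation to the binomial can be checked term by term; the values of n and p below are arbitrary (large n, small p, so λ = np is moderate):

```python
from math import comb, exp, factorial

n, p = 1000, 0.003          # large n, small p
lam = n * p                 # Poisson rate λ = np = 3

binom = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)
poiss = lambda k: exp(-lam) * lam**k / factorial(k)

# The two PMFs agree closely at every point
for k in range(10):
    assert abs(binom(k) - poiss(k)) < 1e-3
```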
Continuous Random Variables
PDF and CDF, expectation via
\[ E[X] = \int_{-\infty}^{\infty} x\,f_X(x)\,dx \]
Standard continuous families: uniform, exponential, gamma, normal. Transformations of a single random variable via the change-of-variables formula.
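A minimal numerical sketch of the expectation integral for an exponential density (the rate λ = 2, the truncation point, and the Riemann-sum grid are arbitrary choices):

```python
from math import exp, isclose

# Exponential(λ) density on [0, ∞); its mean should be 1/λ.
lam = 2.0
f = lambda x: lam * exp(-lam * x)

# Riemann-sum approximation of E[X] = ∫ x f(x) dx, truncated at x = 40
dx = 1e-4
mean = sum(i * dx * f(i * dx) * dx for i in range(int(40 / dx)))
assert isclose(mean, 1 / lam, rel_tol=1e-3)
```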
Joint Distributions
Joint PMF / PDF for pairs and collections of random variables. Marginals, conditional distributions, and independence. Sums of independent random variables via convolution; the distribution of \( X + Y \) for standard pairings (e.g. sums of independent Poissons, sums of independent normals).
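The convolution formula for the sum of two independent Poissons can be verified directly (the rates 2 and 3 are arbitrary illustrative values):

```python
from math import exp, factorial, isclose

poiss = lambda lam, k: exp(-lam) * lam**k / factorial(k)

# Convolution: P(X + Y = n) = Σ_k P(X = k) P(Y = n - k)
lam1, lam2 = 2.0, 3.0
conv = lambda n: sum(poiss(lam1, k) * poiss(lam2, n - k) for k in range(n + 1))

# Matches the Poisson(λ1 + λ2) PMF term by term
for n in range(15):
    assert isclose(conv(n), poiss(lam1 + lam2, n), rel_tol=1e-9)
```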
Expectation Properties, Covariance, Correlation
Linearity of expectation (with or without independence), variance of a sum, covariance \( \operatorname{Cov}(X,Y) \), and the correlation coefficient. Conditional expectation \( E[X \mid Y] \), the tower property, and variance decomposition.
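The variance-of-a-sum identity holds for any joint PMF; the small table below is a made-up example used only to check it:

```python
# A small made-up joint PMF on {0,1} x {0,1,2} (probabilities sum to 1).
pmf = {(0, 0): 0.1, (0, 1): 0.2, (0, 2): 0.1,
       (1, 0): 0.2, (1, 1): 0.1, (1, 2): 0.3}

# Expectation of an arbitrary function g(X, Y) under the joint PMF
E = lambda g: sum(g(x, y) * p for (x, y), p in pmf.items())

EX, EY = E(lambda x, y: x), E(lambda x, y: y)
cov = E(lambda x, y: (x - EX) * (y - EY))

var_X   = E(lambda x, y: (x - EX) ** 2)
var_Y   = E(lambda x, y: (y - EY) ** 2)
var_sum = E(lambda x, y: (x + y - EX - EY) ** 2)

# Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)
assert abs(var_sum - (var_X + var_Y + 2 * cov)) < 1e-12
```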
Moment Generating Functions
The MGF \( M_X(t) = E[e^{tX}] \) as a bookkeeping device for moments and as a tool for identifying distributions of sums of independent random variables (since MGFs multiply).
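A quick numeric check that the MGF of a sum of independent variables is the product of their MGFs, using two Bernoulli variables with arbitrary parameters:

```python
from math import exp, isclose

# MGF of Bernoulli(p): M(t) = 1 - p + p e^t
p, q = 0.3, 0.6
M_X = lambda t: 1 - p + p * exp(t)
M_Y = lambda t: 1 - q + q * exp(t)

# PMF of X + Y (independent) by convolution, then its MGF directly
pmf_sum = {0: (1 - p) * (1 - q), 1: p * (1 - q) + (1 - p) * q, 2: p * q}
M_sum = lambda t: sum(prob * exp(t * k) for k, prob in pmf_sum.items())

# M_{X+Y}(t) = M_X(t) M_Y(t) at every t
for t in (-1.0, 0.0, 0.5, 2.0):
    assert isclose(M_sum(t), M_X(t) * M_Y(t), rel_tol=1e-12)
```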
Jointly Gaussian Random Variables
Bivariate and multivariate normal density, mean vector and covariance matrix, and the fact that uncorrelated jointly Gaussian variables are independent. This section is expanded beyond STAT 230 because jointly Gaussian models are ubiquitous in communications and signal processing.
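For reference, the bivariate case of the density in standard form is
\[ f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\!\left( -\frac{1}{2(1-\rho^2)} \left[ \frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho\,(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2} \right] \right) \]
When \( \rho = 0 \) the cross term vanishes, the exponent splits into a sum, and the density factors into the product of the two marginal normal densities — exactly the independence claim above.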
Inequalities and Limit Theorems
Markov and Chebyshev inequalities, the weak law of large numbers, and the central limit theorem: if \( X_1, X_2, \ldots \) are i.i.d. with mean \( \mu \) and variance \( \sigma^2 \), then
\[ \frac{\sqrt n\,(\bar X_n - \mu)}{\sigma} \;\xrightarrow{d}\; \mathcal N(0,1) \]
Confidence intervals are introduced as an immediate application of the CLT, with the statistical-inference theory deferred to ECE 307.
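A Monte Carlo sketch of the CLT-based interval (the Uniform(0,1) population, sample size, and trial count are arbitrary choices, and σ is treated as known):

```python
import random

random.seed(0)

# X_i ~ Uniform(0, 1): mu = 1/2, sigma^2 = 1/12
mu, sigma = 0.5, (1 / 12) ** 0.5
n, trials, z = 100, 2000, 1.96   # z = 1.96 for a nominal 95% interval

covered = 0
for _ in range(trials):
    xbar = sum(random.random() for _ in range(n)) / n
    half = z * sigma / n ** 0.5          # CLT-based half-width
    covered += (xbar - half <= mu <= xbar + half)

coverage = covered / trials
# Empirical coverage should sit near the nominal 95%
assert 0.93 < coverage < 0.97
```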
Introduction to Random Processes (time permitting)
Basic definitions: a random process as an indexed family of random variables; mean and autocorrelation functions; stationarity. Serves as a bridge to ECE 313 / STAT 333.