ECE 451: Software Requirements Specification and Analysis
Byron Weber Becker
Estimated study time: 46 minutes
Sources and References
Primary references — A. van Lamsweerde, Requirements Engineering: From System Goals to UML Models to Software Specifications, Wiley, 2009; K. Pohl, Requirements Engineering: Fundamentals, Principles, and Techniques, Springer, 2010. Supplementary texts — K. E. Wiegers and J. Beatty, Software Requirements, 3rd ed., Microsoft Press, 2013; G. Kotonya and I. Sommerville, Requirements Engineering, Wiley, 1998; C. Larman, Applying UML and Patterns, 3rd ed., Prentice Hall, 2004. Online resources — IEEE Std 830-1998 (Software Requirements Specifications recommended practice); IREB Certified Professional for Requirements Engineering (CPRE) foundation-level syllabus.
Chapter 1: Introduction to Requirements Engineering
1.1 Why Requirements Analysis Is Hard
Software development fails more often at the requirements stage than at any other. The Standish Group’s CHAOS Report consistently identifies “incomplete requirements” and “lack of user involvement” among the top causes of project failure. Yet requirements engineering receives far less academic attention than algorithm design or compiler construction.
The difficulty is not technical — it is epistemological and social. Requirements engineering asks: what problem are we actually solving, for whom, and under what constraints? These questions are entangled with organizational politics, tacit domain knowledge, conflicting stakeholder goals, and the fundamental inability of most people to articulate what they need before they have seen a prototype.
Several forces compound the difficulty:
- Incomplete knowledge: Stakeholders often do not know what they want, or know only partial slices of a complex whole.
- Tacit knowledge: Domain experts hold knowledge that they cannot easily verbalize because it has become automatic.
- Volatility: Requirements change because the business changes, because stakeholders learn from early system versions, or because the competitive landscape shifts.
- Conflicting stakeholders: Different groups have legitimately different goals that may be mutually inconsistent.
- Ambiguity of natural language: Ordinary English sentences are systematically ambiguous, vague, or open to multiple interpretations.
- The specification gap: There is always a distance between what a stakeholder intends and what a developer reads in the same document.
1.2 The Cost of Requirements Errors
Fixing a requirements defect after deployment costs roughly 100 times more than catching it during requirements analysis. The intuition is that each stage of the lifecycle — design, implementation, test, deployment — embeds the defect deeper and builds more artifacts on top of it that must be reworked.
This motivates the core goal of the course: to study systematic, model-based approaches to eliciting, representing, analyzing, and validating requirements so that defects are caught as early as possible.
1.3 Course Philosophy
The course treats requirements engineering as an engineering discipline, not an art. This means:
- Requirements are expressed in precise notations (UML, OCL, state machines, temporal logic) that support formal analysis.
- Models are constructed at multiple levels of abstraction and must be mutually consistent.
- The process of elicitation, analysis, and validation is structured and repeatable.
- Deliverables — the Software Requirements Specification (SRS) — are artifacts that can be inspected, measured, and verified.
Chapter 2: The Requirements Engineering Reference Model
2.1 The Problem Frame
Michael Jackson’s problem frames approach provides a foundational vocabulary. The world can be divided into:
- The domain: the portion of the real world in which the problem exists and with which the system will interact.
- The machine: the software system to be built.
- The requirements: descriptions of desired behaviors of the domain, achievable by the machine’s operation.
The central insight: Requirements hold over the real world; specifications describe the machine. The machine can only satisfy requirements if the domain assumptions hold. Formally, if D denotes domain properties, S the specification, and R the requirements:
\[ D \wedge S \Rightarrow R \]
Any requirements document that omits domain assumptions is incomplete.
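To make the formula concrete, here is a hypothetical instantiation for a library self-checkout kiosk (the phenomena are invented for illustration, not taken from the course notes):

```latex
% Hypothetical instantiation of D \wedge S \Rightarrow R
\[
\begin{aligned}
D &: \text{each membership card scanned at the kiosk identifies exactly one member} \\
S &: \text{on a card-scan event, the machine records the loan against the identified member} \\
R &: \text{every borrowing by a member is recorded against that member}
\end{aligned}
\]
```

If the domain assumption fails, say because members share cards, then the specification alone no longer guarantees the requirement, even though the machine behaves exactly as specified.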
2.2 Context Diagrams
A context diagram shows the system under development (SUD) at its center, surrounded by external agents — people, organizations, hardware devices, or other software systems — with labeled interfaces between them.
Each interface is characterized by:
- Monitorable inputs: phenomena in the environment that the system can observe but not control. Example: a sensor reading a room temperature.
- Controllable outputs: phenomena that the system can affect. Example: a command sent to a thermostat actuator.
- Shared phenomena: events or data visible to both the system and an external agent.
The context diagram serves two purposes: it fixes the system boundary (what is inside versus outside the machine) and it identifies all stakeholders and external systems that must be consulted during elicitation.
2.3 Requirements vs. Specifications: A Deeper Look
A common confusion in practice conflates what the system should do (requirements) with how the system should do it (design) and what interfaces the system exposes (specifications).
| Concept | Predicated over | Written by | Verified against |
|---|---|---|---|
| Requirement | Real-world phenomena | Stakeholders + analysts | Acceptance tests in the real world |
| Specification | Software behavior | Analysts + architects | Unit and integration tests |
| Domain assumption | Environment only | Domain experts | Not verified by the system |
Understanding these distinctions prevents requirements documents from smuggling in design decisions prematurely, which forecloses implementation choices unnecessarily.
Chapter 3: Domain Modelling
3.1 Purpose and Scope
Domain modelling creates a precise structural description of the environment in which the system will operate. It answers the question: what entities, relationships, and constraints exist in the problem domain — independent of any software solution? A rigorous domain model prevents implementation-level confusion by establishing a shared vocabulary and ensuring that the system is built to serve real-world entities correctly.
3.2 Entity-Relationship Diagrams
The classical Entity-Relationship (ER) model captures:
- Entities: things in the domain with independent existence (e.g., Customer, Order, Product).
- Attributes: properties of entities (e.g., Customer.name, Order.date).
- Relationships: associations between entities, characterized by cardinality (one-to-one, one-to-many, many-to-many) and participation (mandatory vs. optional).
ER diagrams are a good starting point for domain modelling but lack expressive power for behavioral or constraint-heavy domains. UML class diagrams are preferred in this course.
3.3 UML Class Diagrams
3.3.1 Core Notation
A UML class diagram shows classes as rectangles divided into three compartments: class name, attributes, and operations. Relationships between classes are shown as lines with specific adornments:
- Association: a structural relationship. Labeled with a role name at each end. Multiplicity is expressed as `1`, `0..1`, `*`, `1..*`, or specific ranges like `2..5`.
- Aggregation (hollow diamond): a whole-part relationship where the part can exist without the whole.
- Composition (filled diamond): a strong whole-part relationship; the part cannot exist without the whole, and the whole owns the part’s lifecycle.
- Generalization (open arrowhead): an is-a relationship (subclass to superclass).
- Dependency (dashed arrow): a weaker relationship indicating that one class uses another.
3.3.2 Example: Library Domain
Consider a public library domain. The class diagram might include:
- `Library` with attributes `name: String`, `address: String`
- `Book` with attributes `isbn: String`, `title: String`, `author: String`
- `Copy` with attributes `copyId: String`, `condition: Condition`
- `Member` with attributes `memberId: String`, `name: String`, `email: String`
- `Loan` with attributes `loanDate: Date`, `dueDate: Date`, `returnDate: Date [0..1]`
Key relationships: Library composes Copy (a copy belongs to exactly one library and is destroyed when the library closes); Loan associates Copy and Member; Book has zero or more Copy instances.
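The structural part of this model can be sketched directly as classes. This is only a sketch of the diagram's structure under the stated relationships (names follow the class diagram above; behaviour and invariant enforcement are omitted):

```python
# Structural sketch of the library domain model; no behaviour yet.
class Library:
    def __init__(self, name, address):
        self.name, self.address = name, address
        self.copies = []                  # composition: Library owns its copies

class Book:
    def __init__(self, isbn, title, author):
        self.isbn, self.title, self.author = isbn, title, author
        self.copies = []                  # a Book has zero or more Copy instances

class Copy:
    def __init__(self, copy_id, condition, library, book):
        self.copy_id, self.condition = copy_id, condition
        self.library, self.book = library, book   # exactly one Library and Book
        library.copies.append(self)
        book.copies.append(self)
```

Building a few objects from these classes is, in effect, drawing an object diagram: if no sensible configuration can be constructed, the class model is suspect.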
3.3.3 Object Diagrams
An object diagram is a snapshot of the class diagram at a particular instant: it shows specific objects (instances) and their link values. Object diagrams are invaluable for checking whether a class diagram is sensible — if you cannot draw a realistic object diagram consistent with the class diagram, the class diagram is probably wrong.
3.4 OCL: Object Constraint Language
The UML class diagram notation alone is insufficiently expressive for many domain constraints. OCL is a formal, side-effect-free language for expressing constraints on UML models.
3.4.1 OCL Context and Invariants
An OCL invariant is a Boolean expression that must be true for all instances of a class at all times.
```
context Copy
inv: self.condition <> Condition::Destroyed implies self.library <> null
```
This invariant states that a Copy that is not destroyed must belong to a library.
3.4.2 OCL Navigation
OCL uses dot notation to navigate associations. Given a Loan object l:
- `l.copy` navigates to the `Copy` involved in the loan.
- `l.copy.book` navigates to the corresponding `Book`.
- `l.member.loans` navigates to all loans for the same member (a collection).
3.4.3 Collection Operations
OCL provides operations over collections:
| Operation | Meaning |
|---|---|
| `col->size()` | Number of elements |
| `col->isEmpty()` | True if empty |
| `col->includes(x)` | True if x is in the collection |
| `col->select(expr)` | Subset satisfying expr |
| `col->collect(expr)` | New collection by mapping |
| `col->forAll(v \| expr)` | True if expr holds for every element |
| `col->exists(v \| expr)` | True if expr holds for at least one element |
3.4.4 Pre- and Post-conditions in OCL
OCL can also express pre- and post-conditions on operations:
```
context Loan::return()
pre: self.returnDate = null
post: self.returnDate = Date::today()
```
Pre-conditions state what must be true when an operation is called; post-conditions state what must be true after it returns.
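The contract above can be mirrored in code, with assertions playing the role of pre- and post-conditions. This is an illustrative Python analogue, not part of the course material: OCL's `null` maps to `None`, and because `return` is a reserved word in Python the method is named `return_copy`:

```python
import datetime

# Python analogue of the OCL contract on Loan::return().
class Loan:
    def __init__(self, loan_date, due_date):
        self.loan_date = loan_date
        self.due_date = due_date
        self.return_date = None          # returnDate [0..1]: not yet returned

    def return_copy(self):
        assert self.return_date is None                    # pre: still on loan
        self.return_date = datetime.date.today()
        assert self.return_date == datetime.date.today()   # post: returned today
```

Calling `return_copy` on an already-returned loan violates the pre-condition and fails the assertion, just as the OCL contract forbids it.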
3.4.5 Extended Example
```
context Member
inv memberLoanLimit:
  self.loans->select(l | l.returnDate = null)->size() <= 5

context Copy
inv noSimultaneousLoans:
  self.loans->select(l | l.returnDate = null)->size() <= 1
```
The first invariant limits each member to at most five active loans. The second ensures a copy can be on loan to at most one member at a time.
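Both invariants translate directly into executable checks. The sketch below is illustrative (a loan whose `return_date` is `None` corresponds to OCL's `returnDate = null`, i.e. an active loan):

```python
# Direct Python translations of the two OCL invariants above.
def member_loan_limit(member, limit=5):
    active = [l for l in member.loans if l.return_date is None]
    return len(active) <= limit          # at most `limit` active loans

def no_simultaneous_loans(copy):
    active = [l for l in copy.loans if l.return_date is None]
    return len(active) <= 1              # at most one borrower at a time
```

Checks like these can be run over a snapshot of objects (an object diagram) to detect states that violate the domain model.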
Chapter 4: Functional Modelling
4.1 Functions as Modelling Notations
A functional model describes what the system does in terms of transformations from inputs to outputs, without specifying how these transformations are implemented. The primary tools are:
- Pre- and post-condition specifications for individual operations.
- Use-case diagrams for high-level functional decomposition.
- Functions defined over class diagrams, showing how operations create, update, or query objects.
4.2 Pre- and Post-conditions
The notation `@pre` in OCL refers to the value of an attribute before the operation executes:

```
context BankAccount::withdraw(amount: Real)
pre: amount > 0 and self.balance >= amount
post: self.balance = self.balance@pre - amount
```
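The same contract rendered in Python makes the `@pre` mechanism concrete: a local snapshot (`balance_pre` below, a name chosen for this sketch) stands in for `balance@pre`, and assertions play the role of the pre- and post-conditions:

```python
# Illustrative Python rendering of the OCL withdraw contract.
class BankAccount:
    def __init__(self, balance=0.0):
        self.balance = balance

    def withdraw(self, amount):
        assert amount > 0 and self.balance >= amount   # pre
        balance_pre = self.balance                     # snapshot of balance@pre
        self.balance -= amount
        assert self.balance == balance_pre - amount    # post
```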
4.3 Use-Case Diagrams
Use-case diagrams provide a high-level functional map of a system. They show:
- Actors: external entities (human or system) that interact with the system. Actors are drawn outside the system boundary.
- Use cases: units of functionality that deliver value to an actor. Drawn as ovals inside the system boundary.
- Associations: lines between actors and the use cases they participate in.
- Include relationship (`<<include>>`): one use case always invokes another as a sub-step.
- Extend relationship (`<<extend>>`): one use case conditionally extends another at a defined extension point.
- Generalization: an actor or use case that specializes another.
4.3.1 Use-Case Descriptions
A use-case diagram alone is not sufficient. Each use case must be elaborated with a textual template:
| Field | Content |
|---|---|
| Use Case Name | Brief verb-noun phrase |
| Primary Actor | Who initiates |
| Preconditions | What must be true before |
| Main Success Scenario | Numbered steps |
| Extensions | Alternative flows from specific step numbers |
| Postconditions | What is true after success |
| Priority | High / Medium / Low |
| Frequency | Estimated invocation rate |
4.4 Functions over a Class Diagram
An operation defined over a class diagram is a function from a pre-state (an object diagram satisfying all invariants) plus input parameters to a post-state (another valid object diagram) plus outputs. This view enforces that operations preserve class diagram invariants, which prevents a common error: defining operations that violate structural domain properties.
Chapter 5: Behavioural Modelling
5.1 Dynamic Behaviour
Structural and functional models capture what exists and what transformations are possible, but they do not capture ordering constraints: some operations can only occur in certain sequences; some events must be responded to within time bounds; some state combinations are forbidden. Behavioural modelling addresses these temporal ordering constraints.
5.2 Use Cases, Scenarios, and Sequence Diagrams
5.2.1 Scenarios
A scenario is a concrete, end-to-end narrative of a particular interaction with the system. Scenarios are invaluable for elicitation (concrete stories are easier for stakeholders to evaluate than abstract descriptions) and for validation (they become the basis for acceptance tests).
5.2.2 Sequence Diagrams
A UML sequence diagram shows a specific scenario as a series of messages exchanged between objects (or actors) over time.
Key notation elements:
- Lifelines: vertical dashed lines, each representing an object or actor, labeled at the top with the instance name and/or class.
- Messages: horizontal arrows between lifelines, labeled with the message name and parameters. A filled arrowhead indicates a synchronous call; an open arrowhead indicates an asynchronous message; a dashed arrow indicates a return.
- Activation bars: thin rectangles on a lifeline showing when an object is actively executing.
- Combined fragments: rectangular frames with an operator label (alt, opt, loop, par, ref) to express conditional, optional, repeated, parallel, or referenced interactions.
- alt: alternative paths (like if-else), each guarded by a Boolean condition in brackets.
- opt: optional block (like if with no else).
- loop: repeated execution, with optional min/max bounds: `loop(1, 5)`.
- par: parallel execution of contained interactions.
- ref: reference to another named sequence diagram.
5.3 State Machine Models
5.3.1 Basic State Machines
A finite state machine (FSM) consists of:
- A finite set of states \( Q \)
- An initial state \( q_0 \in Q \)
- An alphabet of events \( \Sigma \)
- A transition function \( \delta : Q \times \Sigma \to Q \)
- Optionally, a set of final states \( F \subseteq Q \)
UML state machines extend FSMs with guards, actions, and hierarchy.
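The formal definition above maps directly onto a small data structure. The sketch below renders \((Q, q_0, \Sigma, \delta, F)\) as a Python class; the turnstile states and events are invented for illustration:

```python
# Minimal FSM per the (Q, q0, Sigma, delta, F) definition.
class FSM:
    def __init__(self, delta, q0, finals=frozenset()):
        self.delta = delta           # dict: (state, event) -> next state
        self.state = q0              # current state, initially q0
        self.finals = set(finals)    # accepting states F

    def step(self, event):
        # delta here is partial: an unlisted (state, event) pair raises KeyError
        self.state = self.delta[(self.state, event)]
        return self.state

    def accepts(self, events):
        for e in events:
            self.step(e)
        return self.state in self.finals

turnstile = FSM({("locked", "coin"): "unlocked",
                 ("unlocked", "push"): "locked"},
                q0="locked", finals={"locked"})
```

Guards, actions, and hierarchy (the UML extensions) would attach extra data to each transition; the bare FSM is just this lookup table plus a current state.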
5.3.2 UML State Machine Notation
A UML state machine uses:
- States: rounded rectangles. Special pseudostates: filled circle for initial, bull’s-eye for final.
- Transitions: arrows labeled `event [guard] / action`. The guard is a Boolean condition in square brackets; the action is executed when the transition fires.
- Internal activities: within a state, three reserved labels: `entry /`, `exit /`, `do /`. Entry actions execute on entering the state; exit actions on leaving; do-activities are ongoing while in the state and are interrupted on exit.
5.3.3 Hierarchical States (State Hierarchy)
A composite state contains nested substates. When the system is in a substate, it is simultaneously in the enclosing composite state. This allows shared transitions: a transition from a composite state applies to all substates, capturing the “when in any of these substates, this event causes this effect” pattern without duplicating arrows.
A history pseudostate (H or H*) records the last active substate so that re-entering the composite state resumes where it left off. H records shallow history (remembers only the immediately enclosed substate); H* records deep history (recursively remembers the full nested state configuration).
5.3.4 Orthogonal Regions
A state with two or more orthogonal regions models concurrency: the system is simultaneously in one substate of each region. Orthogonal regions are separated by dashed lines within the composite state.
Orthogonal regions communicate through broadcast events: a transition in one region can raise an event that triggers a transition in another.
5.3.5 Communication and Activities
Transitions can:
- Send events to other state machines: `/ send Target.event(params)`
- Call operations on the containing object
- Assign values to attributes
Do-activities model long-running operations that proceed while in a state. They are interrupted (and their effects are rolled back if appropriate) when the state is exited. A completion transition (no event label) fires when the do-activity finishes normally.
5.3.6 Semantics: Run-to-Completion
UML state machines follow the run-to-completion (RTC) semantics: at any instant, exactly one event is being processed, and the transition sequence it triggers completes atomically before the next event is processed. This simplifies reasoning about state machine behavior by preventing interleaving of transition actions.
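The RTC rule can be sketched as a single dispatch loop with a deferred event queue. This is an illustrative simplification (flat states, no guards): an action fired during a transition may raise new events, but they are queued and processed only after the current run-to-completion step finishes, never re-entrantly:

```python
from collections import deque

# Sketch of run-to-completion event dispatch.
class StateMachine:
    def __init__(self, transitions, initial):
        self.transitions = transitions   # (state, event) -> (next_state, action)
        self.state = initial
        self.queue = deque()
        self.dispatching = False

    def raise_event(self, event):
        self.queue.append(event)
        if self.dispatching:
            return                       # defer: current RTC step must complete
        self.dispatching = True
        while self.queue:
            ev = self.queue.popleft()
            key = (self.state, ev)
            if key in self.transitions:  # unmatched events are silently dropped
                nxt, action = self.transitions[key]
                self.state = nxt
                if action:
                    action(self)         # may raise_event; it gets queued
        self.dispatching = False

# Hypothetical alarm example: arming raises a follow-up "armed" event.
machine = StateMachine(
    {("idle", "arm"): ("arming", lambda m: m.raise_event("armed")),
     ("arming", "armed"): ("armed", None)},
    initial="idle")
```

Because `raise_event` returns immediately while a dispatch is in progress, the transition triggered by "arm" completes before "armed" is processed, which is exactly the atomicity RTC guarantees.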
Chapter 6: Constraint Modelling
6.1 Temporal Logic
Temporal logic extends classical propositional or predicate logic with operators that talk about time. It is used in requirements engineering to express ordering constraints, safety properties, and liveness properties in a precise, analyzable notation.
6.1.1 Linear Temporal Logic (LTL)
LTL interprets formulas over infinite sequences of states (or time points). Time is linear: at each point, there is exactly one future.
Core LTL operators:
| Operator | Notation | Reading | Semantics |
|---|---|---|---|
| Next | \( \mathbf{X}\, \varphi \) | “next \(\varphi\)” | \(\varphi\) holds at the next time step |
| Globally | \( \mathbf{G}\, \varphi \) | “always \(\varphi\)” | \(\varphi\) holds at all future time steps (including now) |
| Finally | \( \mathbf{F}\, \varphi \) | “eventually \(\varphi\)” | \(\varphi\) holds at some future time step |
| Until | \( \varphi \,\mathbf{U}\, \psi \) | “\(\varphi\) until \(\psi\)” | \(\varphi\) holds continuously until \(\psi\) becomes true; \(\psi\) must eventually hold |
| Weak Until | \( \varphi \,\mathbf{W}\, \psi \) | “\(\varphi\) weak until \(\psi\)” | Like Until but \(\psi\) need not ever hold |
| Release | \( \varphi \,\mathbf{R}\, \psi \) | “\(\varphi\) releases \(\psi\)” | \(\psi\) holds until and including the point where \(\varphi\) holds (or forever if \(\varphi\) never holds) |
Safety properties assert that “nothing bad ever happens”:
\[ \mathbf{G}\,\neg(\text{doorOpen} \wedge \text{engineRunning}) \]
The door must never be open while the engine is running. Note that a formula such as \( \mathbf{G}\,(\text{request} \Rightarrow \mathbf{F}\,\text{grant}) \), “every request is eventually granted”, is a liveness property despite its \(\mathbf{G}\) prefix.
Liveness properties assert that “something good eventually happens”:
\[ \mathbf{G}\,(\text{submitted} \Rightarrow \mathbf{F}\,\text{acknowledged}) \]
Every submitted transaction is eventually acknowledged.
6.1.2 Computation Tree Logic (CTL)
CTL interprets formulas over computation trees: at each state, there may be multiple possible futures (branching time). Operators are quantified over paths:
- A (for All paths) and E (for some/Exists a path) are path quantifiers, always paired with a temporal operator.
| CTL Formula | Reading |
|---|---|
| \( \mathbf{AG}\,\varphi \) | On all paths, always \(\varphi\) |
| \( \mathbf{EG}\,\varphi \) | There exists a path on which \(\varphi\) always holds |
| \( \mathbf{AF}\,\varphi \) | On all paths, eventually \(\varphi\) |
| \( \mathbf{EF}\,\varphi \) | There exists a path on which \(\varphi\) eventually holds |
| \( \mathbf{AX}\,\varphi \) | On all paths, \(\varphi\) holds next |
| \( \mathbf{A}[\varphi\,\mathbf{U}\,\psi] \) | On all paths, \(\varphi\) until \(\psi\) |
6.2 Specification Patterns
Dwyer et al. catalogued reusable specification patterns for common types of requirements. Patterns are parameterized by a scope (the portion of execution over which the property must hold) and a property type.
Common scopes:
- Globally: over the entire execution.
- Before R: before the first occurrence of event R.
- After Q: after the first occurrence of event Q.
- Between Q and R: in every segment between consecutive occurrences of Q and R.
Common property types:
- Absence: P never occurs (within scope).
- Existence: P occurs at least once (within scope).
- Universality: P occurs throughout (within scope).
- Precedence: P must precede Q (within scope).
- Response: Q must follow P (within scope).
Informally: “Whenever a request is made, a response eventually follows.”
LTL: \(\mathbf{G}\,(\text{request} \Rightarrow \mathbf{F}\,\text{response})\)
Informally: “Between starting and stopping the motor, the safety switch must never activate.”
LTL: \(\mathbf{G}\,(\text{motorStart} \Rightarrow (\neg\text{safetySwitch}\ \mathbf{U}\ \text{motorStop}))\)
Chapter 7: Model Integration
7.1 Composition of Models
A complete requirements specification draws on several model types:
- Domain model (class diagram + OCL)
- Functional model (use cases + pre/post-conditions)
- Behavioural model (state machines + sequence diagrams)
- Constraint model (temporal logic)
These models must be mutually consistent. The class diagram defines the vocabulary (entity types and relationships) that all other models reference. State machines must reference states that make sense given the class diagram’s attribute types. Temporal logic formulas must predicate over events that appear in sequence diagrams. Pre/post-conditions must use attribute names that match the class diagram.
7.2 Consistency Checking
Inconsistencies among models are a major source of defects. Key checks include:
- Terminology consistency: every class, attribute, and event name must be used with the same meaning across all models.
- Structural consistency: multiplicity constraints in the class diagram must not be violated by object diagrams implicit in sequence diagrams.
- Behavioral consistency: state machine transitions must correspond to operations defined in the functional model.
- Constraint consistency: OCL invariants must be preserved by every operation’s post-condition.
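Some of these checks are mechanical. A toy example of the behavioral-consistency check: every event that triggers a state machine transition should name an operation defined in the functional model (the model contents below are invented for illustration):

```python
# Toy cross-model consistency check: which transition events lack a
# corresponding operation in the functional model?
def undefined_events(defined_operations, transition_events):
    return sorted(set(transition_events) - set(defined_operations))

issues = undefined_events({"withdraw", "deposit", "close"},
                          {"withdraw", "freeze"})
# "freeze" triggers a transition but names no defined operation
```

Real consistency checkers in modelling tools work the same way at heart: extract name sets and constraints from each model, then report mismatches.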
7.3 Feature Interactions
A feature interaction occurs when two features, each correct in isolation, produce incorrect behavior when combined. Feature interactions are a fundamental problem in telecommunications systems (where they were first studied) but appear in any sufficiently rich system.
Detection strategies include:
- Exhaustive pairwise analysis (feasible only for small feature sets).
- Model checking against temporal logic properties.
- Scenario-based testing: constructing scenarios that activate multiple features simultaneously.
Chapter 8: Requirements Elicitation
8.1 Stakeholders
Stakeholder analysis identifies:
- Who has authority to approve requirements.
- Who has domain knowledge that must be extracted.
- Who will be affected by the system’s deployment.
- Whose interests conflict with one another.
A stakeholder register documents each stakeholder’s role, interests, concerns, and influence level. This prevents the common error of designing for the most vocal stakeholder while ignoring silent but affected parties.
8.2 Sources of Requirements
Requirements originate from multiple sources:
- Stakeholder interviews and workshops: primary source of functional requirements.
- Observation and ethnography: watching users work reveals tacit knowledge and informal practices not captured in job descriptions.
- Document analysis: existing system documentation, business process models, regulatory documents, and competitor products.
- Prototyping: rapid prototypes resolve ambiguity by giving stakeholders something concrete to react to.
- Domain knowledge: textbooks, standards, subject-matter experts.
- Analogous systems: existing systems solving related problems.
8.3 Elicitation Strategies
8.3.1 Interviews
Structured interviews follow a prepared question list; unstructured interviews are open-ended. Semi-structured interviews balance preparation with flexibility. Effective interview techniques:
- Ask “what happens when…?” rather than “do you need…?” to elicit scenarios.
- Ask about exceptions and error conditions, which users often omit.
- Ask about frequency and volume (how many transactions per day? what is the longest acceptable response time?).
- Verify understanding by restating what was heard.
8.3.2 Workshops
Joint Application Development (JAD) workshops bring all stakeholders together to resolve disagreements in real time. A trained facilitator structures the discussion. Workshops compress weeks of interviews into days but require significant stakeholder time.
8.3.3 Observation and Protocol Analysis
Observation captures actual work practices, including informal adaptations that are never written down. Protocol analysis asks users to think aloud while working, making tacit reasoning explicit.
8.3.4 Questionnaires and Surveys
Useful for reaching large stakeholder populations to validate findings from interviews. Poor for discovering unknown requirements, because the questions must be formulated in advance: you cannot ask about what you have not yet discovered.
8.3.5 Use-Case Driven Elicitation
Start with a context diagram to identify actors, then ask: what goals does each actor pursue using the system? Each answer is a candidate use case. For each use case, elicit the main success scenario, then ask: what can go wrong? What alternative paths exist? This systematically drives toward complete coverage.
Chapter 9: Requirements Analysis
9.1 Requirements Triage and Prioritization
After elicitation, the requirements set is typically too large and too expensive to implement in full at once. Prioritization determines which requirements belong in which release or sprint.
9.1.1 MoSCoW Prioritization
The MoSCoW method classifies requirements into four categories:
- Must have: critical for the system to be viable; non-negotiable.
- Should have: important but not critical; included if possible.
- Could have: desirable; included only if time and budget permit.
- Won’t have (this time): explicitly deferred to a future release.
9.1.2 Analytic Hierarchy Process (AHP)
AHP, developed by Thomas Saaty, provides a mathematically grounded method for multi-criteria decision making under conflicting priorities. It is particularly useful when comparing requirements against multiple dimensions such as business value, implementation cost, and risk.
Step 1: Build a pairwise comparison matrix. For \(n\) requirements (or criteria), construct an \(n \times n\) matrix \(A\) where entry \(a_{ij}\) represents the importance of requirement \(i\) relative to requirement \(j\), on a scale from 1 (equally important) to 9 (requirement \(i\) is extremely more important). By definition, \(a_{ii} = 1\) and \(a_{ji} = 1/a_{ij}\).
For three requirements R1, R2, R3:
\[ A = \begin{pmatrix} 1 & 3 & 5 \\ 1/3 & 1 & 2 \\ 1/5 & 1/2 & 1 \end{pmatrix} \]
Step 2: Compute the priority vector. The priority (weight) of each requirement is approximated by normalizing each column and averaging across rows. More precisely, the priority vector \(\mathbf{w}\) is the principal eigenvector of \(A\), normalized so that entries sum to 1.
Step 3: Check Consistency Ratio. Human judgments are rarely perfectly consistent. The Consistency Index (CI) measures deviation from perfect consistency:
\[ CI = \frac{\lambda_{\max} - n}{n - 1} \]where \(\lambda_{\max}\) is the principal eigenvalue of \(A\) and \(n\) is the matrix dimension. The Consistency Ratio compares CI to a Random Index (RI) derived empirically:
\[ CR = \frac{CI}{RI} \]
Values of RI for small matrices: \(RI(1) = 0\), \(RI(2) = 0\), \(RI(3) = 0.58\), \(RI(4) = 0.90\), \(RI(5) = 1.12\).
A \(CR \leq 0.10\) is generally considered acceptable. If \(CR > 0.10\), the analyst should revisit the pairwise comparisons for inconsistencies.
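The whole AHP computation for the running example fits in a few lines. The sketch below uses the column-normalization approximation from Step 2 (rather than an exact eigenvector solve) and estimates \(\lambda_{\max}\) by averaging \((A\mathbf{w})_i / w_i\) over the rows:

```python
# AHP priority vector and consistency ratio for the 3x3 example matrix.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # random indices from the text

def ahp(A):
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    # Step 2: normalize each column, average across rows
    w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    # Step 3: estimate lambda_max, then CI and CR
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)
    return w, CI / RI[n]

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, CR = ahp(A)   # w is roughly (0.65, 0.23, 0.12); CR is far below 0.10
```

So R1 carries about 65% of the weight, and the judgments are comfortably consistent; if CR had exceeded 0.10, the pairwise comparisons would need revisiting.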
9.2 Cost-Benefit Analysis
Each candidate requirement can be evaluated against:
- Business value (benefit if implemented)
- Penalty for omission (cost if not implemented — missed market, regulatory fines)
- Implementation cost (development effort and infrastructure)
- Implementation risk (technical uncertainty, integration complexity)
A simple figure of merit ranks requirements by:
\[ \text{priority} = \frac{\text{value} + \text{penalty}}{\text{cost} + \text{risk}} \]where all quantities are assessed on a common scale (e.g., 1–9). This gives a dimensionless ratio that supports ranking even when different stakeholders weight the factors differently.
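Ranking by this figure of merit is a one-liner. In the sketch below each requirement is a tuple `(name, value, penalty, cost, risk)` on a shared 1-9 scale; the sample requirements and scores are invented for illustration:

```python
# Rank requirements by (value + penalty) / (cost + risk), highest first.
def prioritize(requirements):
    return sorted(requirements,
                  key=lambda r: (r[1] + r[2]) / (r[3] + r[4]),
                  reverse=True)

candidates = [("audit log", 4, 2, 6, 3),   # priority 6/9  = 0.67
              ("login",     9, 9, 3, 1),   # priority 18/4 = 4.50
              ("dark mode", 3, 1, 2, 1)]   # priority 4/3  = 1.33
```

Running `prioritize(candidates)` puts "login" first: high value and high penalty for omission, at low cost and risk.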
9.3 Conflicts and Negotiation
Requirements from different stakeholders often conflict. Types of conflict:
- Data conflicts: stakeholders have inconsistent facts about the domain.
- Interest conflicts: different stakeholders have genuinely opposed goals.
- Value conflicts: disagreement on priorities or trade-offs.
Conflict resolution strategies:
- Authoritative: a designated authority (product owner, sponsor) makes the call.
- Negotiation: stakeholders bargain; typically produces a compromise.
- Win-win: reframe the problem so that both parties’ core interests are satisfied without compromise.
The requirements analyst’s role in conflict resolution is to surface the conflict explicitly, articulate each party’s interests and constraints clearly, and facilitate a structured resolution — not to impose a solution.
9.4 Cost Estimation
9.4.1 COCOMO II
The Constructive Cost Model (COCOMO), originally developed by Barry Boehm and updated as COCOMO II, estimates software development effort as a function of size (measured in lines of code or function points) and a set of cost drivers.
The basic COCOMO II effort equation is:
\[ E = A \cdot S^B \cdot \prod_{i=1}^{n} EM_i \]where:
- \(E\) is effort in person-months
- \(S\) is the software size in thousands of lines of code (KSLOC)
- \(A\) is a calibration constant (typically around 2.94 in post-architecture mode)
- \(B\) is a scale exponent derived from five scale factors (precedentedness, development flexibility, architecture/risk resolution, team cohesion, process maturity); typical values range from 1.01 to 1.26
- \(EM_i\) are effort multipliers for cost drivers such as product reliability, database size, platform volatility, analyst capability, and tool use
Duration is then estimated as:
\[ D = C \cdot E^F \]where \(C \approx 3.67\) and \(F \approx 0.28\) are calibration constants. The implied team size is \(E / D\).
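The two equations chain together as follows. This is a sketch using the constants given above; the scale exponent and effort multipliers passed in below are illustrative inputs, not calibrated values for any real project:

```python
# COCOMO II effort, duration, and implied team size.
def cocomo2(ksloc, scale_exp, ems=(), A=2.94, C=3.67, F=0.28):
    effort = A * (ksloc ** scale_exp)            # E = A * S^B, person-months
    for em in ems:
        effort *= em                             # apply cost-driver multipliers
    duration = C * (effort ** F)                 # D = C * E^F, calendar months
    return effort, duration, effort / duration   # implied average team size

# Hypothetical 50 KSLOC project, B = 1.10, two effort multipliers.
effort, duration, team = cocomo2(50, scale_exp=1.10, ems=(1.10, 0.90))
```

For these inputs the model predicts roughly 215 person-months over about 16.5 months, implying a team of around 13, which shows why the nonlinear exponent matters: doubling size more than doubles effort.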
9.4.2 Function Point Analysis
Function Points (FPs) provide a machine-independent size measure based on the number and complexity of:
- External inputs (transactions entering the system)
- External outputs (transactions leaving the system)
- External inquiries (input-output pairs with no persistent data change)
- Internal logical files (user-identifiable groups of data maintained by the system)
- External interface files (data maintained by another application used by this system)
Each is rated Low, Average, or High complexity, assigned a weight, and summed to produce Unadjusted Function Points (UFP). A Value Adjustment Factor (VAF) based on 14 general system characteristics modifies UFP to produce Adjusted Function Points.
FPs are language-independent: to convert to SLOC, multiply by a language-specific backfiring ratio (e.g., approximately 53 SLOC per FP for Java, 128 for C).
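A UFP count is just a weighted sum. The sketch below uses the standard IFPUG complexity weights for the five component types; the component counts in the example are invented for illustration:

```python
# Unadjusted Function Points: weighted sum over component counts.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},    # external inputs
    "EO":  {"low": 4, "avg": 5,  "high": 7},    # external outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},    # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},   # internal logical files
    "EIF": {"low": 5, "avg": 7,  "high": 10},   # external interface files
}

def ufp(counts):
    """counts maps (component type, complexity) to how many were identified."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

total = ufp({("EI", "avg"): 10, ("EO", "high"): 4, ("ILF", "low"): 3})
# 10*4 + 4*7 + 3*7 = 89 UFP; at ~53 SLOC/FP for Java, roughly 4.7 KSLOC
```

The resulting UFP would then be scaled by the VAF, and the backfired KSLOC figure can feed directly into the COCOMO II size parameter.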
9.5 Risk Analysis
Risk analysis identifies threats to project success and quantifies their impact. A risk register records for each risk:
- Description
- Probability of occurrence (0–1)
- Impact if it occurs (e.g., days of delay or cost overrun)
- Risk exposure = probability × impact
- Mitigation strategy (avoid, reduce probability, reduce impact, accept, transfer)
Requirements-specific risks include: requirements volatility, undiscoverable tacit knowledge, unavailable key stakeholders, and conflicting regulatory constraints.
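A risk register of this shape is easy to represent directly; sorting by exposure makes the prioritization mechanical. The example risks below are hypothetical, chosen to echo the requirements-specific risks just listed.

```python
# Minimal sketch of a risk register: each risk carries a probability,
# an impact (here, days of schedule delay), and a mitigation tag;
# exposure = probability * impact ranks the register.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float  # 0-1
    impact_days: float  # schedule delay if the risk occurs
    mitigation: str     # avoid / reduce / accept / transfer

    @property
    def exposure(self):
        return self.probability * self.impact_days

register = [
    Risk("Requirements volatility in billing module", 0.6, 20, "reduce"),
    Risk("Key stakeholder unavailable during elicitation", 0.3, 15, "transfer"),
    Risk("Conflicting regulatory constraints", 0.1, 40, "avoid"),
]

# Rank by exposure so mitigation effort targets the biggest threats first.
for r in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{r.exposure:5.1f}  {r.description} [{r.mitigation}]")
```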
Chapter 10: Quality Requirements
10.1 Non-Functional Requirements
Non-functional requirements (NFRs) constrain how the system delivers its functions rather than what functions it delivers. They are sometimes called quality attributes or quality requirements.
10.2 ISO/IEC 25010 Quality Model
ISO 25010 (product quality model) organizes quality characteristics into eight top-level categories:
10.2.1 Functional Suitability
- Functional completeness: degree to which the set of functions covers all specified tasks and user objectives.
- Functional correctness: degree to which a product provides correct results with the needed degree of precision.
- Functional appropriateness: degree to which the functions facilitate the accomplishment of specified tasks and objectives.
10.2.2 Performance Efficiency
- Time behavior: response and processing times; throughput rates under stated conditions.
- Resource utilization: amounts and types of resources consumed (CPU, memory, network, disk).
- Capacity: degree to which the maximum limits of a product parameter meet requirements (e.g., maximum concurrent users, maximum database record count).
10.2.3 Compatibility
- Co-existence: ability to perform functions while sharing resources with other products.
- Interoperability: ability to exchange information and use exchanged information.
10.2.4 Usability
- Appropriateness recognizability: users can recognize whether the product is appropriate for their needs.
- Learnability: ease with which users can learn to use the product.
- Operability: ease with which users can operate and control the product.
- User error protection: degree to which the system protects users from making errors.
- Accessibility: ability to be used by people with disabilities.
10.2.5 Reliability
- Maturity: frequency of failures under normal operation.
- Availability: system is operational and accessible when required. Commonly specified as uptime percentage: 99.9% availability implies at most \(\approx 8.76\) hours downtime per year.
- Fault tolerance: ability to operate correctly despite hardware or software faults.
- Recoverability: ability to recover data and re-establish desired state after a failure.
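The availability arithmetic above (99.9% uptime implies roughly 8.76 hours of downtime per year) generalizes to any uptime percentage, which is handy when writing fit criteria for this attribute:

```python
# Convert an uptime percentage into a yearly downtime budget,
# checking the 99.9% -> ~8.76 h/year figure quoted above.

HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours_per_year(availability_pct):
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> {downtime_hours_per_year(pct):.2f} h/year downtime")
```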
10.2.6 Security
- Confidentiality: data is accessible only to those authorized.
- Integrity: the system prevents unauthorized modification.
- Non-repudiation: actions can be proved to have taken place.
- Accountability: actions of entities can be traced to those entities.
- Authenticity: the identity of a subject or resource can be proved.
10.2.7 Maintainability
- Modularity: composed of discrete components so that changes have minimal impact on other components.
- Reusability: assets can be used in more than one system.
- Analysability: effectiveness of diagnosing deficiencies or failures.
- Modifiability: degree to which the product can be modified without defects.
- Testability: effectiveness with which test criteria can be established and tests executed.
10.2.8 Portability
- Adaptability: can be effectively and efficiently adapted for different hardware, software, or usage environments.
- Installability: can be effectively installed or uninstalled.
- Replaceability: can replace another specified product for the same purpose.
10.3 Specifying Fitness Criteria
For each quality attribute, a fitness criterion (or fit criterion) is a testable statement of the form: “Given [measurement method], the observed [metric] shall be [comparison operator] [threshold] under [conditions].” For example: “Given a load test with 500 simulated concurrent users, the observed 95th-percentile response time shall be at most 2 seconds under normal operating conditions.”
Chapter 11: Validation and Verification
11.1 Definitions
Verification asks: are we building the system right? It checks an artifact against the artifacts upstream of it — for example, that the design satisfies the specification. Validation asks: are we building the right system? It checks the requirements themselves against actual stakeholder needs. A requirements document can pass verification-style checks (well-formed, internally consistent) and still fail validation because it specifies the wrong system.
11.2 Requirements Review Techniques
11.2.1 Inspections (Fagan Inspections)
A formal inspection is a structured, document-driven review process:
- Planning: select the review team (moderator, author, readers, recorder); distribute materials.
- Overview: author presents the document context to reviewers.
- Individual preparation: each reviewer reads the document and logs potential defects.
- Inspection meeting: reviewers present defects; moderator drives through the document; recorder logs accepted defects.
- Rework: author corrects defects.
- Follow-up: moderator verifies rework is satisfactory.
Inspections consistently find 60–90% of defects present at the time of review. The investment in preparation pays off because defects caught at this stage are far cheaper to fix than those found later.
11.2.2 Walkthrough
A less formal alternative: the author guides the team through the document. Less structured than an inspection but faster and useful early in the process when the document is still rough.
11.2.3 Prototyping for Validation
A prototype — whether paper-based (storyboard), low-fidelity (wireframe), or executable — gives stakeholders something concrete to evaluate. Prototyping surfaces misunderstandings that formal review misses because stakeholders may not recognize what they are looking for until they see it.
11.3 Model-Based Analysis
Formal notations enable automated analysis:
- Type checking and syntax checking: ensure that models are well-formed.
- Satisfiability checking: OCL constraints can be analyzed for satisfiability using theorem provers or constraint solvers; an unsatisfiable constraint is a certain error.
- Model checking: temporal logic properties (LTL, CTL) can be checked against finite-state models using tools such as SPIN (LTL) or NuSMV (CTL). The model checker either confirms the property or produces a counterexample trace.
- Consistency checking: automated comparison of models to detect contradictions.
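The model-checking idea can be illustrated in miniature. Exhaustive reachability search for a safety property is its simplest instance: explore every reachable state and report a counterexample trace to any “bad” state. The toy two-process mutual-exclusion model below is invented for illustration; tools like SPIN and NuSMV add full temporal-logic properties and far better state-space handling on top of this core.

```python
# Toy explicit-state safety check: BFS over a finite-state model,
# returning a counterexample trace to a "bad" state if one is reachable.

from collections import deque

def successors(state):
    """Two processes, each cycling idle -> trying -> critical -> idle.
    Entry to 'critical' is deliberately unguarded, so the mutual-
    exclusion violation is reachable."""
    nxt = {"idle": "trying", "trying": "critical", "critical": "idle"}
    p, q = state
    yield (nxt[p], q)
    yield (p, nxt[q])

def check_safety(initial, bad):
    """BFS the state graph; return a shortest counterexample or None."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if bad(s):
            trace = []
            while s is not None:
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None  # property holds in every reachable state

trace = check_safety(("idle", "idle"),
                     bad=lambda s: s == ("critical", "critical"))
print("Counterexample:", trace)
```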
11.4 Requirements Testing
Every requirement should be traceable to at least one acceptance test, and every acceptance test should trace back to at least one requirement. This bidirectional traceability matrix supports:
- Completeness checking (a requirement with no test is either untested or, worse, unverifiable as written).
- Impact analysis (when a requirement changes, find all affected tests).
- Coverage analysis (tests with no corresponding requirement may be testing undocumented behavior).
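All three checks fall out of the same trace data. A sketch, using hypothetical requirement and test IDs and a simple set of test-to-requirement links:

```python
# Bidirectional traceability checks over requirement/test links.
# IDs and links are hypothetical.

requirements = {"R1", "R2", "R3"}
tests = {"T1", "T2", "T3", "T4"}
covers = {("T1", "R1"), ("T2", "R1"), ("T3", "R2")}  # (test, requirement)

# Completeness: requirements no test traces to.
untested = requirements - {r for _, r in covers}
# Coverage: tests that trace to no requirement (undocumented behaviour?).
untraced = tests - {t for t, _ in covers}

def impacted_tests(req):
    """Impact analysis: tests to revisit when req changes."""
    return {t for t, r in covers if r == req}

print("Requirements with no test:", untested)
print("Tests with no requirement:", untraced)
print("Tests impacted by a change to R1:", impacted_tests("R1"))
```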
Chapter 12: Requirements Engineering for AI Systems
12.1 Distinctive Challenges
Traditional requirements engineering assumes that system behavior can be completely specified and that the specification can be verified against the implementation. AI/ML systems challenge both assumptions:
- Emergent behavior: the behavior of an ML model arises from training data and optimization, not from hand-coded rules. The model may produce correct outputs for most inputs while failing unpredictably on others.
- Non-determinism: some AI systems (e.g., large language models with temperature > 0) produce different outputs for the same input on different runs.
- Training data dependency: model behavior depends on the training distribution; requirements must address distribution shift.
- Explainability: stakeholders may require that the system explain or justify its outputs, which is difficult to specify precisely.
12.2 Requirements Dimensions for AI Systems
Requirements for AI systems span several dimensions beyond those covered by traditional quality models:
- Accuracy requirements: specify the model’s performance on representative test sets (e.g., “The classifier shall achieve at least 95% recall on the validation set”).
- Fairness requirements: the system shall not discriminate on protected attributes. This requires operationalizing fairness (demographic parity, equalized odds, individual fairness) and specifying acceptable thresholds.
- Robustness requirements: specify behavior under distribution shift, adversarial inputs, and out-of-distribution data.
- Transparency and explainability requirements: specify what level of explanation is required for decisions (feature attribution, contrastive explanation, confidence score).
- Data requirements: specify quality, representativeness, currency, and provenance of training data.
- Human oversight requirements: specify the conditions under which a human must be consulted or must override the system.
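To make the fairness dimension concrete, here is one way to operationalize demographic parity, the first of the fairness notions named above: compare positive-prediction rates across protected groups. The predictions and the 0.10 threshold mentioned in the comment are illustrative assumptions, not a recommended standard.

```python
# Demographic parity as a measurable fit criterion: the gap between
# the highest and lowest positive-prediction rates across groups.
# Data and threshold are illustrative.

def positive_rate(preds):
    """Fraction of positive (1) predictions in a list of 0/1 labels."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions for two protected groups.
preds = {"group_a": [1, 1, 0, 1, 0], "group_b": [1, 0, 0, 0, 0]}
gap = demographic_parity_gap(preds)
print(f"Parity gap: {gap:.2f}")  # a fit criterion might demand gap <= 0.10
```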
12.3 Validation Strategies
Validation for AI systems includes:
- Dataset validation: audit training and test datasets for bias, coverage, and data quality.
- Behavioral testing: test against curated test suites that probe known failure modes.
- Red teaming: adversarial testing by a team whose goal is to find failures.
- A/B testing: deploy to a subset of users and measure real-world outcomes.
- Continuous monitoring: monitor production predictions for distribution shift and model degradation.
Chapter 13: Software Project Management
13.1 Role of Requirements in Project Management
The requirements specification is the foundation on which project plans are built. Every estimate, schedule, and resource allocation depends on knowing what is to be built. Consequently:
- Baseline the requirements document before creating the project schedule.
- Track requirements changes through a formal change control process.
- Assess the impact of every proposed change on schedule, cost, and quality before approving it.
13.2 The SRS Document
A Software Requirements Specification (SRS) is the primary deliverable of requirements engineering. The IEEE 830-1998 recommended practice describes a standard structure:
- Introduction: purpose, scope, definitions, acronyms, references, overview.
- Overall Description: product perspective, product functions, user characteristics, constraints, assumptions and dependencies.
- Specific Requirements: functional requirements (organized by use case, feature, or mode), external interface requirements (user interfaces, hardware, software, communications), performance requirements, design constraints, software quality attributes.
- Appendices: supporting information (data dictionaries, analysis models).
- Index.
Key properties of a high-quality SRS:
- Correct: every stated requirement is one that the software should meet.
- Unambiguous: every requirement has exactly one interpretation.
- Complete: includes all significant requirements — functional, performance, design, quality.
- Consistent: no pair of requirements conflicts.
- Ranked: requirements are prioritized by importance or stability.
- Verifiable: there exists a finite, cost-effective process for checking whether the software meets every requirement.
- Modifiable: the structure and organization allow changes to be made easily and consistently.
- Traceable: each requirement is referenced to its origin and to the design/code elements that implement it.
13.3 Deliverable Structure (Course Context)
The course structures the SRS as a series of incremental deliverables (D0–D6):
- D0: Problem description and initial context diagram.
- D1: Domain model (class diagram, key domain assumptions).
- D2: Functional model (use-case diagram and selected use-case descriptions).
- D3: Behavioural model (state machines and/or sequence diagrams for key use cases).
- D4: Non-functional requirements with fitness criteria.
- D5: Complete integrated SRS draft with consistency analysis.
- D6: Final SRS incorporating buddy-team feedback and instructor review.
Buddy-team feedback (structured peer review using a checklist) is conducted between D5 and D6, providing an early approximation of a Fagan inspection.
Appendix A: OCL Quick Reference
| Syntax | Meaning |
|---|---|
| `context ClassName inv invName: expr` | Invariant on `ClassName` |
| `context ClassName::op(params) pre: expr` | Pre-condition of `op` |
| `context ClassName::op(params) post: expr` | Post-condition of `op` |
| `self` | The contextual object |
| `attr@pre` | Value of `attr` before the operation |
| `obj.assocEnd` | Navigate an association |
| `col->size()` | Size of collection |
| `col->select(v \| cond)` | Subcollection of elements satisfying `cond` |
| `col->collect(v \| expr)` | Collection of `expr` values, one per element |
| `col->forAll(v \| cond)` | True iff every element satisfies `cond` |
| `col->exists(v \| cond)` | True iff some element satisfies `cond` |
| `col->isEmpty()` | Empty test |
| `col->includes(obj)` | Membership test |
| `col->sum()` | Sum (numeric collections) |
| `Set{...}, Bag{...}, Sequence{...}` | Collection literals |
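OCL constraints map naturally onto runtime checks, which is one way to make them concrete during validation. A sketch, assuming a hypothetical `Account` class with an invariant `context Account inv nonNegative: self.balance >= 0`, and showing Python analogues of the collection operations in the table:

```python
# OCL constraints translated into Python runtime checks.
# The Account class and its invariant are hypothetical.

class Account:
    def __init__(self, balance):
        self.balance = balance
        # context Account inv nonNegative: self.balance >= 0
        assert self.invariant(), "inv nonNegative violated"

    def invariant(self):
        return self.balance >= 0

accounts = [Account(100), Account(0), Account(25)]

# col->forAll(a | a.balance >= 0)
assert all(a.balance >= 0 for a in accounts)
# col->select(a | a.balance > 0)->size()
assert len([a for a in accounts if a.balance > 0]) == 2
# col->collect(a | a.balance)->sum()
assert sum(a.balance for a in accounts) == 125
# col->exists(a | a.balance = 0)
assert any(a.balance == 0 for a in accounts)
print("All OCL-style checks passed")
```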
Appendix B: UML State Machine Quick Reference
| Element | Notation | Description |
|---|---|---|
| State | Rounded rectangle | A situation in which an object resides |
| Initial pseudostate | Filled circle | Starting point of the state machine |
| Final state | Bull’s-eye (circle in circle) | Terminal state |
| Transition | Labeled arrow | event [guard] / action |
| Entry action | entry / action inside state | Executed on every entry to this state |
| Exit action | exit / action inside state | Executed on every exit from this state |
| Do-activity | do / activity inside state | Ongoing while in state; interrupted on exit |
| Composite state | State containing other states | Supports hierarchy and history |
| Shallow history | H inside circle | Remembers most recent direct substate |
| Deep history | H* inside circle | Remembers most recent nested configuration |
| Orthogonal regions | Dashed dividing line inside state | Concurrent sub-state machines |
| Fork pseudostate | Filled bar with one incoming, multiple outgoing | Enters multiple regions simultaneously |
| Join pseudostate | Filled bar with multiple incoming, one outgoing | Waits for all regions before continuing |
Appendix C: AHP Worked Example
Suppose we have three requirements: R1 (user authentication), R2 (real-time notifications), R3 (data export). A product manager provides the following judgments:
- R1 is moderately more important than R2: \(a_{12} = 3\)
- R1 is strongly more important than R3: \(a_{13} = 5\)
- R2 is weakly more important than R3: \(a_{23} = 2\)
The pairwise comparison matrix:
\[ A = \begin{pmatrix} 1 & 3 & 5 \\ 1/3 & 1 & 2 \\ 1/5 & 1/2 & 1 \end{pmatrix} \]
Column sums: \(1 + 1/3 + 1/5 = 23/15 \approx 1.533\), \(3 + 1 + 1/2 = 4.5\), \(5 + 2 + 1 = 8\).
Normalize each column:
\[ A_{\text{norm}} \approx \begin{pmatrix} 0.652 & 0.667 & 0.625 \\ 0.217 & 0.222 & 0.250 \\ 0.130 & 0.111 & 0.125 \end{pmatrix} \]
Row averages (priority vector):
\[ \mathbf{w} \approx \begin{pmatrix} 0.648 \\ 0.230 \\ 0.122 \end{pmatrix} \]
Thus R1 accounts for about 65% of the total priority, R2 about 23%, and R3 about 12%.
Consistency check: Compute \(A\mathbf{w}\):
\[ A\mathbf{w} \approx \begin{pmatrix} 1 \cdot 0.648 + 3 \cdot 0.230 + 5 \cdot 0.122 \\ 0.333 \cdot 0.648 + 1 \cdot 0.230 + 2 \cdot 0.122 \\ 0.2 \cdot 0.648 + 0.5 \cdot 0.230 + 1 \cdot 0.122 \end{pmatrix} = \begin{pmatrix} 1.948 \\ 0.690 \\ 0.367 \end{pmatrix} \]
\[ \lambda_{\max} = \frac{1.948/0.648 + 0.690/0.230 + 0.367/0.122}{3} \approx \frac{3.006 + 3.000 + 3.008}{3} \approx 3.005 \]
\[ CI = \frac{3.005 - 3}{3 - 1} = \frac{0.005}{2} = 0.0025 \]
\[ CR = \frac{0.0025}{0.58} \approx 0.004 \]
Since \(CR \approx 0.004 \ll 0.10\), the judgments are highly consistent and the priority vector is reliable.
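The whole worked example can be reproduced numerically, which is a useful sanity check when applying AHP to your own pairwise judgments:

```python
# AHP for the three-requirement example: normalize columns, average
# rows to get the priority vector, then compute lambda_max, CI, and CR.

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
n = len(A)

# Normalize each column, then average across each row.
col_sums = [sum(row[j] for row in A) for j in range(n)]
w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

# Consistency: lambda_max from A*w, then CI and CR (RI = 0.58 for n = 3).
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58

print(f"w = {[round(x, 3) for x in w]}, "
      f"lambda_max = {lambda_max:.3f}, CR = {CR:.4f}")
```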