CS 445: Software Requirements Specification and Analysis
Byron Weber Becker
Estimated study time: 34 minutes
Sources and References
Primary textbook — Karl Wiegers and Joy Beatty, Software Requirements, 3rd ed., Microsoft Press, 2013. Supplementary texts — Steve McConnell, Rapid Development: Taming Wild Software Schedules, Microsoft Press, 1996; Alistair Cockburn, Writing Effective Use Cases, Addison-Wesley, 2001; Eric Ries, The Lean Startup, Crown Business, 2011. Online resources — IEEE 830 Software Requirements Specification standard; ACM SIGSOFT requirements engineering resources; Roman Pichler’s agile product management blog; Stanford HCI group user research methods.
Chapter 1: Foundations of Requirements Engineering
Requirements engineering sits at the boundary between human needs and executable code. Before any design decision is made, before a line is written, someone must answer a deceptively hard question: what should this software actually do? This chapter establishes vocabulary, exposes why that question is chronically answered badly, and frames the engineering process that addresses it.
What is a Software Requirement?
A requirement is a statement of what a system must do or a quality it must have, independent of how it achieves that. The IEEE defines it as “a condition or capability needed by a user to solve a problem or achieve an objective.” Requirements fall into three broad types:
| Type | Definition | Example |
|---|---|---|
| Functional | Observable behaviour — inputs, outputs, state transitions | “The system shall allow a registered user to reset their password via email.” |
| Non-functional (Quality Attribute) | How well the system performs its functions | “Password reset emails shall be delivered within 30 seconds under normal load.” |
| Constraint | External conditions the solution must satisfy | “The system shall be implemented in Java 17 and deploy on AWS.” |
The distinction matters because functional and non-functional requirements demand different elicitation, analysis, and testing strategies.
Why Software Projects Fail
The Standish Group’s CHAOS Report, tracking thousands of projects across multiple decades, consistently identifies requirements problems as the leading cause of failure: incomplete requirements, changing requirements, and lack of user involvement collectively account for over half of cancellations and overruns. The root cause is not that stakeholders are irrational — it is that software is unusually abstract and stakeholders cannot fully anticipate what they want until they see something working.
A crucial insight is the cost of late defect discovery. Empirical data from Barry Boehm and others shows that fixing a requirements defect discovered during testing costs roughly 10–100 times more than catching it during requirements analysis. This asymmetry justifies investing heavily at the front of the lifecycle.
Requirements vs. Specifications vs. Design
These three concepts are often conflated:
- Requirements capture what the system must do from the stakeholder’s perspective — the problem space.
- Specifications translate requirements into precise, verifiable statements — the boundary between problem and solution.
- Design describes how to build the system — the solution space.
A good requirements statement avoids implementation detail. “Users shall be able to find any product within three navigation steps” is a requirement. “The homepage shall contain a search bar that queries a PostgreSQL database” mixes requirement with design.
The Requirements Engineering Process
Requirements engineering is iterative, not a one-time waterfall activity:
- Elicitation — Discover what stakeholders need through interviews, observation, workshops, and analysis.
- Analysis — Resolve conflicts, prioritize, model, and assess feasibility.
- Specification — Document requirements in a form useful to developers and testers.
- Validation — Confirm that specifications accurately represent stakeholder needs.
- Management — Track, version, and manage change throughout the project lifecycle.
Plan-Driven vs. Agile Contexts
In plan-driven development (waterfall, V-model), requirements are elaborated extensively upfront into an IEEE 830 Software Requirements Specification document before design begins. This suits stable, safety-critical, or contractually fixed projects.
In agile contexts (Scrum, XP), requirements live as a product backlog: a ranked list of user stories grouped into epics. Stories are elaborated just-in-time as they near the top of the backlog. Sprint planning converts backlog items into sprint commitments. Neither approach eliminates requirements work — agile simply distributes it across time rather than front-loading it.
Stakeholders: Who They Are and Why They Matter
A stakeholder is anyone with a legitimate interest in the system. Ignoring even one class of stakeholder reliably produces surprises. Common categories include direct users, managers who direct users, system administrators, customers who pay, regulators who audit, and the development team itself. The voice of the customer principle holds that requirements should reflect the needs of those who use the system, not just those who pay for it — a distinction that matters when buyers and users are different people.
Chapter 2: Problem Fit and Business Requirements
Before deciding what features to build, a team must answer a more fundamental question: is there actually a problem worth solving? Premature commitment to a solution — the most common failure mode in product development — bypasses this question entirely. This chapter equips teams with tools to validate the problem before writing a single line of code.
The Lean Canvas Model
The Lean Canvas, adapted by Ash Maurya from Alexander Osterwalder’s Business Model Canvas, is a one-page framework that forces explicit articulation of every key assumption in a business or product idea. Its nine blocks are:
| Block | Key Question |
|---|---|
| Problem | What are the top 3 problems customers face? |
| Customer Segments | Who has this problem? Who are the early adopters? |
| Unique Value Proposition | Why should customers pick this solution? |
| Solution | What are the top 3 features addressing the problem? |
| Channels | How will customers find and use the product? |
| Revenue Streams | How does the product make money? |
| Cost Structure | What are the main costs to build and operate it? |
| Key Metrics | How will progress and success be measured? |
| Unfair Advantage | What can competitors not easily replicate? |
Each block is an assumption until validated by evidence. The canvas does not prescribe answers — it makes disagreements explicit and creates a shared starting point for the team.
Business Requirements
Business requirements express the organization’s objectives and the value the system must deliver at an executive level. They answer “why are we building this at all?” and typically include:
- Vision statement — the future state the organization aims to reach
- Business objectives — specific, measurable targets (increase retention by 15%, reduce processing time by 40%)
- Scope boundaries — what is in and out of the project
Business requirements cascade: every feature requirement should trace upward to at least one business objective. Requirements that trace to no objective are prime candidates for the cutting-room floor.
Hypothesis-Driven Development
Eric Ries’s Lean Startup framework reframes product development as a scientific process. Every assumption about customers, problems, and solutions is a hypothesis that can be confirmed or refuted by evidence. The Build-Measure-Learn loop:
\[ \text{Idea} \rightarrow \text{Build (MVP)} \rightarrow \text{Measure (Metrics)} \rightarrow \text{Learn (Pivot or Persevere)} \]

A Minimum Viable Product (MVP) is the smallest experiment that tests the riskiest hypothesis. It need not be software — a landing page, a concierge service, or a wizard-of-oz prototype can validate a hypothesis cheaply before engineering investment.
Problem Interviews
The most common way teams waste effort is building a solution to a problem that does not exist, or that customers tolerate rather than urgently need. Problem interviews are structured conversations designed to understand customer pain before proposing any solution. Key principles:
- Prepare a script with open-ended questions about current workflows and pain points
- Avoid leading questions — “Do you find X frustrating?” biases toward confirmation
- Listen for emotion — frequency of complaints matters less than intensity
- Do not pitch — the goal is discovery, not selling
- Laddering technique — ask “why” repeatedly to move from surface complaint to root cause
A well-run series of 15–20 problem interviews often invalidates the original idea and reveals a more tractable adjacent problem.
Customer Segmentation and Jobs-To-Be-Done
Not all users are equal. Customer segmentation identifies distinct groups with different needs and willingness to pay. Early adopters — customers with an acute pain and low tolerance for the status quo — are the highest-value targets for initial validation.
Clayton Christensen’s Jobs-To-Be-Done (JTBD) framework reframes customers not as demographic categories but as agents trying to make progress in a particular situation: “When I am [situation], I want to [motivation], so I can [expected outcome].” JTBD surfaces the deeper need that demographic segmentation misses.
Competitive Analysis and Success Metrics
Competitive analysis maps existing solutions to the customer’s problem, identifying their strengths, weaknesses, and the gaps that represent opportunity. The goal is not to copy competitors but to understand why a customer would switch.
Success metrics operationalize business objectives. OKRs (Objectives and Key Results) pair a qualitative objective with 2–5 quantitative key results:
- Objective: Reduce customer support burden
- Key Result 1: Self-service resolution rate rises from 30% to 55% by Q3
- Key Result 2: Mean time to resolution drops from 48 h to 12 h
Avoid vanity metrics — page views, downloads, registered users — that rise without indicating real value delivery. Prefer metrics that change only if customers are succeeding.
Chapter 3: Stakeholder Analysis and Elicitation
Knowing what to build requires talking to the right people in the right way. Requirements elicitation is not simply collecting a list of features from whoever is loudest — it is a disciplined, ethical inquiry into human needs, contexts, and goals. This chapter covers who to talk to, how to find them, and the techniques that surface requirements a direct interview alone would miss.
Stakeholder Identification
A stakeholder register is a living document listing every stakeholder class, their interest in the system, their influence level, and the elicitation approach appropriate for them. Typical categories include:
| Category | Description |
|---|---|
| Direct users | People who interact with the system daily |
| Indirect users / beneficiaries | People who benefit from outputs without operating the system |
| Product owner / sponsor | Decision-maker who funds and prioritizes |
| Subject-matter experts | Domain knowledge holders (accountants, clinicians, etc.) |
| System administrators | Those who install, configure, and maintain the system |
| Regulators | Compliance and audit authorities (GDPR, HIPAA, etc.) |
| Interfacing systems / vendors | External systems the software must integrate with |
| Negative stakeholders | Those who might oppose or be harmed by the system |
| Development team | Architects, developers, testers with technical constraints |
| Customer support | Front-line staff who handle user problems |
Missing any category creates blind spots that surface as expensive surprises during testing or deployment.
Elicitation Techniques
No single technique surfaces all requirements. A mature requirements engineer combines several:
- Interviews — one-on-one or small group; best for nuanced, personal, or sensitive information
- Workshops (JAD sessions) — structured group sessions to build shared understanding and resolve conflict quickly
- Observation / contextual inquiry — watching users in their natural environment reveals tacit knowledge they cannot articulate
- Questionnaires — efficient for large populations; poor for discovering unknown unknowns
- Document analysis — existing forms, manuals, and reports reveal implicit requirements
- Prototyping — a lo-fi mockup acts as a concrete communication medium; stakeholders react more precisely to something they can see than to abstract questions
- Focus groups — exploratory; useful for attitude and preference data but susceptible to groupthink
Requirements Interviews in Depth
Good interviewing is a skill developed through practice. Structure matters:
- Open questions first — “Walk me through a typical day” before “Do you need feature X?”
- Laddering — follow any stated need with “Why is that important to you?” to reach the underlying goal
- Silence is productive — resist filling pauses; interviewees often reveal more when given space
- Active listening — paraphrase back to confirm understanding and signal respect
Interview ethics (governed by TCPS2 at Canadian universities) requires: informed consent before recording, right to withdraw, anonymity if promised, and honest disclosure of how data will be used. Ethical violations damage trust and often invalidate data.
Personas
A persona is a fictional but evidence-grounded archetype representing a class of users. It synthesizes qualitative research into a vivid, memorable profile:
- Name, age, job title, and background
- Goals and motivations (what success looks like for them)
- Pain points and frustrations with current solutions
- Technical proficiency and tool preferences
- A representative quote that captures their mindset
Primary personas represent the main design target. Secondary personas must be accommodated without degrading the primary experience. Anti-personas define explicitly who the system is not for, preventing scope creep from edge cases.
Personas prevent the abstraction trap: designers and developers instinctively optimize for themselves unless reminded, concretely, who actually uses the system.
Contextual Inquiry
Contextual inquiry (Beyer and Holtzblatt) applies an apprentice-master relationship: the researcher observes the user performing real work and asks clarifying questions in context. This surfaces tacit knowledge — the routines, workarounds, and informal practices that users cannot describe in an abstract interview because they have never needed to.
Workshop Techniques
JAD (Joint Application Development) sessions bring sponsors, users, and developers into a structured multi-day workshop facilitated by a neutral moderator. Requirements are elicited, clarified, and baselined in real time, dramatically reducing the back-and-forth typical of sequential interviews.
Affinity mapping organizes large volumes of user observations into themes by writing each observation on a sticky note and clustering related notes. The clusters reveal patterns invisible in individual data points.
Brain-writing (silent brainstorming on paper) reduces groupthink by separating idea generation from evaluation: participants write ideas individually before sharing, preventing the first loud voice from anchoring the group.
Chapter 4: User Requirements — Use Cases and User Stories
Once stakeholders are understood, their needs must be captured in a form developers can build from and testers can verify against. Use cases and user stories are the two dominant paradigms. They are not alternatives — each has strengths, and the choice depends on project scale, formality, and team convention.
Use Cases
A use case describes a discrete interaction between an actor and the system that produces a result of value. Alistair Cockburn’s canonical template includes:
- Goal level — cloud (very high summary), kite (summary), sea (user goal), fish (subfunction), clam (too low)
- Primary actor — who initiates the use case
- Stakeholders and interests — who else is affected and why they care
- Preconditions — what must be true before the use case can begin
- Main success scenario — the happy path, numbered steps
- Extensions — alternate flows, errors, exceptions, each anchored to a main-scenario step
- Postconditions — guaranteed state upon successful completion
Extensions are where most real complexity lives. A use case with no extensions is probably incomplete.
Use Case Diagrams (UML)
UML use case diagrams show actors and use cases as an overview of system scope, not detailed behaviour. Key relationships:
- Include — one use case always invokes another (shared behaviour extracted for reuse)
- Extend — one use case optionally augments another at defined extension points
- Generalize — a child actor or use case inherits and specializes a parent
Use case diagrams are most useful for communicating scope to non-technical stakeholders. They are poor substitutes for textual use case descriptions — the diagram tells you what interactions exist, not how they unfold.
User Stories
User stories originate in Extreme Programming and are the dominant requirements format in Scrum. The canonical form is:
As a [role] I want [goal] so that [benefit].
The “so that” clause is often omitted but is the most important part — it anchors the story to user value and prevents teams from building technically correct but useless features.
INVEST criteria define a well-formed story:
| Letter | Criterion | Meaning |
|---|---|---|
| I | Independent | Can be developed and delivered without dependency on another story |
| N | Negotiable | Scope details are open to discussion; the card is a conversation starter |
| V | Valuable | Delivers value to a user or customer |
| E | Estimable | Team can estimate the effort with reasonable confidence |
| S | Small | Completable within a single sprint |
| T | Testable | Acceptance criteria can be written |
Acceptance criteria in Given-When-Then form make testability explicit:
Given a logged-in user on the checkout page, When they click “Place Order” with a valid cart, Then the order is confirmed and a confirmation email is sent within 60 seconds.
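The Given-When-Then criterion above can be turned directly into an automated acceptance test. The sketch below uses hypothetical `Cart`, `Order`, and `place_order` names as stand-ins for a real checkout module; the structure of the test, not the API, is the point.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the system under test.
@dataclass
class Cart:
    items: list = field(default_factory=list)

    def is_valid(self) -> bool:
        return len(self.items) > 0

@dataclass
class Order:
    confirmed: bool
    email_sent_within_seconds: int

def place_order(user_logged_in: bool, cart: Cart) -> Order:
    # Toy implementation so the test is runnable; a real system would
    # process payment and dispatch the confirmation email asynchronously.
    if not user_logged_in or not cart.is_valid():
        raise ValueError("order rejected")
    return Order(confirmed=True, email_sent_within_seconds=45)

def test_place_order_confirms_and_emails():
    # Given a logged-in user on the checkout page with a valid cart
    cart = Cart(items=["book"])
    # When they click "Place Order"
    order = place_order(user_logged_in=True, cart=cart)
    # Then the order is confirmed and the email is sent within 60 seconds
    assert order.confirmed
    assert order.email_sent_within_seconds <= 60

test_place_order_confirms_and_emails()
```

Each clause of the criterion maps to one section of the test, which is exactly the correspondence BDD tools such as Cucumber automate (Chapter 8).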
Epics and Story Mapping
An epic is a large user story too broad to fit in a single sprint. Epics are decomposed into stories before entering a sprint. User story mapping (Jeff Patton) arranges stories horizontally by user journey phase (discover, search, buy, receive) and vertically by priority, creating a visual product roadmap. Slicing horizontally across the map — taking one story from each journey phase — delivers a thin, end-to-end walking skeleton that can be demonstrated and validated early.
Brainstorming for Requirements
Brainstorming sessions for requirements follow the classic rules: defer judgment, maximize quantity, encourage wild ideas, and build on others. An effective session concludes with affinity sorting (grouping by theme) and dot voting (each participant allocates a fixed budget of votes to prioritize items). The facilitator’s job is to prevent premature convergence and ensure every participant’s voice reaches the wall.
Chapter 5: Models I — Lightweight and Structural Models
Textual requirements alone are insufficient for complex systems. Models compress information, expose relationships, and make inconsistencies visible that prose obscures. Lightweight models — workflow diagrams, quality attribute scenarios, context diagrams — are fast to produce and immediately useful without demanding formal notation expertise.
Workflow and Business Process Models
BPMN (Business Process Model and Notation) swimlane diagrams model the current or desired business process as a sequence of activities distributed across organizational roles (swimlanes). Each lane represents one actor or system. Key elements:
- Events (circles) — start, end, and intermediate triggers
- Tasks (rounded rectangles) — atomic units of work
- Gateways (diamonds) — branching (exclusive OR, parallel AND, inclusive OR)
- Sequence flows (solid arrows) — control flow within a pool
- Message flows (dashed arrows) — communication between pools
Swimlane diagrams expose hand-offs (the most error-prone points in any process), bottlenecks, and decision logic that stakeholders often leave implicit in interviews.
Quality Attributes (Non-Functional Requirements)
Quality attributes describe the system’s operational characteristics. The ISO 25010 product quality model organizes them into eight top-level characteristics:
| Characteristic | Representative Sub-characteristics |
|---|---|
| Functional suitability | Correctness, completeness, appropriateness |
| Performance efficiency | Time behaviour, resource utilisation, capacity |
| Compatibility | Co-existence, interoperability |
| Usability | Learnability, operability, accessibility |
| Reliability | Maturity, fault tolerance, recoverability |
| Security | Confidentiality, integrity, authenticity |
| Maintainability | Modularity, reusability, analysability, modifiability |
| Portability | Adaptability, installability, replaceability |
Quality attributes are notoriously difficult to specify well. Vague statements like “the system shall be fast” are not requirements — they are wishes. The NFR framework requires quantitative thresholds anchored to realistic scenarios.
Quality Attribute Scenarios
A quality attribute scenario specifies a measurable requirement in stimulus-response form:
- Source — who or what triggers the stimulus
- Stimulus — the specific event or condition
- Environment — the system state at the time of stimulus (normal load, degraded mode, etc.)
- Artifact — the part of the system affected
- Response — the system’s observable reaction
- Response measure — a quantitative threshold (latency ≤ 200 ms at the 95th percentile under 500 concurrent users)
Without the response measure, the scenario is a description, not a requirement.
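A response measure is only useful if it can be checked mechanically. As a minimal sketch (the latency samples are invented, and the nearest-rank percentile is one of several common definitions), the 200 ms / 95th-percentile threshold above could be verified like this:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(len(ordered) * p / 100)  # 1-based nearest rank
    return ordered[rank - 1]

# Invented measurements from a hypothetical load test at 500 concurrent users.
latencies_ms = [120, 130, 145, 150, 160, 170, 180, 190, 195, 198]

p95 = percentile(latencies_ms, 95)
assert p95 <= 200, f"quality attribute scenario violated: p95 = {p95} ms"
print(f"p95 latency: {p95} ms")  # p95 latency: 198 ms
```

In practice this check would run inside a load-testing or monitoring tool, but the principle is the same: the response measure becomes an executable pass/fail condition.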
Context Diagrams and Data Dictionaries
A context diagram (Level 0 DFD) treats the system as a black box and shows only the external entities that interact with it and the data flows across the system boundary. It is the fastest way to establish and communicate scope: anything inside the boundary is the system’s responsibility; anything outside is an external constraint.
A data dictionary defines every data element appearing in requirements, models, and interfaces: name, type, format, allowable values, and owner. It is the single source of truth for shared vocabulary, preventing the synonyms and homonyms that cause integration bugs.
Chapter 6: Requirements Analysis — Prioritization and Risk
Having discovered requirements, a team faces a universal constraint: more has been identified than can be built. Analysis is the discipline of deciding what matters most, catching contradictions early, and understanding where the project is most likely to go wrong.
Requirements Prioritization
Prioritization aligns the backlog with business value under resource constraints. Several frameworks exist:
MoSCoW categorizes each requirement:
- Must — the release is a failure without this
- Should — high value, to be included if at all possible
- Could — desirable but not critical; first to be deferred
- Won’t (this time) — explicitly deferred, not forgotten
Kano Model distinguishes requirement types by their effect on satisfaction:
- Basic needs (dissatisfiers) — expected; their absence causes dissatisfaction but their presence goes unnoticed
- Performance needs (satisfiers) — more is better; proportional satisfaction
- Excitement needs (delighters) — unexpected; their presence creates delight, their absence is not missed
Understanding the Kano type helps teams invest: basic needs must reach a threshold, delighters can be minimal but impactful.
Weighted criteria matrices score requirements against multiple criteria (business value, technical risk, cost, strategic fit) with weights assigned by stakeholders, producing a ranked list less susceptible to the loudest voice in the room.
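A weighted criteria matrix is simple enough to sketch in a few lines. All requirement names, criteria, weights, and scores below are invented for illustration; note the convention that risk and cost are scored so that higher always means better, keeping the weighted total monotone.

```python
# Stakeholder-assigned weights (sum to 1.0).
weights = {"business_value": 0.4, "technical_risk": 0.2, "cost": 0.2, "strategic_fit": 0.2}

# Scores on a 1-5 scale; for risk and cost, higher = less risky / cheaper.
scores = {
    "Password reset":  {"business_value": 5, "technical_risk": 4, "cost": 4, "strategic_fit": 3},
    "Dark mode":       {"business_value": 2, "technical_risk": 5, "cost": 5, "strategic_fit": 1},
    "Fraud detection": {"business_value": 4, "technical_risk": 2, "cost": 2, "strategic_fit": 5},
}

def weighted_score(req: str) -> float:
    return sum(weights[c] * scores[req][c] for c in weights)

ranked = sorted(scores, key=weighted_score, reverse=True)
for req in ranked:
    print(f"{req}: {weighted_score(req):.2f}")
```

The ranked output gives the team a defensible starting order; the weights themselves remain a stakeholder negotiation, which is where the real prioritization conversation happens.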
Scope Management
Product scope is the set of features and behaviours the system will have. Project scope is the work required to deliver it. Both must be explicitly bounded; growth in either without a corresponding plan adjustment is scope creep.
Requirements triage applies a medical metaphor: some requirements are critical, some can wait, some should never be built. Deferring to the backlog is not failure — it is discipline.
Risk Analysis
A risk is an uncertain event that, if it occurs, would affect project or product outcomes. Risk management in requirements engineering addresses:
\[ \text{Risk Exposure} = P(\text{occurrence}) \times \text{Impact} \]

A risk register records each risk, its probability, impact, trigger conditions, mitigation strategy, and owner. Requirements-specific risks include:
| Risk Type | Description |
|---|---|
| Ambiguity | Requirements open to multiple interpretations |
| Volatility | Requirements likely to change during development |
| Conflict | Stakeholders hold incompatible needs |
| Infeasibility | Requirements not technically or economically achievable |
| Missing requirements | Stakeholder needs not yet surfaced |
Early risk identification informs elicitation priority: high-risk areas deserve deeper investigation before commitment.
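The exposure formula gives a simple ranking over a risk register. The risks, probabilities, and impact scores below are invented for illustration:

```python
# A toy risk register; impact is on a 1-10 scale.
risks = [
    {"name": "Ambiguous payment requirements",       "p": 0.6, "impact": 8},
    {"name": "Regulator changes data-retention rules", "p": 0.2, "impact": 9},
    {"name": "Key SME unavailable during elicitation", "p": 0.5, "impact": 4},
]

# Exposure = P(occurrence) x Impact, as in the formula above.
for r in risks:
    r["exposure"] = r["p"] * r["impact"]

for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["name"]}: exposure {r["exposure"]:.1f}')
```

Note how the ranking can differ from intuition: the high-impact regulatory risk lands last because its probability is low, while the moderately likely ambiguity risk dominates.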
Requirements Traceability
Traceability is the ability to follow a requirement through the lifecycle: from stakeholder goal → requirement → design element → test case, and back. A traceability matrix maps these links. Forward traceability supports impact analysis (if a requirement changes, what design and test artefacts must change?). Backward traceability supports coverage analysis (is every test case justified by a requirement?).
Without traceability, requirements and tests gradually drift apart, and gold-plating (building features that trace to no requirement) silently consumes project budget.
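Both traceability directions reduce to set operations over the matrix. A minimal sketch, with invented requirement and test-case IDs:

```python
# A toy traceability matrix: requirement -> test cases.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],               # no test yet: a coverage gap
}
all_tests = {"TC-1", "TC-2", "TC-3", "TC-4"}

# Forward traceability / impact analysis:
# if REQ-1 changes, which tests must be revisited?
impacted = req_to_tests["REQ-1"]

# Backward traceability / coverage analysis:
untested_reqs = [r for r, tests in req_to_tests.items() if not tests]
traced_tests = {t for tests in req_to_tests.values() for t in tests}
orphan_tests = all_tests - traced_tests   # tests justified by no requirement

print("impacted by REQ-1 change:", impacted)
print("requirements without tests:", untested_reqs)
print("tests tracing to no requirement:", sorted(orphan_tests))
```

Commercial requirements-management tools maintain exactly these links (plus links to goals and design elements) at scale; the queries stay the same.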
Chapter 7: Elaboration — Interface Specifications and Domain Models
Analysis produces a prioritized, consistent set of requirements. Elaboration adds the precision developers need: exactly what data crosses system boundaries, exactly what domain concepts the software must represent, and exactly what rules constrain behaviour.
Interface Phenomena and Specifications
Every system has interfaces — boundaries across which information flows. Interface types include:
- User interfaces — screens, forms, reports, notifications
- Application Programming Interfaces (APIs) — method signatures, REST endpoints, data formats, error codes
- Hardware interfaces — sensors, actuators, communication protocols
- Communication interfaces — network protocols, message formats, encryption requirements
An interface specification for an API endpoint should include: HTTP method and URL pattern, request headers and body schema, response body schema for each status code, authentication mechanism, rate limits, and error catalogue. Ambiguity here translates directly into integration bugs discovered late.
Scenario Elaboration
A use case’s main success scenario is a skeleton. Scenario elaboration fills in the extensions: what happens when the network is unavailable? When the user enters invalid data? When two users attempt the same action concurrently?
Business rules are declarative constraints on system behaviour, distinct from the procedural logic in scenarios:
| Rule Type | Example |
|---|---|
| Computation rule | “Shipping cost = 0.05 × order weight (kg) + 2.00” |
| Constraint rule | “A user may not place an order exceeding their credit limit” |
| Action enabler | “If balance < $0, suspend withdrawals” |
| Inference rule | “If customer has placed > 10 orders, classify as ‘loyal’” |
Business rules belong in a rule catalogue, not buried in scenarios, because they change independently of the processes that enforce them.
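One payoff of keeping rules in a catalogue is that each can be implemented as a small, independently changeable function. A sketch of the example rules from the table above (function names are illustrative):

```python
def shipping_cost(order_weight_kg: float) -> float:
    # Computation rule: shipping cost = 0.05 x order weight (kg) + 2.00
    return 0.05 * order_weight_kg + 2.00

def may_place_order(order_total: float, credit_limit: float) -> bool:
    # Constraint rule: an order may not exceed the customer's credit limit
    return order_total <= credit_limit

def classify_customer(order_count: int) -> str:
    # Inference rule: more than 10 orders classifies the customer as loyal
    return "loyal" if order_count > 10 else "standard"

print(shipping_cost(10))  # 2.5
```

When the shipping tariff changes, only `shipping_cost` changes; the checkout scenario that invokes it is untouched — which is precisely why the rules are catalogued separately.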
Domain Models
A domain model is a UML class diagram used as a conceptual vocabulary, not a database or object design. It identifies the key entities in the problem domain, their attributes, and the associations between them with multiplicities. For example, an e-commerce domain model might show that one Customer places zero-to-many Orders, each Order contains one-to-many OrderLines, and each OrderLine references exactly one Product.
Domain models prevent requirements from using the same word to mean different things, or different words to mean the same thing. Shared domain vocabulary — ubiquitous language in Domain-Driven Design — is a prerequisite for consistent requirements.
Specification Quality Criteria
A well-formed requirement must be:
| Criterion | Meaning |
|---|---|
| Correct | Accurately represents a stakeholder need |
| Complete | Covers all conditions (inputs, outputs, errors) |
| Consistent | Does not contradict another requirement |
| Unambiguous | Has exactly one interpretation |
| Verifiable | A test can determine whether it is satisfied |
| Feasible | Achievable within technical and budget constraints |
| Necessary | Traces to a real stakeholder need or business objective |
| Traceable | Can be linked to its source and to design/test artefacts |
Reviewing requirements against this checklist before baselining is one of the highest-return activities in requirements engineering.
Chapter 8: Prototyping and Behaviour Modelling
Some requirements cannot be precisely specified by analysis alone — they must be discovered through stakeholder reaction to a concrete artefact. Prototyping compresses feedback cycles by making ideas tangible before committing engineering effort. Behaviour modelling complements this by capturing time-dependent and event-driven aspects of requirements that static models cannot represent.
Prototyping
The purpose of a prototype is to learn, not to ship. Two orthogonal dimensions define prototype types:
- Horizontal vs. vertical — horizontal covers breadth (many features, no depth); vertical covers depth (one feature end-to-end, fully functional)
- Throwaway vs. evolutionary — throwaway prototypes are discarded after learning; evolutionary prototypes are refined into the final product
The risk of evolutionary prototyping is that code built for speed of feedback accumulates technical debt that is never repaid. Throwaway prototypes are safer for requirements purposes.
UI Sketches and Wireframes
Paper prototyping — drawing screens on paper and simulating interactions manually — is the fastest feedback mechanism available. Participants are far more willing to critique a sketch than polished software, enabling frank feedback. Lo-fi wireframes add fidelity without committing to visual design; hi-fi mockups approach final appearance and risk anchoring stakeholder attention on aesthetics rather than functionality.
A key principle: avoid premature aesthetics. Font choices and colour palettes are irrelevant when the interaction model is still uncertain.
Solution-Fit Hypothesis
After problem interviews validate that a problem exists, the proposed solution is itself a hypothesis. Two lightweight validation approaches before engineering:
- Concierge MVP — a human manually performs the service the software will eventually automate, to validate that customers want the outcome before automating delivery
- Wizard-of-Oz prototype — a human operates the system “behind the curtain” while users interact with a realistic front-end interface, simulating intelligent behaviour that does not yet exist
Both approaches validate demand at near-zero engineering cost.
Navigation Maps
A navigation map (screen flow diagram) models all screens or pages in a user interface and the transitions between them triggered by user actions. It exposes dead ends (screens with no forward navigation), unreachable states (screens that no navigation path reaches), and overly deep paths (interactions requiring too many steps). Navigation maps are requirements artefacts, not design artefacts — they specify the interaction structure the system must support.
State Machine Models
Finite state machines (FSMs) model systems where behaviour depends on history, not just current input. A UML statechart includes:
- States — distinct modes of the system (Idle, Authenticated, Locked, Processing)
- Transitions — arrows labelled with `event [guard] / action`
- Guards — boolean conditions that must hold for a transition to fire
- Actions — behaviours executed on entry, exit, or during a transition
State machines are appropriate for reactive and event-driven requirements: ATM workflows, login/lockout logic, traffic light controllers, order lifecycle management. A worked example — a login flow with states Unauthenticated, Authenticated, and LockedOut and transitions triggered by valid credentials, invalid credentials, and timeout — makes it clear when transitions are permitted and what the system does in each state.
\[ \delta : S \times \Sigma \rightarrow S \]where \( S \) is the set of states, \( \Sigma \) is the input alphabet (events), and \( \delta \) is the transition function. In practice, UML statecharts extend this with hierarchy, concurrency, and actions.
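The login example above can be sketched as a small state machine in Python. The three-strike lockout threshold and the event names are illustrative assumptions, not part of any specification:

```python
class LoginStateMachine:
    """Minimal statechart for the login flow described above.

    The three-strike lockout threshold is an illustrative assumption.
    """
    MAX_ATTEMPTS = 3

    def __init__(self):
        self.state = "Unauthenticated"
        self.failed = 0

    def on_event(self, event):
        if self.state == "Unauthenticated":
            if event == "valid_credentials":
                self.failed = 0                       # action: reset counter
                self.state = "Authenticated"
            elif event == "invalid_credentials":
                self.failed += 1
                if self.failed >= self.MAX_ATTEMPTS:  # guard on the transition
                    self.state = "LockedOut"
        elif self.state == "Authenticated":
            if event == "timeout":
                self.state = "Unauthenticated"
        elif self.state == "LockedOut":
            pass  # absorb all events until an out-of-band unlock
        return self.state
```

Each branch corresponds to one arrow in the statechart; events that match no branch are simply ignored in that state, which is itself a requirements decision worth making explicit.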
Behaviour-Driven Development (BDD)
In agile contexts, BDD (Behaviour-Driven Development) connects requirements to automated tests by writing specifications in Gherkin — structured natural language that tools like Cucumber can parse into executable tests. The Given-When-Then format used in acceptance criteria maps directly to Gherkin scenarios:
```gherkin
Scenario: Successful login
  Given a registered user with username "alice" and correct password
  When they submit the login form
  Then they are redirected to the dashboard
  And a welcome message displays their first name
```
BDD blurs the boundary between specification and test, ensuring that requirements are validated continuously rather than reviewed once and forgotten.
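The binding between Gherkin lines and executable code can be illustrated with a toy step registry. Real tools (Cucumber, behave, pytest-bdd) perform the same pattern matching with far richer features; the step patterns and login logic here are invented purely for illustration:

```python
import re

# Toy step registry: each step function is bound to a regex, mimicking
# how BDD frameworks match Gherkin lines to step definitions.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'a registered user with username "(\w+)"')
def given_user(ctx, name):
    ctx["user"] = name

@step(r"they submit the login form")
def when_submit(ctx):
    ctx["page"] = "dashboard" if ctx.get("user") else "login"

@step(r"they are redirected to the (\w+)")
def then_redirect(ctx, page):
    assert ctx["page"] == page

def run_scenario(lines):
    """Execute each Gherkin line against the first matching step."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            m = pattern.search(line)
            if m:
                fn(ctx, *m.groups())
                break
    return ctx

ctx = run_scenario([
    'Given a registered user with username "alice"',
    "When they submit the login form",
    "Then they are redirected to the dashboard",
])
```

The Then step is an assertion, which is the whole point: a failing scenario is a requirement the system no longer satisfies.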
Chapter 9: The Software Requirements Specification and Cost Estimation
All requirements work culminates in two deliverables: a document (or artefact set) that communicates what must be built, and an estimate of what it will cost. This chapter covers the structure and quality of the SRS, and the quantitative models used to estimate effort.
The Software Requirements Specification (SRS)
The IEEE 830 / ISO 29148 SRS standard defines the canonical structure:
- Introduction — purpose, scope, definitions/acronyms, references, overview
- Overall description — product perspective, product functions summary, user characteristics, constraints, assumptions
- Specific requirements — functional requirements (organized by feature, use case, or stimulus-response); quality attribute requirements; interface requirements; design constraints
- Appendices — data dictionary, TBD list, index
The SRS represents a contract between stakeholders and developers: it is the basis for design, test planning, and change management. Formal SRS documents suit large, safety-critical, or externally contracted systems. Agile teams with co-located stakeholders may achieve equivalent communication through a maintained backlog, personas, and living documentation.
Writing Good Requirements
Well-written requirements are precise without being over-constrained. Common guidelines:
- Use active voice and identify the subject: “The system shall…” not “It shall be possible to…”
- One requirement per statement — compound requirements with “and” obscure scope and make testing ambiguous
- Be specific and measurable: “within 3 seconds” not “quickly”
- Avoid implementation detail in requirements that belong in design
- Use agreed-upon terminology from the domain model and data dictionary
Requirements smells signal likely problems:
| Smell | Example | Problem |
|---|---|---|
| AND-requirement | “The system shall store and display user profiles” | Two requirements disguised as one |
| Subjective wording | “The UI shall be intuitive” | Not verifiable |
| Unmeasurable quality | “The system shall be reliable” | No threshold given |
| Implied design | “The system shall use a relational database” | Constraint smuggled as requirement |
| Passive voice ambiguity | “Errors shall be logged” | Who logs? When? Where? |
Requirements Reviews and Inspections
Fagan Inspection is a formal defect detection process: a trained moderator leads a team through a requirements document using a prepared checklist, each reviewer having pre-read the material. Roles include author, moderator, reader, and recorder. Defects are classified by type (ambiguity, missing, incorrect, inconsistent) and severity, then tracked to resolution.
Structured walkthroughs are less formal: the author presents the document to a small group who raise questions and concerns in real time. Walkthroughs are faster than inspections but detect fewer defects per hour of review time.
Both methods are empirically among the most cost-effective defect removal activities available.
Requirements Validation
Validation confirms that specifications represent real stakeholder needs:
- Prototyping — stakeholders react to a concrete artefact, surfacing misunderstandings
- Test-case derivation — if a requirement cannot generate a test case, it is likely incomplete or ambiguous
- Model-checking — formal methods tools verify consistency and completeness of formal specifications
- Stakeholder sign-off — structured review and approval; in regulated industries, this is a contractual milestone
Cost Estimation
Function Point Analysis
Function point analysis (FPA) estimates system size by counting logical units of functionality independent of implementation language:
| Component | Description |
|---|---|
| External Inputs (EI) | User data inputs to the system |
| External Outputs (EO) | Data outputs from the system |
| External Queries (EQ) | Input-output pairs with no internal data update |
| Internal Logical Files (ILF) | Data maintained by the system |
| External Interface Files (EIF) | Data maintained by other systems, referenced by this one |
Each component is rated Low/Average/High complexity and assigned a weight. Raw function points are adjusted by a Value Adjustment Factor based on 14 general system characteristics. Function points convert to lines of code via a language-specific gearing factor, enabling COCOMO estimation.
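The counting procedure can be sketched in Python using the standard IFPUG complexity weights. The component counts and the uniform GSC ratings below are hypothetical:

```python
# Standard IFPUG weights (Low, Average, High) per component type.
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts: dict like {"EI": (n_low, n_avg, n_high), ...}."""
    return sum(
        n * w
        for comp, per_level in counts.items()
        for n, w in zip(per_level, WEIGHTS[comp])
    )

def adjusted_fp(ufp, gsc_ratings):
    """Apply the Value Adjustment Factor from 14 GSC ratings (0-5 each)."""
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

# Hypothetical small system:
counts = {"EI": (4, 2, 0), "EO": (3, 1, 0), "EQ": (2, 0, 0),
          "ILF": (0, 2, 0), "EIF": (1, 0, 0)}
ufp = unadjusted_fp(counts)      # 20 + 17 + 6 + 20 + 5 = 68
fp = adjusted_fp(ufp, [3] * 14)  # VAF = 0.65 + 0.42 = 1.07
```

Note the VAF formula bounds the adjustment to the range 0.65–1.35, so general system characteristics can move the raw count by at most ±35%.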
COCOMO II
The Constructive Cost Model (COCOMO II) estimates effort in person-months from project size in KSLOC (thousands of source lines of code) or function points:
\[ \text{Effort} = A \cdot (\text{Size})^{E} \cdot \prod_{i=1}^{n} EM_i \]where \( A \) is a calibration constant, \( E \) is a scaling exponent derived from five scale factors (precedentedness, development flexibility, architecture/risk resolution, team cohesion, process maturity), and \( EM_i \) are 17 effort multipliers covering product, platform, personnel, and project factors.
COCOMO II is a parametric model — its accuracy depends on calibration to organizational historical data. Used naively with default parameters, estimates can be off by a factor of two or more.
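A sketch of the effort equation in Python, using the published COCOMO II.2000 calibration defaults (A = 2.94, B = 0.91) and nominal scale-factor ratings. As the text warns, these defaults are illustrative and no substitute for calibration against organizational history:

```python
def cocomo_effort(ksloc, scale_factors, effort_multipliers,
                  A=2.94, B=0.91):
    """COCOMO II effort in person-months.

    A and B are the COCOMO II.2000 defaults; scale_factors are the five
    ratings (PREC, FLEX, RESL, TEAM, PMAT) and effort_multipliers the
    17 cost drivers.
    """
    E = B + 0.01 * sum(scale_factors)  # scaling exponent
    effort = A * ksloc ** E
    for em in effort_multipliers:
        effort *= em
    return effort

# A 50 KSLOC project with nominal scale factors and all multipliers
# at 1.0 (i.e., every cost driver rated "nominal"):
pm = cocomo_effort(ksloc=50,
                   scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                   effort_multipliers=[1.0] * 17)
```

Because E exceeds 1 whenever the scale factors sum to more than 9, effort grows superlinearly with size — doubling the code more than doubles the person-months, which is the diseconomy of scale the model encodes.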
Agile Estimation
Agile teams estimate in story points — a relative unit of effort that captures size, complexity, and uncertainty together. Planning poker is the standard technique: team members simultaneously reveal their estimates (using a Fibonacci-scaled deck: 1, 2, 3, 5, 8, 13, 20, …), discuss outliers, and converge. Over several sprints, a team’s average story points per sprint establishes velocity, enabling release forecasting:
\[ \text{Sprints to release} = \frac{\text{Total story points remaining}}{\text{Velocity}} \]Story point estimates are meaningful only relative to the team that produces them — they cannot be compared across teams or used to justify headcount.
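The forecast is a one-line calculation, though rounding up matters: a partial sprint still occupies the calendar. A minimal sketch with hypothetical numbers:

```python
import math

def sprints_to_release(remaining_points, velocity):
    """Forecast remaining sprints from backlog size and team velocity.

    Round up: a fraction of a sprint still takes a whole sprint slot.
    """
    return math.ceil(remaining_points / velocity)

# A team averaging 32 points/sprint with 180 points remaining:
print(sprints_to_release(180, 32))  # → 6
```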
The Cost of Requirement Changes
McConnell’s data (from Rapid Development and related studies) quantifies the escalating cost of requirement changes over the project lifecycle. A requirement change that costs 1 unit during requirements analysis costs approximately 5–10 units during design, 10–20 during implementation, and 20–100 during testing or after deployment. This multiplier is the economic argument for thorough upfront requirements work — not to eliminate change, but to discover changes before they compound.
Requirements Management in Practice
Managing requirements across a project’s life requires:
- Version control — requirements documents or backlog states should be versioned alongside code
- Impact analysis — every proposed change triggers assessment of affected design, test, and dependent requirements
- Change control board (CCB) — a governance body that evaluates, approves, or defers changes based on cost-benefit analysis
- Traceability maintenance — links from requirements to design and tests must be kept current; stale traceability is worse than none, because it creates false confidence
Requirements management is not bureaucracy for its own sake — it is the mechanism that prevents the gradual decay of shared understanding between stakeholders and the development team that causes so many projects to deliver the wrong thing on time.