MSE 302: Engineering Design

Ada Hurst

Estimated study time: 26 minutes

Sources and References

Primary textbook — Bella Martin & Bruce Hanington, Universal Methods of Design: 100 Ways to Research Complex Problems, Develop Innovative Ideas, and Design Effective Solutions (Rockport Publishers).

Supplementary texts — Clive Dym, Patrick Little & Elizabeth Orwin, Engineering Design: A Project-Based Introduction; Karl Ulrich & Steven Eppinger, Product Design and Development; Nigel Cross, Engineering Design Methods; Tim Brown, Change by Design; Don Norman, The Design of Everyday Things; Ezio Manzini, Design, When Everybody Designs.

Online resources — Stanford d.school open materials; MIT OpenCourseWare 2.009 “Product Engineering Processes”; IDEO design thinking toolkit.

1. What Is Engineering Design?

Engineering design is the systematic, often iterative, activity of converting a poorly understood human need into a technical artifact, system, or service that can be built, used, and maintained. It sits at the intersection of problem finding and problem solving: before anything can be optimized, the right problem has to be formulated, and that formulation is itself a creative act. Dym and Little define engineering design as “a thoughtful process for generating plans for devices, structures, systems, or products intended to achieve a specified objective while satisfying stipulated constraints.” The emphasis on both objectives and constraints distinguishes design from pure science (which seeks truth) and pure art (which seeks expression).

Two mental models dominate contemporary practice. The first is the Double Diamond, popularized by the UK Design Council: divergent exploration of the problem space (“discover”), convergence on a problem definition (“define”), divergent generation of solutions (“develop”), and convergence on a chosen concept (“deliver”). The second is design thinking, articulated by IDEO and Stanford’s d.school as empathize–define–ideate–prototype–test. Both frameworks share three commitments: user-centredness, iteration, and deliberate alternation between expansion and reduction of options.

Engineering design is rarely linear. Donald Schön described designing as a “reflective conversation with the situation” in which early sketches and prototypes talk back to the designer, revealing misunderstandings and new possibilities; he called this stance reflection-in-action, and Nigel Cross builds on it in his accounts of how designers think. The practical consequence is that a design process is less a sequence of steps and more a loop of hypothesis, test, and revision.

Finally, engineering design is a social activity. Real projects involve teams, stakeholders, regulators, and end users, each carrying partial knowledge. Good designers treat the process as information orchestration: pulling insight from users, technical domains, and colleagues, then making defensible trade-offs under uncertainty. MSE 302 introduces these habits through small exercises and a term-long need-finding project that feeds directly into the MSE 401/402 capstone sequence.

2. Need Finding and Problem Discovery

Every successful design starts with a real need, yet beginners routinely start with a solution in search of a problem. Need finding is the disciplined practice of suspending solutions long enough to understand what people actually struggle with. Tim Brown argues in Change by Design that the quality of a final product is bounded by the quality of the initial brief; a vague or solution-laden brief produces vague or misdirected work.

A useful opening question is not “what should we build?” but “who is frustrated, and why?” Martin and Hanington describe several methods for surfacing latent needs. Contextual inquiry puts the designer alongside the user in the environment where the activity happens, observing workarounds, errors, and improvisations. Shadowing and fly-on-the-wall observation capture routine behaviour without the distortions of self-report. Diary studies extend observation over days or weeks, revealing patterns that a one-hour interview cannot. Semi-structured interviews trade standardization for depth, using an anchoring script while leaving room to follow surprising threads.

Need finding rewards a particular stance: curiosity without judgment, and an assumption that users are experts in their own lives. Tom and David Kelley, in Creative Confidence, urge beginners to “fall in love with the problem, not the solution” and to collect extreme users alongside average ones because extreme users amplify needs that are invisible in the middle of the distribution.

The output of need finding is a set of raw observations, quotes, photographs, and artifacts. These are not yet insights. Synthesis — the topic of the next chapter — is what turns data into a directed problem statement. A common mistake is to short-circuit this step by declaring a “problem” after a single interview. The discipline of MSE 302’s term-long project is precisely to hold this space open long enough for real needs to emerge, so that later convergence is anchored in evidence rather than assumption.

3. Problem Analysis — Gathering and Synthesizing Information

Once raw material exists, it must be organized into something a team can act on. Synthesis is where observations become insights and insights become opportunity areas. Martin and Hanington catalogue dozens of synthesis tools; a working designer typically learns five or six well and applies them repeatedly.

Affinity diagramming is the workhorse. Observations are written on sticky notes and grouped by semantic similarity until natural clusters emerge, each cluster labeled with a theme. The act of physically moving notes forces designers to articulate why things belong together, surfacing hidden categories. Journey maps plot a user’s experience across time, annotating each stage with actions, thoughts, emotions, and pain points; they expose where frustration spikes and where a design intervention could do the most good. Personas compress interview data into one or two archetypal users with names, goals, and contexts; they give the team a shared referent when debating trade-offs.

More structured analytical methods complement these. A stakeholder map identifies every party whose interests touch the project and ranks them by influence and interest. 5 Whys drills beneath a symptom to find root causes. Fishbone (Ishikawa) diagrams organize potential causes by category — materials, methods, machines, measurement, environment, people. Competitive benchmarking catalogues how existing products handle the same problem, exposing conventions, gaps, and opportunities.
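
One of these tools, the stakeholder map's influence and interest grid, is mechanical enough to sketch. Below is a minimal Python illustration; the quadrant labels follow the common "manage closely / keep satisfied / keep informed / monitor" convention, and the stakeholders, scores, and 0.5 threshold are all hypothetical.

```python
def stakeholder_quadrant(influence, interest, threshold=0.5):
    """Classify a stakeholder on the influence/interest grid.
    Scores are normalized to [0, 1]; the 0.5 threshold is an assumption."""
    if influence >= threshold and interest >= threshold:
        return "manage closely"
    if influence >= threshold:
        return "keep satisfied"
    if interest >= threshold:
        return "keep informed"
    return "monitor"

# Hypothetical stakeholders: (influence, interest)
stakeholders = {
    "sponsor":    (0.9, 0.9),
    "regulator":  (0.8, 0.3),
    "daily user": (0.2, 0.9),
    "passer-by":  (0.1, 0.1),
}
grid = {name: stakeholder_quadrant(i, t) for name, (i, t) in stakeholders.items()}
```

The value of the exercise is less the classification itself than the argument over where each stakeholder sits.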

The goal of synthesis is not a comprehensive report but a sharpened point of view — a one-sentence reframing of the opportunity in the form “[user] needs [a way to do something] because [surprising insight].” Stanford d.school calls this the POV statement; IDEO calls it a “How Might We” question. Either way, it hands the ideation phase a target that is specific enough to constrain search and open enough to admit many solutions.

Beginners under-invest in synthesis because it feels slower than building. Experienced designers know it is the cheapest place to fix a project: a confused problem statement is expensive to repair downstream.

4. Requirements and Specifications

A sharp problem statement is still not a buildable brief. The next move is to translate user needs into testable requirements and then into quantitative specifications. Ulrich and Eppinger’s product development framework treats this as a distinct stage because ambiguity about requirements is the single largest source of rework.

The translation proceeds in layers. A user need is expressed in the user’s language: “I want to carry it easily.” A requirement rewrites the need in neutral designer-facing terms: “the device shall be portable by a single adult.” A specification attaches a measurable metric and target: “mass ≤ 2.5 kg; longest dimension ≤ 40 cm.” Each specification has a metric, a unit, an ideal value, and a marginally acceptable value. Without the marginally acceptable value, teams cannot tell when a trade-off has gone too far.
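
The specification layer can be made concrete with a small data structure. The Python sketch below is illustrative only; the `Spec` field names and the smaller-is-better acceptance rule are assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    """One engineering specification. Field names are illustrative."""
    metric: str
    unit: str
    ideal: float
    marginal: float  # marginally acceptable value

    def accepts(self, measured: float) -> bool:
        # Assumes smaller is better; a real spec set also needs
        # larger-is-better and target-is-best variants.
        return measured <= self.marginal

mass = Spec("mass", "kg", ideal=2.0, marginal=2.5)
length = Spec("longest dimension", "cm", ideal=35.0, marginal=40.0)
```

Writing specs this way makes the later verification step almost mechanical: each record already names what to measure and what counts as passing.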

A useful discipline is to write requirements in SMART form — specific, measurable, achievable, relevant, time-bounded — and to distinguish functional from non-functional requirements. Functional requirements describe what the system does; non-functional requirements describe how well it must do it (safety, reliability, usability, maintainability, cost). Safety and regulatory constraints belong in a separate constraints list because they cannot be traded away.

The House of Quality from Quality Function Deployment maps customer needs on the left, engineering characteristics across the top, and importance-weighted relationships in the cells. Its roof encodes correlations between engineering characteristics, exposing coupling that will later cause trade-offs. Even an informal version of the exercise produces two benefits: it forces every stated need to have at least one measurable engineering surrogate, and it forces every engineering parameter to justify its existence by mapping to a real need.

Good specifications are tight enough to discriminate between concepts but loose enough to avoid prescribing the solution. “The package must include a zipper” is bad; “the package must be openable one-handed by a user wearing winter gloves” is good. The former forecloses design space; the latter opens it.

5. Conceptual Design and Idea Generation

Conceptual design is where search is deliberately widened. The goal is not to find the answer but to generate many candidate answers, from which the best can later be chosen. Nigel Cross and Karl Ulrich both emphasize that the quality of a final concept correlates strongly with the breadth of the initial idea set: teams that generate twenty concepts typically outperform teams that generate five.

Brainstorming, when done well, follows rules codified by IDEO: defer judgment, encourage wild ideas, build on the ideas of others, stay focused on the topic, one conversation at a time, be visual, go for quantity. These rules exist because group ideation has a known failure mode — early criticism that shuts down risk-taking. The rules protect divergence long enough for the weird, promising ideas to surface.

Structured methods extend pure brainstorming. Morphological analysis decomposes a problem into sub-functions, lists alternatives for each, and combines them into concept candidates — an excellent way to cover solution space systematically. SCAMPER (substitute, combine, adapt, modify, put to other uses, eliminate, reverse) applies transformations to existing solutions to generate variants. Analogical reasoning borrows mechanisms from distant domains: Velcro from burdock burrs, bullet trains from kingfisher beaks. Bodystorming physically enacts the user’s situation to surface embodied constraints that a whiteboard misses.
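
Morphological analysis in particular is mechanical enough to automate: once sub-functions and their alternatives are listed, every combination of one alternative per sub-function is a concept candidate. A minimal Python sketch, using a hypothetical hand-held coffee grinder as the example:

```python
from itertools import product

# Hypothetical sub-functions and alternatives for a hand-held coffee grinder
morphology = {
    "power source":    ["hand crank", "battery", "USB-C"],
    "grind mechanism": ["conical burr", "flat burr", "blade"],
    "dose control":    ["timed", "weighed", "manual stop"],
}

# One alternative per sub-function -> one concept candidate (3 x 3 x 3 = 27 here)
concepts = [dict(zip(morphology, combo)) for combo in product(*morphology.values())]
```

The combinatorial explosion is the point: most of the 27 candidates will be quickly discarded, but a few unexpected pairings usually survive scrutiny.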

Sketching is the native language of conceptual design. Quick hand drawings carry just enough information to communicate intent while remaining cheap to discard. A common rule of thumb: if a sketch takes more than two minutes, it is too detailed for this stage. Paper prototypes, foam-core mock-ups, and storyboards serve the same role for services and interactions. The point is to externalize thought quickly so that the team can react to something concrete.

Ideation is often the most enjoyable phase of a project, and it is tempting to linger. The discipline is to produce a diverse set of viable concepts and then move on to evaluation, resisting both premature commitment and endless exploration.

6. Evaluation and Selection of Concepts

When the divergent phase closes, the team faces a harder problem: choosing. Evaluation is where requirements earn their keep, because defensible selection requires explicit criteria agreed in advance. Without prior criteria, selection collapses into politics or personal taste.

The most common tool is the Pugh concept selection matrix. Rows are evaluation criteria drawn from the requirements list; columns are candidate concepts; one concept is chosen as a baseline and each other concept is scored plus, minus, or same against the baseline on each criterion. Simple summation identifies strong and weak candidates and, more importantly, exposes which criteria drive the outcome. A second round hybridizes strong features from different concepts and reruns the matrix.
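
The arithmetic of a Pugh matrix is trivial, which is part of its appeal; the value lies in the debate over each score. A minimal Python illustration with hypothetical criteria, concepts, and scores (the baseline concept scores zero on everything by definition):

```python
# Criteria drawn from the requirements list; order matches the score lists.
criteria = ["portability", "cost", "durability", "ease of use"]

# +1 = better than baseline, -1 = worse, 0 = same. All scores hypothetical.
pugh = {
    "concept B": [+1, -1,  0, +1],
    "concept C": [ 0, +1, -1, -1],
}
totals = {name: sum(scores) for name, scores in pugh.items()}
```

Here concept B nets +1 against the baseline and concept C nets -1, but the more useful output is seeing that cost is B's weakness, which invites a hybrid in the second round.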

For finer discrimination, weighted decision matrices assign numerical importance to each criterion and compute weighted scores. The weights should be debated openly before any concept is scored; reverse-engineering weights to justify a preferred concept is a classic failure mode. Analytic Hierarchy Process formalizes weight elicitation through pairwise comparisons when stakeholders disagree.
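
A weighted matrix adds only one step to the Pugh computation: multiply each score by its criterion weight before summing. Again a hypothetical Python sketch, with weights agreed before any scoring happens:

```python
# Weights debated and fixed in advance; scores on a 1-5 scale. All hypothetical.
weights = {"portability": 0.4, "cost": 0.3, "durability": 0.3}
scores = {
    "concept B": {"portability": 4, "cost": 2, "durability": 3},
    "concept C": {"portability": 2, "cost": 5, "durability": 4},
}
weighted = {
    name: round(sum(weights[c] * s[c] for c in weights), 2)
    for name, s in scores.items()
}
```

Freezing `weights` before filling in `scores` is the procedural defence against the reverse-engineering failure mode described above.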

Beyond matrices, engineering evaluation relies on feasibility analysis — back-of-envelope calculations that check whether physics, economics, and schedule allow the concept at all. Ulrich and Eppinger recommend computing rough cost, key performance bounds, and manufacturing implications for every serious candidate. Concepts that fail feasibility should be dropped immediately regardless of matrix scores.

A subtler criterion is risk. A concept that scores slightly lower but relies on proven components may beat a higher-scoring concept that depends on unobtainium. Dym and Little encourage teams to list the top three risks per concept and estimate severity and likelihood, folding the result back into selection.

Evaluation should end with a written selection rationale that any team member could defend in a design review. The rationale makes the decision reversible: if new information arrives later, the team can revisit the specific premise that is now wrong, rather than relitigating the entire choice.

7. Design for Sustainability

Contemporary engineering design cannot ignore environmental, social, and economic sustainability. A product that works technically but harms ecosystems, communities, or long-term economics is a failed design in a fuller sense. Ezio Manzini’s Design, When Everybody Designs and McDonough and Braungart’s Cradle to Cradle argue that sustainability should be treated as a first-class design objective, not a late-stage compliance checklist.

Environmental sustainability begins with a life-cycle view. Materials are extracted, processed, manufactured, distributed, used, and disposed of, and each phase carries energy, water, and emissions. Life Cycle Assessment (LCA) quantifies these flows; even a simplified “streamlined LCA” can expose surprises, such as a product whose use phase dwarfs its manufacturing footprint or vice versa. Design levers include material selection (recycled, renewable, or benign), dematerialization (using less material for the same function), durability, repairability, and end-of-life strategies such as recycling, composting, or remanufacturing. Cradle to Cradle goes further, arguing that “waste equals food” — products should be designed so that every material ends up either in a biological or a technical nutrient cycle.

Social sustainability asks whether a design respects the dignity, safety, and agency of everyone it touches. This includes labour conditions in the supply chain, accessibility for users with disabilities, cultural appropriateness, and the avoidance of harms such as surveillance, addiction, or exclusion. Don Norman’s Design of Everyday Things reminds designers that unusable products cause real frustration and even injury; usability is therefore a social-sustainability concern, not only an ergonomic one.

Economic sustainability means the design can be produced, sold, maintained, and ultimately retired within a viable business model. A beautiful prototype that cannot be afforded at scale by its intended users is not sustainable. Total cost of ownership — purchase, energy, maintenance, disposal — should be traced across a realistic lifespan.

The practical move is to fold sustainability criteria into the requirements list from the start. Questions such as “what happens at end of life?” and “who is harmed if this scales?” should appear next to performance and cost in the decision matrix, forcing explicit trade-offs rather than accidental externalities.

8. Designing in Teams

Most engineering design is done in teams because no individual carries all the required expertise. Teams, however, are not automatically effective. A productive design team needs deliberate structure, clear communication norms, and mutual accountability.

Bruce Tuckman’s forming-storming-norming-performing model remains a useful map of team dynamics. Early meetings feel polite and tentative (forming); disagreements surface as members test roles and standards (storming); shared norms emerge (norming); and only then does the team produce at its best (performing). Skipping the storming phase by avoiding conflict merely defers it to a worse moment. MSE 302 teams are encouraged to write a team contract early, specifying meeting cadence, decision rules, communication channels, conflict-resolution procedures, and how work will be divided and reviewed.

Roles rotate rather than calcify. Typical rotating roles include facilitator, note-taker, timekeeper, and design-review lead. Rotating exposes each member to the responsibilities of coordination and prevents the accumulation of invisible labour on one person. Assigning owners for each subsystem preserves accountability: every requirement and every deliverable should have exactly one name next to it.

Research on high-performing teams (Google’s Project Aristotle among others) identifies psychological safety — the shared belief that the team is safe for interpersonal risk-taking — as the strongest predictor of effectiveness. Safety is built by small, repeated behaviours: acknowledging uncertainty, inviting dissent, thanking people who raise bad news, and treating mistakes as information rather than blame. The CDIO approach to engineering education argues that teamwork must be taught explicitly, not left to emerge from group projects.

Common team failure modes to watch for: groupthink (unanimity suppresses good critique), social loafing (effort disappears into collective output), diffusion of responsibility (nobody owns a failing task), and premature convergence (the first plausible idea is adopted to end discomfort). The countermeasures are procedural: anonymous voting, explicit devil’s advocacy, clear task ownership, and protected divergent phases in meetings.

9. Managing Design Projects

A design project is a time-bounded effort to produce something specific under constraints of scope, schedule, budget, and quality. Managing it well means making uncertainty visible and decisions traceable. Ulrich and Eppinger describe the product development process in terms of gates — reviews at which the team demonstrates that it is ready to commit more resources — and this staging logic is useful even for small academic projects.

Planning starts with a work breakdown structure: decomposing the project into tasks small enough to estimate (roughly one to three days of effort). Dependencies between tasks produce a network that can be laid out as a Gantt chart for calendar planning or a critical path diagram for identifying the sequence that determines the overall duration. The critical path is worth knowing because slippage on it delays the project; slippage off it usually does not.
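
The earliest-finish computation behind a critical path diagram is a simple forward pass over the task network. The sketch below uses hypothetical tasks and relies on them being listed in dependency order; a general implementation would topologically sort them first.

```python
# Hypothetical task network: task -> (duration in days, prerequisites).
tasks = {
    "research":   (3, []),
    "concepts":   (4, ["research"]),
    "prototype":  (5, ["concepts"]),
    "user tests": (2, ["prototype"]),
    "report":     (3, ["concepts"]),
}

# Forward pass: earliest finish of each task, assuming the dict is already
# in dependency order (a general solver would topologically sort first).
finish = {}
for name, (duration, preds) in tasks.items():
    start = max((finish[p] for p in preds), default=0)
    finish[name] = start + duration

project_duration = max(finish.values())  # set by the critical path
```

In this example the research, concepts, prototype, user-tests chain determines the 14-day duration; the report can slip by up to 4 days without delaying the project.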

Estimation is difficult because designers are optimistic. The useful habits are to estimate in ranges rather than points, to track actuals against estimates so that the team’s personal calibration improves, and to hold an explicit schedule reserve of perhaps 20% for discoveries. Weekly standups of fifteen minutes keep progress visible without becoming a tax.

Risk management is a continuous companion to scheduling. A simple risk register lists identified risks, their likelihood, severity, and a mitigation or contingency plan. Risks should be reviewed at every major milestone and updated, not filed and forgotten. Dym and Little note that technical risks (will it work?), schedule risks (will it be ready?), and integration risks (will the subsystems fit together?) deserve separate attention because their mitigations differ.
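
A risk register needs nothing more than a table and a ranking rule. A minimal Python sketch, with likelihood and severity on 1 to 5 scales and entirely illustrative entries:

```python
# Illustrative risk register entries; scales and fields are a common convention.
risks = [
    {"risk": "motor torque insufficient", "likelihood": 3, "severity": 4,
     "mitigation": "bench-test candidate motors early"},
    {"risk": "sponsor changes requirements", "likelihood": 2, "severity": 5,
     "mitigation": "monthly requirement review with sponsor"},
    {"risk": "3D printer downtime", "likelihood": 4, "severity": 2,
     "mitigation": "book backup slot at a second print lab"},
]

# Rank by exposure = likelihood x severity; review at every milestone.
for r in risks:
    r["exposure"] = r["likelihood"] * r["severity"]
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
```

The ranking is crude by design; its job is to force a conversation about the top few entries at each milestone review.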

Documentation is the memory of the project. A design notebook, dated and signed, records decisions and their rationale so that future members — including the same person three weeks later — can reconstruct why something was chosen. Version-controlled CAD files, labeled prototypes, and archived test data protect against the worst kind of project loss, which is not deletion but amnesia. For MSE 302 teams, the habit of disciplined documentation pays off enormously when the project transitions into the MSE 401/402 capstone sequence.

10. Design Communication — Drawings, Diagrams, and Reports

A design that cannot be communicated cannot be built, approved, or improved. Engineering design produces four main genres of communication — drawings, diagrams, written reports, and oral reviews — and each serves a different audience under different constraints.

Drawings range from loose concept sketches to dimensioned engineering drawings. Concept sketches prioritize speed and ambiguity; their job is to provoke discussion. Engineering drawings prioritize precision and unambiguity; their job is to instruct fabrication. Between these extremes sit working sketches, exploded views, and rendered illustrations that support design reviews. Fluency with hand sketching remains valuable because the cost of a sketch governs how many alternatives a designer will consider.

Diagrams make system structure visible. A functional block diagram shows how sub-functions connect, independent of physical embodiment. A flow diagram follows a material, signal, or energy through the system. A state diagram captures how a product responds to different inputs over time. A user journey map is itself a diagram: a temporal structure that makes experience legible. Good diagrams reward careful labelling, consistent notation, and the removal of anything that is not load-bearing for the argument being made. Edward Tufte’s dictum — “above all else show the data” — applies equally to engineering diagrams.

Reports structure persuasion. A typical design report follows executive summary, introduction, needs analysis, requirements, concept generation, concept selection, detailed design, verification, and conclusions. The executive summary is read by everyone; the rest is read selectively. Each section should answer questions that a reasonable reviewer would ask, with evidence attached. Appendices hold the raw material — interview transcripts, calculations, code — that backs the narrative without interrupting it.

Oral design reviews compress the project into twenty minutes. The key moves are to lead with the problem (not the solution), to show only evidence that changes what the audience should believe, and to invite hard questions rather than defending against them. Reviews are cheapest when the team treats them as free consulting from senior eyes rather than as judgments to survive.

Across all four genres, the underlying question is the same: what does my audience need to know to make the next decision? Communication that answers this question efficiently earns trust, and trust is the currency that buys time and resources for the rest of the project.

11. AI as a Design Partner

Artificial intelligence tools have moved from curiosity to day-to-day companion in engineering design practice. Used thoughtfully, large language models, image generators, and specialized CAD copilots can compress research time, suggest alternatives, check calculations, and draft documentation. Used carelessly, they introduce fabrication, shallow thinking, and intellectual-property and privacy risks. MSE 302 treats AI as a tool class that deserves the same critical stance as any other: know what it does well, know where it fails, and verify before you trust.

Where AI helps most is in phases that benefit from quick breadth. In need finding, an LLM can help structure interview guides, cluster notes into preliminary themes, and translate transcripts — though it cannot replace real contact with users. In problem analysis, it can summarize prior art, suggest analogous domains, and critique a draft problem statement. In ideation, it can produce a hundred candidate concepts in minutes, functioning as an always-available brainstorming partner that never gets tired and never takes offense at weird suggestions. In evaluation, it can draft Pugh matrices and surface trade-off questions the team might have missed. In communication, it can polish writing, generate diagram drafts, and rehearse review questions.

Where AI hurts most is when designers outsource the hard thinking. LLMs fabricate references, misstate physics, and echo the user’s assumptions back as insight. An AI-generated list of requirements is not a substitute for observing real users; an AI-generated concept list is not a substitute for understanding why each concept would or would not work. The failure mode is convincing-looking output that nobody on the team can defend in a review.

Good practice treats AI like a capable but inexperienced junior engineer. Every output is checked against sources, calculations, and common sense. Prompts are written to elicit reasoning rather than just answers, so that the chain of thought can be audited. Sensitive information — user identities, proprietary data, safety-critical analyses — is kept out of external tools unless the privacy model is understood. Finally, the team retains authorship and accountability: AI can draft, but humans decide and sign.

A useful heuristic: if you cannot explain, in your own words, why the AI’s suggestion is correct, you are not ready to use it. If you can, it has done its job of accelerating, not replacing, your thinking.

12. Verification, Validation, and Testing

Two questions dominate the late phases of a project: did we build the thing right? and did we build the right thing? The first is verification — confirming that the artifact meets its specifications. The second is validation — confirming that meeting those specifications actually satisfies the original user need. A design can be verified yet invalid if the specifications themselves were wrong.

Verification is usually decomposable. Each engineering specification becomes a test with a defined procedure, measurement, pass criterion, and record. “Mass ≤ 2.5 kg” becomes “weigh on calibrated scale; record to 10 g; pass if ≤ 2.5 kg.” A verification matrix tracks which specifications have been tested, by which method, with what result, and who signed off. Gaps in the matrix are the team’s remaining work. Techniques range from analysis (calculation, simulation) and inspection (visual, dimensional) to demonstration (the system performs a task in front of witnesses) and test (instrumented measurement under controlled conditions).
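
A verification matrix can live in a spreadsheet or, just as easily, in a small script. The Python sketch below uses illustrative field names and entries; the useful property is that unfilled rows become queryable work items.

```python
# Illustrative verification matrix: one row per specification.
matrix = [
    {"spec": "mass <= 2.5 kg", "method": "test",
     "result": 2.31, "passed": True, "signed": "AH"},
    {"spec": "survives 1 m drop", "method": "demonstration",
     "result": None, "passed": None, "signed": None},
]

# Rows with no pass/fail recorded are the team's remaining verification work.
untested = [row["spec"] for row in matrix if row["passed"] is None]
```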

Validation is harder because user needs are harder to measure than engineering parameters. The main tools are usability testing, field trials, A/B comparisons, and structured user interviews after exposure to a working prototype. Nielsen’s classic rule — that five users catch roughly 85% of usability issues — argues for frequent small tests rather than rare large ones. Validation should ask: did the real problem go away? Did users adopt the new behaviour without prompting? Did the design create unintended harms?

Related disciplines include Failure Mode and Effects Analysis (FMEA), which lists possible failure modes, their causes, effects, and severity, and ranks them by a risk priority number to guide preventive redesign. Design of Experiments (DOE) varies multiple parameters efficiently to learn how they affect performance. Regression testing after design changes ensures that fixes have not broken previously verified behaviour. In software-containing products, unit tests and integration tests automate verification that would otherwise depend on human vigilance.
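
The FMEA risk priority number is a direct product of three scores. A minimal Python sketch with hypothetical failure modes, each scored for severity, occurrence, and detection on the conventional 1 to 10 scales:

```python
# Hypothetical failure modes: (severity, occurrence, detection), each 1-10.
# Higher detection score = harder to detect before it reaches the user.
modes = {
    "battery overheats":   (9, 2, 4),
    "hinge fatigue crack": (6, 5, 3),
    "label fades":         (2, 6, 2),
}
rpn = {mode: s * o * d for mode, (s, o, d) in modes.items()}
worst = max(rpn, key=rpn.get)  # highest priority for preventive redesign
```

Note that raw RPN ranking can hide a severe but rare failure behind a frequent trivial one, which is why many FMEA variants also flag any mode with severity at the top of the scale regardless of its RPN.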

The underlying attitude matters as much as the tools. Good designers actively try to break their own designs, because every failure caught in the lab is a failure that does not happen in the field. The shift from “I hope this works” to “let me show that this works, and here is how I tried to make it fail” is the hallmark of engineering maturity.

13. From Design Class to Capstone — Scaling Up

MSE 302 ends where MSE 401/402 begins: with a student team that has rehearsed the complete design cycle on a small scale and is ready to take on a larger, open-ended problem. Scaling up is not only a matter of bigger budget and longer schedule. It is a shift in kind, and three differences are worth anticipating.

First, uncertainty grows disproportionately. A term-long project has a horizon short enough that early assumptions usually survive. A two-term capstone is long enough that requirements drift, sponsors change minds, and unexpected technical obstacles appear. The habits introduced in MSE 302 — explicit requirements, traceable decisions, risk registers, schedule reserves, iterative prototyping — exist precisely to make that uncertainty manageable. Teams that kept a clean design notebook in MSE 302 find capstone easier because they already know what to record.

Second, stakeholder landscapes become richer. A capstone sponsor is a real organization with its own politics, priorities, and preferred vocabularies. Faculty advisors bring technical standards. End users may differ from customers who may differ from purchasers. The negotiation skills rehearsed in a team of four scale up to the negotiation skills needed across a team, a sponsor, and an advisor. The CDIO framework treats these “professional skills” — communication, ethics, stakeholder management — as first-class learning outcomes, not extras.

Third, integration becomes the dominant risk. Small projects can usually treat subsystems independently; large projects fail at interfaces. A mechanical assembly that works in isolation may fight the embedded control loop; a beautiful user interface may not survive the data model underneath. Capstone teams who learned early to write interface specifications, to integrate often, and to prototype the whole system before polishing any part, arrive at their final presentation with working demonstrations rather than apologies.

The through-line from this course to capstone, and from capstone to professional practice, is the same set of habits: start from real needs, formulate problems explicitly, generate widely before converging, make decisions on evidence and record them, test relentlessly, communicate clearly, and treat teammates and users with respect. The methods collected in Martin and Hanington’s Universal Methods of Design and the frameworks in Dym, Cross, and Ulrich are not a checklist to complete but a toolkit to reach into as the situation demands. The designer’s real skill is knowing which tool the next decision calls for — and engineering design courses exist to let that judgment be built cheaply, on small problems, before it has to be deployed on large ones.
